Merge 4.8-rc6 into staging-next

We need the IIO changes in here for future patches to build on.

Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Greg Kroah-Hartman 2016-09-12 09:18:04 +02:00
commit 8473054e4d
171 changed files with 1051 additions and 667 deletions


@ -88,6 +88,7 @@ Kay Sievers <kay.sievers@vrfy.org>
Kenneth W Chen <kenneth.w.chen@intel.com> Kenneth W Chen <kenneth.w.chen@intel.com>
Konstantin Khlebnikov <koct9i@gmail.com> <k.khlebnikov@samsung.com> Konstantin Khlebnikov <koct9i@gmail.com> <k.khlebnikov@samsung.com>
Koushik <raghavendra.koushik@neterion.com> Koushik <raghavendra.koushik@neterion.com>
Krzysztof Kozlowski <krzk@kernel.org> <k.kozlowski@samsung.com>
Krzysztof Kozlowski <krzk@kernel.org> <k.kozlowski.k@gmail.com> Krzysztof Kozlowski <krzk@kernel.org> <k.kozlowski.k@gmail.com>
Kuninori Morimoto <kuninori.morimoto.gx@renesas.com> Kuninori Morimoto <kuninori.morimoto.gx@renesas.com>
Leonid I Ananiev <leonid.i.ananiev@intel.com> Leonid I Ananiev <leonid.i.ananiev@intel.com>


@ -18,13 +18,17 @@ and config2 fields of the perf_event_attr structure. The "events"
directory provides configuration templates for all documented directory provides configuration templates for all documented
events, that can be used with perf tool. For example "xp_valid_flit" events, that can be used with perf tool. For example "xp_valid_flit"
is an equivalent of "type=0x8,event=0x4". Other parameters must be is an equivalent of "type=0x8,event=0x4". Other parameters must be
explicitly specified. For events originating from device, "node" explicitly specified.
defines its index. All crosspoint events require "xp" (index),
"port" (device port number) and "vc" (virtual channel ID) and
"dir" (direction). Watchpoints (special "event" value 0xfe) also
require comparator values ("cmp_l" and "cmp_h") and "mask", being
index of the comparator mask.
For events originating from a device, "node" defines its index.
Crosspoint PMU events require "xp" (index), "bus" (bus number)
and "vc" (virtual channel ID).
Crosspoint watchpoint-based events (special "event" value 0xfe)
require "xp" and "vc" as above plus "port" (device port index),
"dir" (transmit/receive direction), comparator values ("cmp_l"
and "cmp_h") and "mask", the index of the comparator mask.
Masks are defined separately from the event description Masks are defined separately from the event description
(due to limited number of the config values) in the "cmp_mask" (due to limited number of the config values) in the "cmp_mask"
directory, with first 8 configurable by user and additional directory, with first 8 configurable by user and additional
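As a rough illustration (not part of this merge), the C fragment below packs
a crosspoint event into perf_event_attr.config by hand, using the config bit
positions from the PMU format attributes added later in this diff (xp: 0-7,
type: 8-15, event: 16-23, bus: 24-25, vc: 26-28) and the documentation's own
"type=0x8" crosspoint example. The helper name is invented, and the dynamic
PMU type would have to be read from the CCN PMU's sysfs "type" attribute
under /sys/bus/event_source/devices/ at run time.

#include <linux/perf_event.h>
#include <stdint.h>
#include <string.h>

/* Sketch only: fill a perf_event_attr for a CCN crosspoint ("xp") event. */
static void ccn_xp_pmu_event(struct perf_event_attr *attr, uint32_t pmu_type,
                             uint64_t xp, uint64_t event, uint64_t bus,
                             uint64_t vc)
{
        memset(attr, 0, sizeof(*attr));
        attr->size   = sizeof(*attr);
        attr->type   = pmu_type;                /* dynamic PMU type from sysfs */
        attr->config = (xp & 0xff)              /* "xp",    config:0-7   */
                     | (0x08ULL << 8)           /* "type",  0x8 = crosspoint */
                     | ((event & 0xff) << 16)   /* "event", config:16-23 */
                     | ((bus & 0x3) << 24)      /* "bus",   config:24-25 */
                     | ((vc & 0x7) << 26);      /* "vc",    config:26-28 */
}

The filled attribute could then be handed to perf_event_open(2); this mirrors
what the perf tool builds internally from the sysfs event strings described
above.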


@ -103,7 +103,7 @@ Config Main Menu
Power management options (ACPI, APM) ---> Power management options (ACPI, APM) --->
CPU Frequency scaling ---> CPU Frequency scaling --->
[*] CPU Frequency scaling [*] CPU Frequency scaling
<*> CPU frequency translation statistics [*] CPU frequency translation statistics
[*] CPU frequency translation statistics details [*] CPU frequency translation statistics details


@ -145,6 +145,11 @@ If you want to add slave support to the bus driver:
* Catch the slave interrupts and send appropriate i2c_slave_events to the backend. * Catch the slave interrupts and send appropriate i2c_slave_events to the backend.
Note that most hardware supports being master _and_ slave on the same bus. So,
if you extend a bus driver, please make sure that the driver supports that as
well. In almost all cases, slave support does not need to disable the master
functionality.
Check the i2c-rcar driver as an example. Check the i2c-rcar driver as an example.
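As a rough, hypothetical sketch of the "catch the slave interrupts and send
appropriate i2c_slave_events" step (not taken from any real driver), the
fragment below shows the shape of such an interrupt path. The FOO_* status
bits and the foo_i2c structure are invented for illustration; only the
i2c_slave_event() calls follow the backend API.

#include <linux/bitops.h>
#include <linux/i2c.h>

/* Invented controller status bits, for illustration only. */
#define FOO_STAT_WRITE_RECEIVED	BIT(0)	/* master wrote a byte to us */
#define FOO_STAT_READ_REQUESTED	BIT(1)	/* master wants to read a byte */
#define FOO_STAT_STOP		BIT(2)	/* STOP condition seen on the bus */

struct foo_i2c {
	struct i2c_client *slave;	/* registered via the reg_slave() callback */
};

/* Called from the bus driver's IRQ handler with the decoded status word. */
static void foo_i2c_slave_events(struct foo_i2c *priv, u32 status, u8 *byte)
{
	if (status & FOO_STAT_WRITE_RECEIVED)
		/* hand the byte received from the master to the backend */
		i2c_slave_event(priv->slave, I2C_SLAVE_WRITE_RECEIVED, byte);

	if (status & FOO_STAT_READ_REQUESTED)
		/* the backend fills *byte with the data to put on the wire */
		i2c_slave_event(priv->slave, I2C_SLAVE_READ_REQUESTED, byte);

	if (status & FOO_STAT_STOP)
		i2c_slave_event(priv->slave, I2C_SLAVE_STOP, byte);
}

Whether master and slave transfers can run concurrently is hardware specific,
which is why the paragraph above asks bus driver authors to keep the master
path working when slave support is added.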


@ -1624,7 +1624,7 @@ N: rockchip
ARM/SAMSUNG EXYNOS ARM ARCHITECTURES ARM/SAMSUNG EXYNOS ARM ARCHITECTURES
M: Kukjin Kim <kgene@kernel.org> M: Kukjin Kim <kgene@kernel.org>
M: Krzysztof Kozlowski <k.kozlowski@samsung.com> M: Krzysztof Kozlowski <krzk@kernel.org>
L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers) L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
L: linux-samsung-soc@vger.kernel.org (moderated for non-subscribers) L: linux-samsung-soc@vger.kernel.org (moderated for non-subscribers)
S: Maintained S: Maintained
@ -1644,7 +1644,6 @@ F: drivers/*/*s3c64xx*
F: drivers/*/*s5pv210* F: drivers/*/*s5pv210*
F: drivers/memory/samsung/* F: drivers/memory/samsung/*
F: drivers/soc/samsung/* F: drivers/soc/samsung/*
F: drivers/spi/spi-s3c*
F: Documentation/arm/Samsung/ F: Documentation/arm/Samsung/
F: Documentation/devicetree/bindings/arm/samsung/ F: Documentation/devicetree/bindings/arm/samsung/
F: Documentation/devicetree/bindings/sram/samsung-sram.txt F: Documentation/devicetree/bindings/sram/samsung-sram.txt
@ -1832,6 +1831,7 @@ T: git git://git.kernel.org/pub/scm/linux/kernel/git/linusw/linux-stericsson.git
ARM/UNIPHIER ARCHITECTURE ARM/UNIPHIER ARCHITECTURE
M: Masahiro Yamada <yamada.masahiro@socionext.com> M: Masahiro Yamada <yamada.masahiro@socionext.com>
L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers) L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
T: git git://git.kernel.org/pub/scm/linux/kernel/git/masahiroy/linux-uniphier.git
S: Maintained S: Maintained
F: arch/arm/boot/dts/uniphier* F: arch/arm/boot/dts/uniphier*
F: arch/arm/include/asm/hardware/cache-uniphier.h F: arch/arm/include/asm/hardware/cache-uniphier.h
@ -7472,7 +7472,8 @@ F: Documentation/devicetree/bindings/sound/max9860.txt
F: sound/soc/codecs/max9860.* F: sound/soc/codecs/max9860.*
MAXIM MUIC CHARGER DRIVERS FOR EXYNOS BASED BOARDS MAXIM MUIC CHARGER DRIVERS FOR EXYNOS BASED BOARDS
M: Krzysztof Kozlowski <k.kozlowski@samsung.com> M: Krzysztof Kozlowski <krzk@kernel.org>
M: Bartlomiej Zolnierkiewicz <b.zolnierkie@samsung.com>
L: linux-pm@vger.kernel.org L: linux-pm@vger.kernel.org
S: Supported S: Supported
F: drivers/power/max14577_charger.c F: drivers/power/max14577_charger.c
@ -7488,7 +7489,8 @@ F: include/dt-bindings/*/*max77802.h
MAXIM PMIC AND MUIC DRIVERS FOR EXYNOS BASED BOARDS MAXIM PMIC AND MUIC DRIVERS FOR EXYNOS BASED BOARDS
M: Chanwoo Choi <cw00.choi@samsung.com> M: Chanwoo Choi <cw00.choi@samsung.com>
M: Krzysztof Kozlowski <k.kozlowski@samsung.com> M: Krzysztof Kozlowski <krzk@kernel.org>
M: Bartlomiej Zolnierkiewicz <b.zolnierkie@samsung.com>
L: linux-kernel@vger.kernel.org L: linux-kernel@vger.kernel.org
S: Supported S: Supported
F: drivers/*/max14577*.c F: drivers/*/max14577*.c
@ -9260,7 +9262,7 @@ F: drivers/pinctrl/sh-pfc/
PIN CONTROLLER - SAMSUNG PIN CONTROLLER - SAMSUNG
M: Tomasz Figa <tomasz.figa@gmail.com> M: Tomasz Figa <tomasz.figa@gmail.com>
M: Krzysztof Kozlowski <k.kozlowski@samsung.com> M: Krzysztof Kozlowski <krzk@kernel.org>
M: Sylwester Nawrocki <s.nawrocki@samsung.com> M: Sylwester Nawrocki <s.nawrocki@samsung.com>
L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers) L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
L: linux-samsung-soc@vger.kernel.org (moderated for non-subscribers) L: linux-samsung-soc@vger.kernel.org (moderated for non-subscribers)
@ -10193,7 +10195,7 @@ S: Maintained
F: drivers/platform/x86/samsung-laptop.c F: drivers/platform/x86/samsung-laptop.c
SAMSUNG AUDIO (ASoC) DRIVERS SAMSUNG AUDIO (ASoC) DRIVERS
M: Krzysztof Kozlowski <k.kozlowski@samsung.com> M: Krzysztof Kozlowski <krzk@kernel.org>
M: Sangbeom Kim <sbkim73@samsung.com> M: Sangbeom Kim <sbkim73@samsung.com>
M: Sylwester Nawrocki <s.nawrocki@samsung.com> M: Sylwester Nawrocki <s.nawrocki@samsung.com>
L: alsa-devel@alsa-project.org (moderated for non-subscribers) L: alsa-devel@alsa-project.org (moderated for non-subscribers)
@ -10208,7 +10210,8 @@ F: drivers/video/fbdev/s3c-fb.c
SAMSUNG MULTIFUNCTION PMIC DEVICE DRIVERS SAMSUNG MULTIFUNCTION PMIC DEVICE DRIVERS
M: Sangbeom Kim <sbkim73@samsung.com> M: Sangbeom Kim <sbkim73@samsung.com>
M: Krzysztof Kozlowski <k.kozlowski@samsung.com> M: Krzysztof Kozlowski <krzk@kernel.org>
M: Bartlomiej Zolnierkiewicz <b.zolnierkie@samsung.com>
L: linux-kernel@vger.kernel.org L: linux-kernel@vger.kernel.org
L: linux-samsung-soc@vger.kernel.org L: linux-samsung-soc@vger.kernel.org
S: Supported S: Supported
@ -10267,6 +10270,17 @@ S: Supported
L: linux-samsung-soc@vger.kernel.org (moderated for non-subscribers) L: linux-samsung-soc@vger.kernel.org (moderated for non-subscribers)
F: drivers/clk/samsung/ F: drivers/clk/samsung/
SAMSUNG SPI DRIVERS
M: Kukjin Kim <kgene@kernel.org>
M: Krzysztof Kozlowski <krzk@kernel.org>
M: Andi Shyti <andi.shyti@samsung.com>
L: linux-spi@vger.kernel.org
L: linux-samsung-soc@vger.kernel.org (moderated for non-subscribers)
S: Maintained
F: Documentation/devicetree/bindings/spi/spi-samsung.txt
F: drivers/spi/spi-s3c*
F: include/linux/platform_data/spi-s3c64xx.h
SAMSUNG SXGBE DRIVERS SAMSUNG SXGBE DRIVERS
M: Byungho An <bh74.an@samsung.com> M: Byungho An <bh74.an@samsung.com>
M: Girish K S <ks.giri@samsung.com> M: Girish K S <ks.giri@samsung.com>


@ -1,7 +1,7 @@
VERSION = 4 VERSION = 4
PATCHLEVEL = 8 PATCHLEVEL = 8
SUBLEVEL = 0 SUBLEVEL = 0
EXTRAVERSION = -rc5 EXTRAVERSION = -rc6
NAME = Psychotic Stoned Sheep NAME = Psychotic Stoned Sheep
# *DOCUMENTATION* # *DOCUMENTATION*


@ -336,17 +336,6 @@ config HAVE_ARCH_SECCOMP_FILTER
results in the system call being skipped immediately. results in the system call being skipped immediately.
- seccomp syscall wired up - seccomp syscall wired up
For best performance, an arch should use seccomp_phase1 and
seccomp_phase2 directly. It should call seccomp_phase1 for all
syscalls if TIF_SECCOMP is set, but seccomp_phase1 does not
need to be called from a ptrace-safe context. It must then
call seccomp_phase2 if seccomp_phase1 returns anything other
than SECCOMP_PHASE1_OK or SECCOMP_PHASE1_SKIP.
As an additional optimization, an arch may provide seccomp_data
directly to seccomp_phase1; this avoids multiple calls
to the syscall_xyz helpers for every syscall.
config SECCOMP_FILTER config SECCOMP_FILTER
def_bool y def_bool y
depends on HAVE_ARCH_SECCOMP_FILTER && SECCOMP && NET depends on HAVE_ARCH_SECCOMP_FILTER && SECCOMP && NET


@ -226,7 +226,7 @@ nand@0,0 {
#address-cells = <1>; #address-cells = <1>;
#size-cells = <1>; #size-cells = <1>;
elm_id = <&elm>; ti,elm-id = <&elm>;
}; };
}; };


@ -161,7 +161,7 @@ nand@0,0 {
#address-cells = <1>; #address-cells = <1>;
#size-cells = <1>; #size-cells = <1>;
elm_id = <&elm>; ti,elm-id = <&elm>;
/* MTD partition table */ /* MTD partition table */
partition@0 { partition@0 {


@ -197,7 +197,7 @@ nandflash: nand@0,0 {
gpmc,wr-access-ns = <30>; gpmc,wr-access-ns = <30>;
gpmc,wr-data-mux-bus-ns = <0>; gpmc,wr-data-mux-bus-ns = <0>;
elm_id = <&elm>; ti,elm-id = <&elm>;
#address-cells = <1>; #address-cells = <1>;
#size-cells = <1>; #size-cells = <1>;


@ -390,12 +390,12 @@ switch@0 {
port@0 { port@0 {
reg = <0>; reg = <0>;
label = "lan1"; label = "lan5";
}; };
port@1 { port@1 {
reg = <1>; reg = <1>;
label = "lan2"; label = "lan4";
}; };
port@2 { port@2 {
@ -405,12 +405,12 @@ port@2 {
port@3 { port@3 {
reg = <3>; reg = <3>;
label = "lan4"; label = "lan2";
}; };
port@4 { port@4 {
reg = <4>; reg = <4>;
label = "lan5"; label = "lan1";
}; };
port@5 { port@5 {


@ -447,14 +447,11 @@ &mmc_0 {
samsung,dw-mshc-ciu-div = <3>; samsung,dw-mshc-ciu-div = <3>;
samsung,dw-mshc-sdr-timing = <0 4>; samsung,dw-mshc-sdr-timing = <0 4>;
samsung,dw-mshc-ddr-timing = <0 2>; samsung,dw-mshc-ddr-timing = <0 2>;
samsung,dw-mshc-hs400-timing = <0 2>;
samsung,read-strobe-delay = <90>;
pinctrl-names = "default"; pinctrl-names = "default";
pinctrl-0 = <&sd0_clk &sd0_cmd &sd0_bus1 &sd0_bus4 &sd0_bus8 &sd0_cd>; pinctrl-0 = <&sd0_clk &sd0_cmd &sd0_bus1 &sd0_bus4 &sd0_bus8 &sd0_cd>;
bus-width = <8>; bus-width = <8>;
cap-mmc-highspeed; cap-mmc-highspeed;
mmc-hs200-1_8v; mmc-hs200-1_8v;
mmc-hs400-1_8v;
vmmc-supply = <&ldo20_reg>; vmmc-supply = <&ldo20_reg>;
vqmmc-supply = <&ldo11_reg>; vqmmc-supply = <&ldo11_reg>;
}; };


@ -243,7 +243,7 @@ spdif: spdif@02004000 {
clocks = <&clks IMX6QDL_CLK_SPDIF_GCLK>, <&clks IMX6QDL_CLK_OSC>, clocks = <&clks IMX6QDL_CLK_SPDIF_GCLK>, <&clks IMX6QDL_CLK_OSC>,
<&clks IMX6QDL_CLK_SPDIF>, <&clks IMX6QDL_CLK_ASRC>, <&clks IMX6QDL_CLK_SPDIF>, <&clks IMX6QDL_CLK_ASRC>,
<&clks IMX6QDL_CLK_DUMMY>, <&clks IMX6QDL_CLK_ESAI_EXTAL>, <&clks IMX6QDL_CLK_DUMMY>, <&clks IMX6QDL_CLK_ESAI_EXTAL>,
<&clks IMX6QDL_CLK_IPG>, <&clks IMX6QDL_CLK_MLB>, <&clks IMX6QDL_CLK_IPG>, <&clks IMX6QDL_CLK_DUMMY>,
<&clks IMX6QDL_CLK_DUMMY>, <&clks IMX6QDL_CLK_SPBA>; <&clks IMX6QDL_CLK_DUMMY>, <&clks IMX6QDL_CLK_SPBA>;
clock-names = "core", "rxtx0", clock-names = "core", "rxtx0",
"rxtx1", "rxtx2", "rxtx1", "rxtx2",


@ -64,7 +64,7 @@ &usdhc4 {
cd-gpios = <&gpio7 11 GPIO_ACTIVE_LOW>; cd-gpios = <&gpio7 11 GPIO_ACTIVE_LOW>;
no-1-8-v; no-1-8-v;
keep-power-in-suspend; keep-power-in-suspend;
enable-sdio-wakup; wakeup-source;
status = "okay"; status = "okay";
}; };


@ -131,7 +131,7 @@ tsc2046@0 {
ti,y-min = /bits/ 16 <0>; ti,y-min = /bits/ 16 <0>;
ti,y-max = /bits/ 16 <0>; ti,y-max = /bits/ 16 <0>;
ti,pressure-max = /bits/ 16 <0>; ti,pressure-max = /bits/ 16 <0>;
ti,x-plat-ohms = /bits/ 16 <400>; ti,x-plate-ohms = /bits/ 16 <400>;
wakeup-source; wakeup-source;
}; };
}; };


@ -113,7 +113,7 @@ partition@0 {
partition@e0000 { partition@e0000 {
label = "u-boot environment"; label = "u-boot environment";
reg = <0xe0000 0x100000>; reg = <0xe0000 0x20000>;
}; };
partition@100000 { partition@100000 {


@ -116,6 +116,10 @@ partition@600000 {
}; };
}; };
&pciec {
status = "okay";
};
&pcie0 { &pcie0 {
status = "okay"; status = "okay";
}; };


@ -35,10 +35,15 @@ &gpmc {
ranges = <0 0 0x00000000 0x1000000>; /* CS0: 16MB for NAND */ ranges = <0 0 0x00000000 0x1000000>; /* CS0: 16MB for NAND */
nand@0,0 { nand@0,0 {
linux,mtd-name = "micron,mt29f4g16abbda3w"; compatible = "ti,omap2-nand";
reg = <0 0 4>; /* CS0, offset 0, IO size 4 */ reg = <0 0 4>; /* CS0, offset 0, IO size 4 */
interrupt-parent = <&gpmc>;
interrupts = <0 IRQ_TYPE_NONE>, /* fifoevent */
<1 IRQ_TYPE_NONE>; /* termcount */
linux,mtd-name = "micron,mt29f4g16abbda3w";
nand-bus-width = <16>; nand-bus-width = <16>;
ti,nand-ecc-opt = "bch8"; ti,nand-ecc-opt = "bch8";
rb-gpios = <&gpmc 0 GPIO_ACTIVE_HIGH>; /* gpmc_wait0 */
gpmc,sync-clk-ps = <0>; gpmc,sync-clk-ps = <0>;
gpmc,cs-on-ns = <0>; gpmc,cs-on-ns = <0>;
gpmc,cs-rd-off-ns = <44>; gpmc,cs-rd-off-ns = <44>;
@ -54,10 +59,6 @@ nand@0,0 {
gpmc,wr-access-ns = <40>; gpmc,wr-access-ns = <40>;
gpmc,wr-data-mux-bus-ns = <0>; gpmc,wr-data-mux-bus-ns = <0>;
gpmc,device-width = <2>; gpmc,device-width = <2>;
gpmc,page-burst-access-ns = <5>;
gpmc,cycle2cycle-delay-ns = <50>;
#address-cells = <1>; #address-cells = <1>;
#size-cells = <1>; #size-cells = <1>;


@ -46,6 +46,7 @@ nand@0,0 {
linux,mtd-name = "micron,mt29f4g16abbda3w"; linux,mtd-name = "micron,mt29f4g16abbda3w";
nand-bus-width = <16>; nand-bus-width = <16>;
ti,nand-ecc-opt = "bch8"; ti,nand-ecc-opt = "bch8";
rb-gpios = <&gpmc 0 GPIO_ACTIVE_HIGH>; /* gpmc_wait0 */
gpmc,sync-clk-ps = <0>; gpmc,sync-clk-ps = <0>;
gpmc,cs-on-ns = <0>; gpmc,cs-on-ns = <0>;
gpmc,cs-rd-off-ns = <44>; gpmc,cs-rd-off-ns = <44>;


@ -223,7 +223,9 @@ &mcbsp2 {
}; };
&gpmc { &gpmc {
ranges = <0 0 0x00000000 0x20000000>; ranges = <0 0 0x30000000 0x1000000>, /* CS0 */
<4 0 0x2b000000 0x1000000>, /* CS4 */
<5 0 0x2c000000 0x1000000>; /* CS5 */
nand@0,0 { nand@0,0 {
compatible = "ti,omap2-nand"; compatible = "ti,omap2-nand";


@ -55,8 +55,6 @@ button1@14 {
#include "omap-gpmc-smsc9221.dtsi" #include "omap-gpmc-smsc9221.dtsi"
&gpmc { &gpmc {
ranges = <5 0 0x2c000000 0x1000000>; /* CS5 */
ethernet@gpmc { ethernet@gpmc {
reg = <5 0 0xff>; reg = <5 0 0xff>;
interrupt-parent = <&gpio6>; interrupt-parent = <&gpio6>;


@ -27,8 +27,6 @@ heartbeat {
#include "omap-gpmc-smsc9221.dtsi" #include "omap-gpmc-smsc9221.dtsi"
&gpmc { &gpmc {
ranges = <5 0 0x2c000000 0x1000000>; /* CS5 */
ethernet@gpmc { ethernet@gpmc {
reg = <5 0 0xff>; reg = <5 0 0xff>;
interrupt-parent = <&gpio6>; interrupt-parent = <&gpio6>;


@ -15,9 +15,6 @@
#include "omap-gpmc-smsc9221.dtsi" #include "omap-gpmc-smsc9221.dtsi"
&gpmc { &gpmc {
ranges = <4 0 0x2b000000 0x1000000>, /* CS4 */
<5 0 0x2c000000 0x1000000>; /* CS5 */
smsc1: ethernet@gpmc { smsc1: ethernet@gpmc {
reg = <5 0 0xff>; reg = <5 0 0xff>;
interrupt-parent = <&gpio6>; interrupt-parent = <&gpio6>;


@ -84,7 +84,7 @@ map0 {
trips { trips {
cpu_alert0: cpu_alert0 { cpu_alert0: cpu_alert0 {
/* milliCelsius */ /* milliCelsius */
temperature = <850000>; temperature = <85000>;
hysteresis = <2000>; hysteresis = <2000>;
type = "passive"; type = "passive";
}; };


@ -897,7 +897,7 @@ ldo2 {
palmas: tps65913@58 { palmas: tps65913@58 {
compatible = "ti,palmas"; compatible = "ti,palmas";
reg = <0x58>; reg = <0x58>;
interrupts = <0 86 IRQ_TYPE_LEVEL_LOW>; interrupts = <0 86 IRQ_TYPE_LEVEL_HIGH>;
#interrupt-cells = <2>; #interrupt-cells = <2>;
interrupt-controller; interrupt-controller;


@ -802,7 +802,7 @@ regulator@43 {
palmas: pmic@58 { palmas: pmic@58 {
compatible = "ti,palmas"; compatible = "ti,palmas";
reg = <0x58>; reg = <0x58>;
interrupts = <GIC_SPI 86 IRQ_TYPE_LEVEL_LOW>; interrupts = <GIC_SPI 86 IRQ_TYPE_LEVEL_HIGH>;
#interrupt-cells = <2>; #interrupt-cells = <2>;
interrupt-controller; interrupt-controller;


@ -63,7 +63,7 @@ i2c@7000d000 {
palmas: pmic@58 { palmas: pmic@58 {
compatible = "ti,palmas"; compatible = "ti,palmas";
reg = <0x58>; reg = <0x58>;
interrupts = <GIC_SPI 86 IRQ_TYPE_LEVEL_LOW>; interrupts = <GIC_SPI 86 IRQ_TYPE_LEVEL_HIGH>;
#interrupt-cells = <2>; #interrupt-cells = <2>;
interrupt-controller; interrupt-controller;


@ -1382,7 +1382,7 @@ dsi_b {
* Pin 41: BR_UART1_TXD * Pin 41: BR_UART1_TXD
* Pin 44: BR_UART1_RXD * Pin 44: BR_UART1_RXD
*/ */
serial@0,70006000 { serial@70006000 {
compatible = "nvidia,tegra124-hsuart", "nvidia,tegra30-hsuart"; compatible = "nvidia,tegra124-hsuart", "nvidia,tegra30-hsuart";
status = "okay"; status = "okay";
}; };
@ -1394,7 +1394,7 @@ serial@0,70006000 {
* Pin 71: UART2_CTS_L * Pin 71: UART2_CTS_L
* Pin 74: UART2_RTS_L * Pin 74: UART2_RTS_L
*/ */
serial@0,70006040 { serial@70006040 {
compatible = "nvidia,tegra124-hsuart", "nvidia,tegra30-hsuart"; compatible = "nvidia,tegra124-hsuart", "nvidia,tegra30-hsuart";
status = "okay"; status = "okay";
}; };


@ -142,6 +142,19 @@ ARM_BE8(orr r7, r7, #(1 << 25)) @ HSCTLR.EE
and r7, #0x1f @ Preserve HPMN and r7, #0x1f @ Preserve HPMN
mcr p15, 4, r7, c1, c1, 1 @ HDCR mcr p15, 4, r7, c1, c1, 1 @ HDCR
@ Make sure NS-SVC is initialised appropriately
mrc p15, 0, r7, c1, c0, 0 @ SCTLR
orr r7, #(1 << 5) @ CP15 barriers enabled
bic r7, #(3 << 7) @ Clear SED/ITD for v8 (RES0 for v7)
bic r7, #(3 << 19) @ WXN and UWXN disabled
mcr p15, 0, r7, c1, c0, 0 @ SCTLR
mrc p15, 0, r7, c0, c0, 0 @ MIDR
mcr p15, 4, r7, c0, c0, 0 @ VPIDR
mrc p15, 0, r7, c0, c0, 5 @ MPIDR
mcr p15, 4, r7, c0, c0, 5 @ VMPIDR
#if !defined(ZIMAGE) && defined(CONFIG_ARM_ARCH_TIMER) #if !defined(ZIMAGE) && defined(CONFIG_ARM_ARCH_TIMER)
@ make CNTP_* and CNTPCT accessible from PL1 @ make CNTP_* and CNTPCT accessible from PL1
mrc p15, 0, r7, c0, c1, 1 @ ID_PFR1 mrc p15, 0, r7, c0, c1, 1 @ ID_PFR1


@ -64,6 +64,7 @@ static void __init imx6ul_init_machine(void)
if (parent == NULL) if (parent == NULL)
pr_warn("failed to initialize soc device\n"); pr_warn("failed to initialize soc device\n");
of_platform_default_populate(NULL, NULL, parent);
imx6ul_enet_init(); imx6ul_enet_init();
imx_anatop_init(); imx_anatop_init();
imx6ul_pm_init(); imx6ul_pm_init();


@ -295,7 +295,7 @@ int imx6_set_lpm(enum mxc_cpu_pwr_mode mode)
val &= ~BM_CLPCR_SBYOS; val &= ~BM_CLPCR_SBYOS;
if (cpu_is_imx6sl()) if (cpu_is_imx6sl())
val |= BM_CLPCR_BYPASS_PMIC_READY; val |= BM_CLPCR_BYPASS_PMIC_READY;
if (cpu_is_imx6sl() || cpu_is_imx6sx()) if (cpu_is_imx6sl() || cpu_is_imx6sx() || cpu_is_imx6ul())
val |= BM_CLPCR_BYP_MMDC_CH0_LPM_HS; val |= BM_CLPCR_BYP_MMDC_CH0_LPM_HS;
else else
val |= BM_CLPCR_BYP_MMDC_CH1_LPM_HS; val |= BM_CLPCR_BYP_MMDC_CH1_LPM_HS;
@ -310,7 +310,7 @@ int imx6_set_lpm(enum mxc_cpu_pwr_mode mode)
val |= 0x3 << BP_CLPCR_STBY_COUNT; val |= 0x3 << BP_CLPCR_STBY_COUNT;
val |= BM_CLPCR_VSTBY; val |= BM_CLPCR_VSTBY;
val |= BM_CLPCR_SBYOS; val |= BM_CLPCR_SBYOS;
if (cpu_is_imx6sl()) if (cpu_is_imx6sl() || cpu_is_imx6sx())
val |= BM_CLPCR_BYPASS_PMIC_READY; val |= BM_CLPCR_BYPASS_PMIC_READY;
if (cpu_is_imx6sl() || cpu_is_imx6sx() || cpu_is_imx6ul()) if (cpu_is_imx6sl() || cpu_is_imx6sx() || cpu_is_imx6ul())
val |= BM_CLPCR_BYP_MMDC_CH0_LPM_HS; val |= BM_CLPCR_BYP_MMDC_CH0_LPM_HS;


@ -220,9 +220,6 @@ static int am33xx_cm_wait_module_ready(u8 part, s16 inst, u16 clkctrl_offs,
{ {
int i = 0; int i = 0;
if (!clkctrl_offs)
return 0;
omap_test_timeout(_is_module_ready(inst, clkctrl_offs), omap_test_timeout(_is_module_ready(inst, clkctrl_offs),
MAX_MODULE_READY_TIME, i); MAX_MODULE_READY_TIME, i);
@ -246,9 +243,6 @@ static int am33xx_cm_wait_module_idle(u8 part, s16 inst, u16 clkctrl_offs,
{ {
int i = 0; int i = 0;
if (!clkctrl_offs)
return 0;
omap_test_timeout((_clkctrl_idlest(inst, clkctrl_offs) == omap_test_timeout((_clkctrl_idlest(inst, clkctrl_offs) ==
CLKCTRL_IDLEST_DISABLED), CLKCTRL_IDLEST_DISABLED),
MAX_MODULE_READY_TIME, i); MAX_MODULE_READY_TIME, i);


@ -278,9 +278,6 @@ static int omap4_cminst_wait_module_ready(u8 part, s16 inst, u16 clkctrl_offs,
{ {
int i = 0; int i = 0;
if (!clkctrl_offs)
return 0;
omap_test_timeout(_is_module_ready(part, inst, clkctrl_offs), omap_test_timeout(_is_module_ready(part, inst, clkctrl_offs),
MAX_MODULE_READY_TIME, i); MAX_MODULE_READY_TIME, i);
@ -304,9 +301,6 @@ static int omap4_cminst_wait_module_idle(u8 part, s16 inst, u16 clkctrl_offs,
{ {
int i = 0; int i = 0;
if (!clkctrl_offs)
return 0;
omap_test_timeout((_clkctrl_idlest(part, inst, clkctrl_offs) == omap_test_timeout((_clkctrl_idlest(part, inst, clkctrl_offs) ==
CLKCTRL_IDLEST_DISABLED), CLKCTRL_IDLEST_DISABLED),
MAX_MODULE_DISABLE_TIME, i); MAX_MODULE_DISABLE_TIME, i);


@ -1053,6 +1053,10 @@ static int _omap4_wait_target_disable(struct omap_hwmod *oh)
if (oh->flags & HWMOD_NO_IDLEST) if (oh->flags & HWMOD_NO_IDLEST)
return 0; return 0;
if (!oh->prcm.omap4.clkctrl_offs &&
!(oh->prcm.omap4.flags & HWMOD_OMAP4_ZERO_CLKCTRL_OFFSET))
return 0;
return omap_cm_wait_module_idle(oh->clkdm->prcm_partition, return omap_cm_wait_module_idle(oh->clkdm->prcm_partition,
oh->clkdm->cm_inst, oh->clkdm->cm_inst,
oh->prcm.omap4.clkctrl_offs, 0); oh->prcm.omap4.clkctrl_offs, 0);
@ -2971,6 +2975,10 @@ static int _omap4_wait_target_ready(struct omap_hwmod *oh)
if (!_find_mpu_rt_port(oh)) if (!_find_mpu_rt_port(oh))
return 0; return 0;
if (!oh->prcm.omap4.clkctrl_offs &&
!(oh->prcm.omap4.flags & HWMOD_OMAP4_ZERO_CLKCTRL_OFFSET))
return 0;
/* XXX check module SIDLEMODE, hardreset status */ /* XXX check module SIDLEMODE, hardreset status */
return omap_cm_wait_module_ready(oh->clkdm->prcm_partition, return omap_cm_wait_module_ready(oh->clkdm->prcm_partition,


@ -443,8 +443,12 @@ struct omap_hwmod_omap2_prcm {
* HWMOD_OMAP4_NO_CONTEXT_LOSS_BIT: Some IP blocks don't have a PRCM * HWMOD_OMAP4_NO_CONTEXT_LOSS_BIT: Some IP blocks don't have a PRCM
* module-level context loss register associated with them; this * module-level context loss register associated with them; this
* flag bit should be set in those cases * flag bit should be set in those cases
* HWMOD_OMAP4_ZERO_CLKCTRL_OFFSET: Some IP blocks have a valid CLKCTRL
* offset of zero; this flag bit should be set in those cases to
* distinguish from hwmods that have no clkctrl offset.
*/ */
#define HWMOD_OMAP4_NO_CONTEXT_LOSS_BIT (1 << 0) #define HWMOD_OMAP4_NO_CONTEXT_LOSS_BIT (1 << 0)
#define HWMOD_OMAP4_ZERO_CLKCTRL_OFFSET (1 << 1)
/** /**
* struct omap_hwmod_omap4_prcm - OMAP4-specific PRCM data * struct omap_hwmod_omap4_prcm - OMAP4-specific PRCM data


@ -29,6 +29,7 @@
#define CLKCTRL(oh, clkctrl) ((oh).prcm.omap4.clkctrl_offs = (clkctrl)) #define CLKCTRL(oh, clkctrl) ((oh).prcm.omap4.clkctrl_offs = (clkctrl))
#define RSTCTRL(oh, rstctrl) ((oh).prcm.omap4.rstctrl_offs = (rstctrl)) #define RSTCTRL(oh, rstctrl) ((oh).prcm.omap4.rstctrl_offs = (rstctrl))
#define RSTST(oh, rstst) ((oh).prcm.omap4.rstst_offs = (rstst)) #define RSTST(oh, rstst) ((oh).prcm.omap4.rstst_offs = (rstst))
#define PRCM_FLAGS(oh, flag) ((oh).prcm.omap4.flags = (flag))
/* /*
* 'l3' class * 'l3' class
@ -1296,6 +1297,7 @@ static void omap_hwmod_am33xx_clkctrl(void)
CLKCTRL(am33xx_i2c1_hwmod, AM33XX_CM_WKUP_I2C0_CLKCTRL_OFFSET); CLKCTRL(am33xx_i2c1_hwmod, AM33XX_CM_WKUP_I2C0_CLKCTRL_OFFSET);
CLKCTRL(am33xx_wd_timer1_hwmod, AM33XX_CM_WKUP_WDT1_CLKCTRL_OFFSET); CLKCTRL(am33xx_wd_timer1_hwmod, AM33XX_CM_WKUP_WDT1_CLKCTRL_OFFSET);
CLKCTRL(am33xx_rtc_hwmod, AM33XX_CM_RTC_RTC_CLKCTRL_OFFSET); CLKCTRL(am33xx_rtc_hwmod, AM33XX_CM_RTC_RTC_CLKCTRL_OFFSET);
PRCM_FLAGS(am33xx_rtc_hwmod, HWMOD_OMAP4_ZERO_CLKCTRL_OFFSET);
CLKCTRL(am33xx_mmc2_hwmod, AM33XX_CM_PER_MMC2_CLKCTRL_OFFSET); CLKCTRL(am33xx_mmc2_hwmod, AM33XX_CM_PER_MMC2_CLKCTRL_OFFSET);
CLKCTRL(am33xx_gpmc_hwmod, AM33XX_CM_PER_GPMC_CLKCTRL_OFFSET); CLKCTRL(am33xx_gpmc_hwmod, AM33XX_CM_PER_GPMC_CLKCTRL_OFFSET);
CLKCTRL(am33xx_l4_ls_hwmod, AM33XX_CM_PER_L4LS_CLKCTRL_OFFSET); CLKCTRL(am33xx_l4_ls_hwmod, AM33XX_CM_PER_L4LS_CLKCTRL_OFFSET);


@ -722,8 +722,20 @@ static struct omap_hwmod omap3xxx_dss_dispc_hwmod = {
* display serial interface controller * display serial interface controller
*/ */
static struct omap_hwmod_class_sysconfig omap3xxx_dsi_sysc = {
.rev_offs = 0x0000,
.sysc_offs = 0x0010,
.syss_offs = 0x0014,
.sysc_flags = (SYSC_HAS_AUTOIDLE | SYSC_HAS_CLOCKACTIVITY |
SYSC_HAS_ENAWAKEUP | SYSC_HAS_SIDLEMODE |
SYSC_HAS_SOFTRESET | SYSS_HAS_RESET_STATUS),
.idlemodes = (SIDLE_FORCE | SIDLE_NO | SIDLE_SMART),
.sysc_fields = &omap_hwmod_sysc_type1,
};
static struct omap_hwmod_class omap3xxx_dsi_hwmod_class = { static struct omap_hwmod_class omap3xxx_dsi_hwmod_class = {
.name = "dsi", .name = "dsi",
.sysc = &omap3xxx_dsi_sysc,
}; };
static struct omap_hwmod_irq_info omap3xxx_dsi1_irqs[] = { static struct omap_hwmod_irq_info omap3xxx_dsi1_irqs[] = {


@ -125,6 +125,8 @@ static unsigned long clk_36864_get_rate(struct clk *clk)
} }
static struct clkops clk_36864_ops = { static struct clkops clk_36864_ops = {
.enable = clk_cpu_enable,
.disable = clk_cpu_disable,
.get_rate = clk_36864_get_rate, .get_rate = clk_36864_get_rate,
}; };
@ -140,9 +142,8 @@ static struct clk_lookup sa11xx_clkregs[] = {
CLKDEV_INIT(NULL, "OSTIMER0", &clk_36864), CLKDEV_INIT(NULL, "OSTIMER0", &clk_36864),
}; };
static int __init sa11xx_clk_init(void) int __init sa11xx_clk_init(void)
{ {
clkdev_add_table(sa11xx_clkregs, ARRAY_SIZE(sa11xx_clkregs)); clkdev_add_table(sa11xx_clkregs, ARRAY_SIZE(sa11xx_clkregs));
return 0; return 0;
} }
core_initcall(sa11xx_clk_init);


@ -34,6 +34,7 @@
#include <mach/hardware.h> #include <mach/hardware.h>
#include <mach/irqs.h> #include <mach/irqs.h>
#include <mach/reset.h>
#include "generic.h" #include "generic.h"
#include <clocksource/pxa.h> #include <clocksource/pxa.h>
@ -95,6 +96,8 @@ static void sa1100_power_off(void)
void sa11x0_restart(enum reboot_mode mode, const char *cmd) void sa11x0_restart(enum reboot_mode mode, const char *cmd)
{ {
clear_reset_status(RESET_STATUS_ALL);
if (mode == REBOOT_SOFT) { if (mode == REBOOT_SOFT) {
/* Jump into ROM at address 0 */ /* Jump into ROM at address 0 */
soft_restart(0); soft_restart(0);
@ -388,6 +391,7 @@ void __init sa1100_init_irq(void)
sa11x0_init_irq_nodt(IRQ_GPIO0_SC, irq_resource.start); sa11x0_init_irq_nodt(IRQ_GPIO0_SC, irq_resource.start);
sa1100_init_gpio(); sa1100_init_gpio();
sa11xx_clk_init();
} }
/* /*


@ -44,3 +44,5 @@ int sa11x0_pm_init(void);
#else #else
static inline int sa11x0_pm_init(void) { return 0; } static inline int sa11x0_pm_init(void) { return 0; }
#endif #endif
int sa11xx_clk_init(void);


@ -16,6 +16,7 @@
#include <asm/hwcap.h> #include <asm/hwcap.h>
#include <asm/pgtable-hwdef.h> #include <asm/pgtable-hwdef.h>
#include <asm/pgtable.h> #include <asm/pgtable.h>
#include <asm/memory.h>
#include "proc-macros.S" #include "proc-macros.S"


@ -199,19 +199,19 @@ static inline unsigned long __percpu_xchg(void *ptr, unsigned long val,
#define _percpu_read(pcp) \ #define _percpu_read(pcp) \
({ \ ({ \
typeof(pcp) __retval; \ typeof(pcp) __retval; \
preempt_disable(); \ preempt_disable_notrace(); \
__retval = (typeof(pcp))__percpu_read(raw_cpu_ptr(&(pcp)), \ __retval = (typeof(pcp))__percpu_read(raw_cpu_ptr(&(pcp)), \
sizeof(pcp)); \ sizeof(pcp)); \
preempt_enable(); \ preempt_enable_notrace(); \
__retval; \ __retval; \
}) })
#define _percpu_write(pcp, val) \ #define _percpu_write(pcp, val) \
do { \ do { \
preempt_disable(); \ preempt_disable_notrace(); \
__percpu_write(raw_cpu_ptr(&(pcp)), (unsigned long)(val), \ __percpu_write(raw_cpu_ptr(&(pcp)), (unsigned long)(val), \
sizeof(pcp)); \ sizeof(pcp)); \
preempt_enable(); \ preempt_enable_notrace(); \
} while(0) \ } while(0) \
#define _pcp_protect(operation, pcp, val) \ #define _pcp_protect(operation, pcp, val) \


@ -363,4 +363,14 @@ static inline int arch_read_trylock(arch_rwlock_t *rw)
#define arch_read_relax(lock) cpu_relax() #define arch_read_relax(lock) cpu_relax()
#define arch_write_relax(lock) cpu_relax() #define arch_write_relax(lock) cpu_relax()
/*
* Accesses appearing in program order before a spin_lock() operation
* can be reordered with accesses inside the critical section, by virtue
* of arch_spin_lock being constructed using acquire semantics.
*
* In cases where this is problematic (e.g. try_to_wake_up), an
* smp_mb__before_spinlock() can restore the required ordering.
*/
#define smp_mb__before_spinlock() smp_mb()
#endif /* __ASM_SPINLOCK_H */ #endif /* __ASM_SPINLOCK_H */
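A minimal sketch of the pattern the new comment describes, loosely modelled
on try_to_wake_up() (the function and its body are invented; only the barrier
placement before the ACQUIRE is the point being illustrated):

#include <linux/sched.h>
#include <linux/spinlock.h>

static bool example_try_wake(struct task_struct *p, unsigned int state)
{
	unsigned long flags;
	bool woken = false;

	/* Order any preceding stores against the ACQUIRE of pi_lock below,
	 * so they cannot be reordered into the critical section. */
	smp_mb__before_spinlock();
	raw_spin_lock_irqsave(&p->pi_lock, flags);
	if (p->state & state) {
		/* ... the real code would queue the wakeup here ... */
		woken = true;
	}
	raw_spin_unlock_irqrestore(&p->pi_lock, flags);

	return woken;
}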


@ -241,8 +241,7 @@ extern unsigned long __must_check __copy_user (void __user *to, const void __use
static inline unsigned long static inline unsigned long
__copy_to_user (void __user *to, const void *from, unsigned long count) __copy_to_user (void __user *to, const void *from, unsigned long count)
{ {
if (!__builtin_constant_p(count)) check_object_size(from, count, true);
check_object_size(from, count, true);
return __copy_user(to, (__force void __user *) from, count); return __copy_user(to, (__force void __user *) from, count);
} }
@ -250,8 +249,7 @@ __copy_to_user (void __user *to, const void *from, unsigned long count)
static inline unsigned long static inline unsigned long
__copy_from_user (void *to, const void __user *from, unsigned long count) __copy_from_user (void *to, const void __user *from, unsigned long count)
{ {
if (!__builtin_constant_p(count)) check_object_size(to, count, false);
check_object_size(to, count, false);
return __copy_user((__force void __user *) to, from, count); return __copy_user((__force void __user *) to, from, count);
} }
@ -265,8 +263,7 @@ __copy_from_user (void *to, const void __user *from, unsigned long count)
long __cu_len = (n); \ long __cu_len = (n); \
\ \
if (__access_ok(__cu_to, __cu_len, get_fs())) { \ if (__access_ok(__cu_to, __cu_len, get_fs())) { \
if (!__builtin_constant_p(n)) \ check_object_size(__cu_from, __cu_len, true); \
check_object_size(__cu_from, __cu_len, true); \
__cu_len = __copy_user(__cu_to, (__force void __user *) __cu_from, __cu_len); \ __cu_len = __copy_user(__cu_to, (__force void __user *) __cu_from, __cu_len); \
} \ } \
__cu_len; \ __cu_len; \
@ -280,8 +277,7 @@ __copy_from_user (void *to, const void __user *from, unsigned long count)
\ \
__chk_user_ptr(__cu_from); \ __chk_user_ptr(__cu_from); \
if (__access_ok(__cu_from, __cu_len, get_fs())) { \ if (__access_ok(__cu_from, __cu_len, get_fs())) { \
if (!__builtin_constant_p(n)) \ check_object_size(__cu_to, __cu_len, false); \
check_object_size(__cu_to, __cu_len, false); \
__cu_len = __copy_user((__force void __user *) __cu_to, __cu_from, __cu_len); \ __cu_len = __copy_user((__force void __user *) __cu_to, __cu_from, __cu_len); \
} \ } \
__cu_len; \ __cu_len; \


@ -311,14 +311,12 @@ static inline unsigned long copy_from_user(void *to,
unsigned long over; unsigned long over;
if (access_ok(VERIFY_READ, from, n)) { if (access_ok(VERIFY_READ, from, n)) {
if (!__builtin_constant_p(n)) check_object_size(to, n, false);
check_object_size(to, n, false);
return __copy_tofrom_user((__force void __user *)to, from, n); return __copy_tofrom_user((__force void __user *)to, from, n);
} }
if ((unsigned long)from < TASK_SIZE) { if ((unsigned long)from < TASK_SIZE) {
over = (unsigned long)from + n - TASK_SIZE; over = (unsigned long)from + n - TASK_SIZE;
if (!__builtin_constant_p(n - over)) check_object_size(to, n - over, false);
check_object_size(to, n - over, false);
return __copy_tofrom_user((__force void __user *)to, from, return __copy_tofrom_user((__force void __user *)to, from,
n - over) + over; n - over) + over;
} }
@ -331,14 +329,12 @@ static inline unsigned long copy_to_user(void __user *to,
unsigned long over; unsigned long over;
if (access_ok(VERIFY_WRITE, to, n)) { if (access_ok(VERIFY_WRITE, to, n)) {
if (!__builtin_constant_p(n)) check_object_size(from, n, true);
check_object_size(from, n, true);
return __copy_tofrom_user(to, (__force void __user *)from, n); return __copy_tofrom_user(to, (__force void __user *)from, n);
} }
if ((unsigned long)to < TASK_SIZE) { if ((unsigned long)to < TASK_SIZE) {
over = (unsigned long)to + n - TASK_SIZE; over = (unsigned long)to + n - TASK_SIZE;
if (!__builtin_constant_p(n)) check_object_size(from, n - over, true);
check_object_size(from, n - over, true);
return __copy_tofrom_user(to, (__force void __user *)from, return __copy_tofrom_user(to, (__force void __user *)from,
n - over) + over; n - over) + over;
} }
@ -383,8 +379,7 @@ static inline unsigned long __copy_from_user_inatomic(void *to,
return 0; return 0;
} }
if (!__builtin_constant_p(n)) check_object_size(to, n, false);
check_object_size(to, n, false);
return __copy_tofrom_user((__force void __user *)to, from, n); return __copy_tofrom_user((__force void __user *)to, from, n);
} }
@ -412,8 +407,8 @@ static inline unsigned long __copy_to_user_inatomic(void __user *to,
if (ret == 0) if (ret == 0)
return 0; return 0;
} }
if (!__builtin_constant_p(n))
check_object_size(from, n, true); check_object_size(from, n, true);
return __copy_tofrom_user(to, (__force const void __user *)from, n); return __copy_tofrom_user(to, (__force const void __user *)from, n);
} }


@ -127,18 +127,19 @@ _GLOBAL(csum_partial_copy_generic)
stw r7,12(r1) stw r7,12(r1)
stw r8,8(r1) stw r8,8(r1)
rlwinm r0,r4,3,0x8
rlwnm r6,r6,r0,0,31 /* odd destination address: rotate one byte */
cmplwi cr7,r0,0 /* is destination address even ? */
addic r12,r6,0 addic r12,r6,0
addi r6,r4,-4 addi r6,r4,-4
neg r0,r4 neg r0,r4
addi r4,r3,-4 addi r4,r3,-4
andi. r0,r0,CACHELINE_MASK /* # bytes to start of cache line */ andi. r0,r0,CACHELINE_MASK /* # bytes to start of cache line */
crset 4*cr7+eq
beq 58f beq 58f
cmplw 0,r5,r0 /* is this more than total to do? */ cmplw 0,r5,r0 /* is this more than total to do? */
blt 63f /* if not much to do */ blt 63f /* if not much to do */
rlwinm r7,r6,3,0x8
rlwnm r12,r12,r7,0,31 /* odd destination address: rotate one byte */
cmplwi cr7,r7,0 /* is destination address even ? */
andi. r8,r0,3 /* get it word-aligned first */ andi. r8,r0,3 /* get it word-aligned first */
mtctr r8 mtctr r8
beq+ 61f beq+ 61f


@ -113,7 +113,12 @@ BEGIN_FTR_SECTION
END_MMU_FTR_SECTION_IFCLR(MMU_FTR_1T_SEGMENT) END_MMU_FTR_SECTION_IFCLR(MMU_FTR_1T_SEGMENT)
b slb_finish_load_1T b slb_finish_load_1T
0: 0: /*
* For userspace addresses, make sure this is region 0.
*/
cmpdi r9, 0
bne 8f
/* when using slices, we extract the psize off the slice bitmaps /* when using slices, we extract the psize off the slice bitmaps
* and then we need to get the sllp encoding off the mmu_psize_defs * and then we need to get the sllp encoding off the mmu_psize_defs
* array. * array.


@ -162,11 +162,12 @@ static struct pnv_ioda_pe *pnv_ioda_alloc_pe(struct pnv_phb *phb)
static void pnv_ioda_free_pe(struct pnv_ioda_pe *pe) static void pnv_ioda_free_pe(struct pnv_ioda_pe *pe)
{ {
struct pnv_phb *phb = pe->phb; struct pnv_phb *phb = pe->phb;
unsigned int pe_num = pe->pe_number;
WARN_ON(pe->pdev); WARN_ON(pe->pdev);
memset(pe, 0, sizeof(struct pnv_ioda_pe)); memset(pe, 0, sizeof(struct pnv_ioda_pe));
clear_bit(pe->pe_number, phb->ioda.pe_alloc); clear_bit(pe_num, phb->ioda.pe_alloc);
} }
/* The default M64 BAR is shared by all PEs */ /* The default M64 BAR is shared by all PEs */
@ -3402,12 +3403,6 @@ static void pnv_ioda_release_pe(struct pnv_ioda_pe *pe)
struct pnv_phb *phb = pe->phb; struct pnv_phb *phb = pe->phb;
struct pnv_ioda_pe *slave, *tmp; struct pnv_ioda_pe *slave, *tmp;
/* Release slave PEs in compound PE */
if (pe->flags & PNV_IODA_PE_MASTER) {
list_for_each_entry_safe(slave, tmp, &pe->slaves, list)
pnv_ioda_release_pe(slave);
}
list_del(&pe->list); list_del(&pe->list);
switch (phb->type) { switch (phb->type) {
case PNV_PHB_IODA1: case PNV_PHB_IODA1:
@ -3422,6 +3417,15 @@ static void pnv_ioda_release_pe(struct pnv_ioda_pe *pe)
pnv_ioda_release_pe_seg(pe); pnv_ioda_release_pe_seg(pe);
pnv_ioda_deconfigure_pe(pe->phb, pe); pnv_ioda_deconfigure_pe(pe->phb, pe);
/* Release slave PEs in the compound PE */
if (pe->flags & PNV_IODA_PE_MASTER) {
list_for_each_entry_safe(slave, tmp, &pe->slaves, list) {
list_del(&slave->list);
pnv_ioda_free_pe(slave);
}
}
pnv_ioda_free_pe(pe); pnv_ioda_free_pe(pe);
} }


@ -41,7 +41,6 @@
#include <linux/root_dev.h> #include <linux/root_dev.h>
#include <linux/of.h> #include <linux/of.h>
#include <linux/of_pci.h> #include <linux/of_pci.h>
#include <linux/kexec.h>
#include <asm/mmu.h> #include <asm/mmu.h>
#include <asm/processor.h> #include <asm/processor.h>
@ -66,6 +65,7 @@
#include <asm/eeh.h> #include <asm/eeh.h>
#include <asm/reg.h> #include <asm/reg.h>
#include <asm/plpar_wrappers.h> #include <asm/plpar_wrappers.h>
#include <asm/kexec.h>
#include "pseries.h" #include "pseries.h"


@ -23,10 +23,10 @@
static void icp_opal_teardown_cpu(void) static void icp_opal_teardown_cpu(void)
{ {
int cpu = smp_processor_id(); int hw_cpu = hard_smp_processor_id();
/* Clear any pending IPI */ /* Clear any pending IPI */
opal_int_set_mfrr(cpu, 0xff); opal_int_set_mfrr(hw_cpu, 0xff);
} }
static void icp_opal_flush_ipi(void) static void icp_opal_flush_ipi(void)
@ -101,14 +101,16 @@ static void icp_opal_eoi(struct irq_data *d)
static void icp_opal_cause_ipi(int cpu, unsigned long data) static void icp_opal_cause_ipi(int cpu, unsigned long data)
{ {
opal_int_set_mfrr(cpu, IPI_PRIORITY); int hw_cpu = get_hard_smp_processor_id(cpu);
opal_int_set_mfrr(hw_cpu, IPI_PRIORITY);
} }
static irqreturn_t icp_opal_ipi_action(int irq, void *dev_id) static irqreturn_t icp_opal_ipi_action(int irq, void *dev_id)
{ {
int cpu = smp_processor_id(); int hw_cpu = hard_smp_processor_id();
opal_int_set_mfrr(cpu, 0xff); opal_int_set_mfrr(hw_cpu, 0xff);
return smp_ipi_demux(); return smp_ipi_demux();
} }


@ -249,8 +249,7 @@ unsigned long __copy_user(void __user *to, const void __user *from, unsigned lon
static inline unsigned long copy_to_user(void __user *to, const void *from, unsigned long n) static inline unsigned long copy_to_user(void __user *to, const void *from, unsigned long n)
{ {
if (n && __access_ok((unsigned long) to, n)) { if (n && __access_ok((unsigned long) to, n)) {
if (!__builtin_constant_p(n)) check_object_size(from, n, true);
check_object_size(from, n, true);
return __copy_user(to, (__force void __user *) from, n); return __copy_user(to, (__force void __user *) from, n);
} else } else
return n; return n;
@ -258,16 +257,14 @@ static inline unsigned long copy_to_user(void __user *to, const void *from, unsi
static inline unsigned long __copy_to_user(void __user *to, const void *from, unsigned long n) static inline unsigned long __copy_to_user(void __user *to, const void *from, unsigned long n)
{ {
if (!__builtin_constant_p(n)) check_object_size(from, n, true);
check_object_size(from, n, true);
return __copy_user(to, (__force void __user *) from, n); return __copy_user(to, (__force void __user *) from, n);
} }
static inline unsigned long copy_from_user(void *to, const void __user *from, unsigned long n) static inline unsigned long copy_from_user(void *to, const void __user *from, unsigned long n)
{ {
if (n && __access_ok((unsigned long) from, n)) { if (n && __access_ok((unsigned long) from, n)) {
if (!__builtin_constant_p(n)) check_object_size(to, n, false);
check_object_size(to, n, false);
return __copy_user((__force void __user *) to, from, n); return __copy_user((__force void __user *) to, from, n);
} else } else
return n; return n;


@ -212,8 +212,7 @@ copy_from_user(void *to, const void __user *from, unsigned long size)
{ {
unsigned long ret; unsigned long ret;
if (!__builtin_constant_p(size)) check_object_size(to, size, false);
check_object_size(to, size, false);
ret = ___copy_from_user(to, from, size); ret = ___copy_from_user(to, from, size);
if (unlikely(ret)) if (unlikely(ret))
@ -233,8 +232,8 @@ copy_to_user(void __user *to, const void *from, unsigned long size)
{ {
unsigned long ret; unsigned long ret;
if (!__builtin_constant_p(size)) check_object_size(from, size, true);
check_object_size(from, size, true);
ret = ___copy_to_user(to, from, size); ret = ___copy_to_user(to, from, size);
if (unlikely(ret)) if (unlikely(ret))
ret = copy_to_user_fixup(to, from, size); ret = copy_to_user_fixup(to, from, size);


@ -21,21 +21,17 @@ void handle_syscall(struct uml_pt_regs *r)
PT_REGS_SET_SYSCALL_RETURN(regs, -ENOSYS); PT_REGS_SET_SYSCALL_RETURN(regs, -ENOSYS);
if (syscall_trace_enter(regs)) if (syscall_trace_enter(regs))
return; goto out;
/* Do the seccomp check after ptrace; failures should be fast. */ /* Do the seccomp check after ptrace; failures should be fast. */
if (secure_computing(NULL) == -1) if (secure_computing(NULL) == -1)
return; goto out;
/* Update the syscall number after orig_ax has potentially been updated
* with ptrace.
*/
UPT_SYSCALL_NR(r) = PT_SYSCALL_NR(r->gp);
syscall = UPT_SYSCALL_NR(r); syscall = UPT_SYSCALL_NR(r);
if (syscall >= 0 && syscall <= __NR_syscall_max) if (syscall >= 0 && syscall <= __NR_syscall_max)
PT_REGS_SET_SYSCALL_RETURN(regs, PT_REGS_SET_SYSCALL_RETURN(regs,
EXECUTE_SYSCALL(syscall, regs)); EXECUTE_SYSCALL(syscall, regs));
out:
syscall_trace_leave(regs); syscall_trace_leave(regs);
} }


@ -705,7 +705,7 @@ static inline void copy_user_overflow(int size, unsigned long count)
WARN(1, "Buffer overflow detected (%d < %lu)!\n", size, count); WARN(1, "Buffer overflow detected (%d < %lu)!\n", size, count);
} }
static inline unsigned long __must_check static __always_inline unsigned long __must_check
copy_from_user(void *to, const void __user *from, unsigned long n) copy_from_user(void *to, const void __user *from, unsigned long n)
{ {
int sz = __compiletime_object_size(to); int sz = __compiletime_object_size(to);
@ -725,7 +725,7 @@ copy_from_user(void *to, const void __user *from, unsigned long n)
return n; return n;
} }
static inline unsigned long __must_check static __always_inline unsigned long __must_check
copy_to_user(void __user *to, const void *from, unsigned long n) copy_to_user(void __user *to, const void *from, unsigned long n)
{ {
int sz = __compiletime_object_size(from); int sz = __compiletime_object_size(from);


@ -927,9 +927,10 @@ int track_pfn_copy(struct vm_area_struct *vma)
} }
/* /*
* prot is passed in as a parameter for the new mapping. If the vma has a * prot is passed in as a parameter for the new mapping. If the vma has
* linear pfn mapping for the entire range reserve the entire vma range with * a linear pfn mapping for the entire range, or no vma is provided,
* single reserve_pfn_range call. * reserve the entire pfn + size range with single reserve_pfn_range
* call.
*/ */
int track_pfn_remap(struct vm_area_struct *vma, pgprot_t *prot, int track_pfn_remap(struct vm_area_struct *vma, pgprot_t *prot,
unsigned long pfn, unsigned long addr, unsigned long size) unsigned long pfn, unsigned long addr, unsigned long size)
@ -938,11 +939,12 @@ int track_pfn_remap(struct vm_area_struct *vma, pgprot_t *prot,
enum page_cache_mode pcm; enum page_cache_mode pcm;
/* reserve the whole chunk starting from paddr */ /* reserve the whole chunk starting from paddr */
if (addr == vma->vm_start && size == (vma->vm_end - vma->vm_start)) { if (!vma || (addr == vma->vm_start
&& size == (vma->vm_end - vma->vm_start))) {
int ret; int ret;
ret = reserve_pfn_range(paddr, size, prot, 0); ret = reserve_pfn_range(paddr, size, prot, 0);
if (!ret) if (ret == 0 && vma)
vma->vm_flags |= VM_PAT; vma->vm_flags |= VM_PAT;
return ret; return ret;
} }
@ -997,7 +999,7 @@ void untrack_pfn(struct vm_area_struct *vma, unsigned long pfn,
resource_size_t paddr; resource_size_t paddr;
unsigned long prot; unsigned long prot;
if (!(vma->vm_flags & VM_PAT)) if (vma && !(vma->vm_flags & VM_PAT))
return; return;
/* free the chunk starting from pfn or the whole chunk */ /* free the chunk starting from pfn or the whole chunk */
@ -1011,7 +1013,8 @@ void untrack_pfn(struct vm_area_struct *vma, unsigned long pfn,
size = vma->vm_end - vma->vm_start; size = vma->vm_end - vma->vm_start;
} }
free_pfn_range(paddr, size); free_pfn_range(paddr, size);
vma->vm_flags &= ~VM_PAT; if (vma)
vma->vm_flags &= ~VM_PAT;
} }
/* /*


@ -84,7 +84,10 @@ int putreg(struct task_struct *child, int regno, unsigned long value)
case EAX: case EAX:
case EIP: case EIP:
case UESP: case UESP:
break;
case ORIG_EAX: case ORIG_EAX:
/* Update the syscall number. */
UPT_SYSCALL_NR(&child->thread.regs.regs) = value;
break; break;
case FS: case FS:
if (value && (value & 3) != 3) if (value && (value & 3) != 3)


@ -78,7 +78,11 @@ int putreg(struct task_struct *child, int regno, unsigned long value)
case RSI: case RSI:
case RDI: case RDI:
case RBP: case RBP:
break;
case ORIG_RAX: case ORIG_RAX:
/* Update the syscall number. */
UPT_SYSCALL_NR(&child->thread.regs.regs) = value;
break; break;
case FS: case FS:


@ -733,13 +733,14 @@ static void cryptd_aead_crypt(struct aead_request *req,
rctx = aead_request_ctx(req); rctx = aead_request_ctx(req);
compl = rctx->complete; compl = rctx->complete;
tfm = crypto_aead_reqtfm(req);
if (unlikely(err == -EINPROGRESS)) if (unlikely(err == -EINPROGRESS))
goto out; goto out;
aead_request_set_tfm(req, child); aead_request_set_tfm(req, child);
err = crypt( req ); err = crypt( req );
out: out:
tfm = crypto_aead_reqtfm(req);
ctx = crypto_aead_ctx(tfm); ctx = crypto_aead_ctx(tfm);
refcnt = atomic_read(&ctx->refcnt); refcnt = atomic_read(&ctx->refcnt);


@ -42,7 +42,7 @@ static int nfit_handle_mce(struct notifier_block *nb, unsigned long val,
list_for_each_entry(nfit_spa, &acpi_desc->spas, list) { list_for_each_entry(nfit_spa, &acpi_desc->spas, list) {
struct acpi_nfit_system_address *spa = nfit_spa->spa; struct acpi_nfit_system_address *spa = nfit_spa->spa;
if (nfit_spa_type(spa) == NFIT_SPA_PM) if (nfit_spa_type(spa) != NFIT_SPA_PM)
continue; continue;
/* find the spa that covers the mce addr */ /* find the spa that covers the mce addr */
if (spa->address > mce->addr) if (spa->address > mce->addr)


@ -404,6 +404,7 @@ static int regcache_rbtree_write(struct regmap *map, unsigned int reg,
unsigned int new_base_reg, new_top_reg; unsigned int new_base_reg, new_top_reg;
unsigned int min, max; unsigned int min, max;
unsigned int max_dist; unsigned int max_dist;
unsigned int dist, best_dist = UINT_MAX;
max_dist = map->reg_stride * sizeof(*rbnode_tmp) / max_dist = map->reg_stride * sizeof(*rbnode_tmp) /
map->cache_word_size; map->cache_word_size;
@ -423,24 +424,41 @@ static int regcache_rbtree_write(struct regmap *map, unsigned int reg,
&base_reg, &top_reg); &base_reg, &top_reg);
if (base_reg <= max && top_reg >= min) { if (base_reg <= max && top_reg >= min) {
new_base_reg = min(reg, base_reg); if (reg < base_reg)
new_top_reg = max(reg, top_reg); dist = base_reg - reg;
} else { else if (reg > top_reg)
if (max < base_reg) dist = reg - top_reg;
node = node->rb_left;
else else
node = node->rb_right; dist = 0;
if (dist < best_dist) {
continue; rbnode = rbnode_tmp;
best_dist = dist;
new_base_reg = min(reg, base_reg);
new_top_reg = max(reg, top_reg);
}
} }
ret = regcache_rbtree_insert_to_block(map, rbnode_tmp, /*
* Keep looking, we want to choose the closest block,
* otherwise we might end up creating overlapping
* blocks, which breaks the rbtree.
*/
if (reg < base_reg)
node = node->rb_left;
else if (reg > top_reg)
node = node->rb_right;
else
break;
}
if (rbnode) {
ret = regcache_rbtree_insert_to_block(map, rbnode,
new_base_reg, new_base_reg,
new_top_reg, reg, new_top_reg, reg,
value); value);
if (ret) if (ret)
return ret; return ret;
rbtree_ctx->cached_rbnode = rbnode_tmp; rbtree_ctx->cached_rbnode = rbnode;
return 0; return 0;
} }


@ -38,10 +38,11 @@ static int regcache_hw_init(struct regmap *map)
/* calculate the size of reg_defaults */ /* calculate the size of reg_defaults */
for (count = 0, i = 0; i < map->num_reg_defaults_raw; i++) for (count = 0, i = 0; i < map->num_reg_defaults_raw; i++)
if (!regmap_volatile(map, i * map->reg_stride)) if (regmap_readable(map, i * map->reg_stride) &&
!regmap_volatile(map, i * map->reg_stride))
count++; count++;
/* all registers are volatile, so just bypass */ /* all registers are unreadable or volatile, so just bypass */
if (!count) { if (!count) {
map->cache_bypass = true; map->cache_bypass = true;
return 0; return 0;


@ -1474,6 +1474,8 @@ int _regmap_raw_write(struct regmap *map, unsigned int reg,
ret = map->bus->write(map->bus_context, buf, len); ret = map->bus->write(map->bus_context, buf, len);
kfree(buf); kfree(buf);
} else if (ret != 0 && !map->cache_bypass && map->format.parse_val) {
regcache_drop_region(map, reg, reg + 1);
} }
trace_regmap_hw_write_done(map, reg, val_len / map->format.val_bytes); trace_regmap_hw_write_done(map, reg, val_len / map->format.val_bytes);


@ -551,7 +551,7 @@ static struct attribute *cci5xx_pmu_event_attrs[] = {
CCI5xx_GLOBAL_EVENT_EXT_ATTR_ENTRY(cci_wrq, 0xB), CCI5xx_GLOBAL_EVENT_EXT_ATTR_ENTRY(cci_wrq, 0xB),
CCI5xx_GLOBAL_EVENT_EXT_ATTR_ENTRY(cci_snoop_cd_hs, 0xC), CCI5xx_GLOBAL_EVENT_EXT_ATTR_ENTRY(cci_snoop_cd_hs, 0xC),
CCI5xx_GLOBAL_EVENT_EXT_ATTR_ENTRY(cci_rq_stall_addr_hazard, 0xD), CCI5xx_GLOBAL_EVENT_EXT_ATTR_ENTRY(cci_rq_stall_addr_hazard, 0xD),
CCI5xx_GLOBAL_EVENT_EXT_ATTR_ENTRY(cci_snopp_rq_stall_tt_full, 0xE), CCI5xx_GLOBAL_EVENT_EXT_ATTR_ENTRY(cci_snoop_rq_stall_tt_full, 0xE),
CCI5xx_GLOBAL_EVENT_EXT_ATTR_ENTRY(cci_snoop_rq_tzmp1_prot, 0xF), CCI5xx_GLOBAL_EVENT_EXT_ATTR_ENTRY(cci_snoop_rq_tzmp1_prot, 0xF),
NULL NULL
}; };


@ -187,6 +187,7 @@ struct arm_ccn {
struct arm_ccn_component *xp; struct arm_ccn_component *xp;
struct arm_ccn_dt dt; struct arm_ccn_dt dt;
int mn_id;
}; };
static DEFINE_MUTEX(arm_ccn_mutex); static DEFINE_MUTEX(arm_ccn_mutex);
@ -212,6 +213,7 @@ static int arm_ccn_node_to_xp_port(int node)
#define CCN_CONFIG_TYPE(_config) (((_config) >> 8) & 0xff) #define CCN_CONFIG_TYPE(_config) (((_config) >> 8) & 0xff)
#define CCN_CONFIG_EVENT(_config) (((_config) >> 16) & 0xff) #define CCN_CONFIG_EVENT(_config) (((_config) >> 16) & 0xff)
#define CCN_CONFIG_PORT(_config) (((_config) >> 24) & 0x3) #define CCN_CONFIG_PORT(_config) (((_config) >> 24) & 0x3)
#define CCN_CONFIG_BUS(_config) (((_config) >> 24) & 0x3)
#define CCN_CONFIG_VC(_config) (((_config) >> 26) & 0x7) #define CCN_CONFIG_VC(_config) (((_config) >> 26) & 0x7)
#define CCN_CONFIG_DIR(_config) (((_config) >> 29) & 0x1) #define CCN_CONFIG_DIR(_config) (((_config) >> 29) & 0x1)
#define CCN_CONFIG_MASK(_config) (((_config) >> 30) & 0xf) #define CCN_CONFIG_MASK(_config) (((_config) >> 30) & 0xf)
@ -241,6 +243,7 @@ static CCN_FORMAT_ATTR(xp, "config:0-7");
static CCN_FORMAT_ATTR(type, "config:8-15"); static CCN_FORMAT_ATTR(type, "config:8-15");
static CCN_FORMAT_ATTR(event, "config:16-23"); static CCN_FORMAT_ATTR(event, "config:16-23");
static CCN_FORMAT_ATTR(port, "config:24-25"); static CCN_FORMAT_ATTR(port, "config:24-25");
static CCN_FORMAT_ATTR(bus, "config:24-25");
static CCN_FORMAT_ATTR(vc, "config:26-28"); static CCN_FORMAT_ATTR(vc, "config:26-28");
static CCN_FORMAT_ATTR(dir, "config:29-29"); static CCN_FORMAT_ATTR(dir, "config:29-29");
static CCN_FORMAT_ATTR(mask, "config:30-33"); static CCN_FORMAT_ATTR(mask, "config:30-33");
@ -253,6 +256,7 @@ static struct attribute *arm_ccn_pmu_format_attrs[] = {
&arm_ccn_pmu_format_attr_type.attr.attr, &arm_ccn_pmu_format_attr_type.attr.attr,
&arm_ccn_pmu_format_attr_event.attr.attr, &arm_ccn_pmu_format_attr_event.attr.attr,
&arm_ccn_pmu_format_attr_port.attr.attr, &arm_ccn_pmu_format_attr_port.attr.attr,
&arm_ccn_pmu_format_attr_bus.attr.attr,
&arm_ccn_pmu_format_attr_vc.attr.attr, &arm_ccn_pmu_format_attr_vc.attr.attr,
&arm_ccn_pmu_format_attr_dir.attr.attr, &arm_ccn_pmu_format_attr_dir.attr.attr,
&arm_ccn_pmu_format_attr_mask.attr.attr, &arm_ccn_pmu_format_attr_mask.attr.attr,
@ -328,6 +332,7 @@ struct arm_ccn_pmu_event {
static ssize_t arm_ccn_pmu_event_show(struct device *dev, static ssize_t arm_ccn_pmu_event_show(struct device *dev,
struct device_attribute *attr, char *buf) struct device_attribute *attr, char *buf)
{ {
struct arm_ccn *ccn = pmu_to_arm_ccn(dev_get_drvdata(dev));
struct arm_ccn_pmu_event *event = container_of(attr, struct arm_ccn_pmu_event *event = container_of(attr,
struct arm_ccn_pmu_event, attr); struct arm_ccn_pmu_event, attr);
ssize_t res; ssize_t res;
@ -349,10 +354,17 @@ static ssize_t arm_ccn_pmu_event_show(struct device *dev,
break; break;
case CCN_TYPE_XP: case CCN_TYPE_XP:
res += snprintf(buf + res, PAGE_SIZE - res, res += snprintf(buf + res, PAGE_SIZE - res,
",xp=?,port=?,vc=?,dir=?"); ",xp=?,vc=?");
if (event->event == CCN_EVENT_WATCHPOINT) if (event->event == CCN_EVENT_WATCHPOINT)
res += snprintf(buf + res, PAGE_SIZE - res, res += snprintf(buf + res, PAGE_SIZE - res,
",cmp_l=?,cmp_h=?,mask=?"); ",port=?,dir=?,cmp_l=?,cmp_h=?,mask=?");
else
res += snprintf(buf + res, PAGE_SIZE - res,
",bus=?");
break;
case CCN_TYPE_MN:
res += snprintf(buf + res, PAGE_SIZE - res, ",node=%d", ccn->mn_id);
break; break;
default: default:
res += snprintf(buf + res, PAGE_SIZE - res, ",node=?"); res += snprintf(buf + res, PAGE_SIZE - res, ",node=?");
@ -383,9 +395,9 @@ static umode_t arm_ccn_pmu_events_is_visible(struct kobject *kobj,
} }
static struct arm_ccn_pmu_event arm_ccn_pmu_events[] = { static struct arm_ccn_pmu_event arm_ccn_pmu_events[] = {
CCN_EVENT_MN(eobarrier, "dir=0,vc=0,cmp_h=0x1c00", CCN_IDX_MASK_OPCODE), CCN_EVENT_MN(eobarrier, "dir=1,vc=0,cmp_h=0x1c00", CCN_IDX_MASK_OPCODE),
CCN_EVENT_MN(ecbarrier, "dir=0,vc=0,cmp_h=0x1e00", CCN_IDX_MASK_OPCODE), CCN_EVENT_MN(ecbarrier, "dir=1,vc=0,cmp_h=0x1e00", CCN_IDX_MASK_OPCODE),
CCN_EVENT_MN(dvmop, "dir=0,vc=0,cmp_h=0x2800", CCN_IDX_MASK_OPCODE), CCN_EVENT_MN(dvmop, "dir=1,vc=0,cmp_h=0x2800", CCN_IDX_MASK_OPCODE),
CCN_EVENT_HNI(txdatflits, "dir=1,vc=3", CCN_IDX_MASK_ANY), CCN_EVENT_HNI(txdatflits, "dir=1,vc=3", CCN_IDX_MASK_ANY),
CCN_EVENT_HNI(rxdatflits, "dir=0,vc=3", CCN_IDX_MASK_ANY), CCN_EVENT_HNI(rxdatflits, "dir=0,vc=3", CCN_IDX_MASK_ANY),
CCN_EVENT_HNI(txreqflits, "dir=1,vc=0", CCN_IDX_MASK_ANY), CCN_EVENT_HNI(txreqflits, "dir=1,vc=0", CCN_IDX_MASK_ANY),
@ -733,9 +745,10 @@ static int arm_ccn_pmu_event_init(struct perf_event *event)
if (has_branch_stack(event) || event->attr.exclude_user || if (has_branch_stack(event) || event->attr.exclude_user ||
event->attr.exclude_kernel || event->attr.exclude_hv || event->attr.exclude_kernel || event->attr.exclude_hv ||
event->attr.exclude_idle) { event->attr.exclude_idle || event->attr.exclude_host ||
event->attr.exclude_guest) {
dev_warn(ccn->dev, "Can't exclude execution levels!\n"); dev_warn(ccn->dev, "Can't exclude execution levels!\n");
return -EOPNOTSUPP; return -EINVAL;
} }
if (event->cpu < 0) { if (event->cpu < 0) {
@ -759,6 +772,12 @@ static int arm_ccn_pmu_event_init(struct perf_event *event)
/* Validate node/xp vs topology */ /* Validate node/xp vs topology */
switch (type) { switch (type) {
case CCN_TYPE_MN:
if (node_xp != ccn->mn_id) {
dev_warn(ccn->dev, "Invalid MN ID %d!\n", node_xp);
return -EINVAL;
}
break;
case CCN_TYPE_XP: case CCN_TYPE_XP:
if (node_xp >= ccn->num_xps) { if (node_xp >= ccn->num_xps) {
dev_warn(ccn->dev, "Invalid XP ID %d!\n", node_xp); dev_warn(ccn->dev, "Invalid XP ID %d!\n", node_xp);
@ -886,6 +905,10 @@ static void arm_ccn_pmu_xp_dt_config(struct perf_event *event, int enable)
struct arm_ccn_component *xp; struct arm_ccn_component *xp;
u32 val, dt_cfg; u32 val, dt_cfg;
/* Nothing to do for cycle counter */
if (hw->idx == CCN_IDX_PMU_CYCLE_COUNTER)
return;
if (CCN_CONFIG_TYPE(event->attr.config) == CCN_TYPE_XP) if (CCN_CONFIG_TYPE(event->attr.config) == CCN_TYPE_XP)
xp = &ccn->xp[CCN_CONFIG_XP(event->attr.config)]; xp = &ccn->xp[CCN_CONFIG_XP(event->attr.config)];
else else
@ -917,38 +940,17 @@ static void arm_ccn_pmu_event_start(struct perf_event *event, int flags)
arm_ccn_pmu_read_counter(ccn, hw->idx)); arm_ccn_pmu_read_counter(ccn, hw->idx));
hw->state = 0; hw->state = 0;
/*
* Pin the timer, so that the overflows are handled by the chosen
* event->cpu (this is the same one as presented in "cpumask"
* attribute).
*/
if (!ccn->irq)
hrtimer_start(&ccn->dt.hrtimer, arm_ccn_pmu_timer_period(),
HRTIMER_MODE_REL_PINNED);
/* Set the DT bus input, engaging the counter */ /* Set the DT bus input, engaging the counter */
arm_ccn_pmu_xp_dt_config(event, 1); arm_ccn_pmu_xp_dt_config(event, 1);
} }
static void arm_ccn_pmu_event_stop(struct perf_event *event, int flags) static void arm_ccn_pmu_event_stop(struct perf_event *event, int flags)
{ {
struct arm_ccn *ccn = pmu_to_arm_ccn(event->pmu);
struct hw_perf_event *hw = &event->hw; struct hw_perf_event *hw = &event->hw;
u64 timeout;
/* Disable counting, setting the DT bus to pass-through mode */ /* Disable counting, setting the DT bus to pass-through mode */
arm_ccn_pmu_xp_dt_config(event, 0); arm_ccn_pmu_xp_dt_config(event, 0);
if (!ccn->irq)
hrtimer_cancel(&ccn->dt.hrtimer);
/* Let the DT bus drain */
timeout = arm_ccn_pmu_read_counter(ccn, CCN_IDX_PMU_CYCLE_COUNTER) +
ccn->num_xps;
while (arm_ccn_pmu_read_counter(ccn, CCN_IDX_PMU_CYCLE_COUNTER) <
timeout)
cpu_relax();
if (flags & PERF_EF_UPDATE) if (flags & PERF_EF_UPDATE)
arm_ccn_pmu_event_update(event); arm_ccn_pmu_event_update(event);
@ -988,7 +990,7 @@ static void arm_ccn_pmu_xp_watchpoint_config(struct perf_event *event)
/* Comparison values */ /* Comparison values */
writel(cmp_l & 0xffffffff, source->base + CCN_XP_DT_CMP_VAL_L(wp)); writel(cmp_l & 0xffffffff, source->base + CCN_XP_DT_CMP_VAL_L(wp));
writel((cmp_l >> 32) & 0xefffffff, writel((cmp_l >> 32) & 0x7fffffff,
source->base + CCN_XP_DT_CMP_VAL_L(wp) + 4); source->base + CCN_XP_DT_CMP_VAL_L(wp) + 4);
writel(cmp_h & 0xffffffff, source->base + CCN_XP_DT_CMP_VAL_H(wp)); writel(cmp_h & 0xffffffff, source->base + CCN_XP_DT_CMP_VAL_H(wp));
writel((cmp_h >> 32) & 0x0fffffff, writel((cmp_h >> 32) & 0x0fffffff,
@ -996,7 +998,7 @@ static void arm_ccn_pmu_xp_watchpoint_config(struct perf_event *event)
/* Mask */ /* Mask */
writel(mask_l & 0xffffffff, source->base + CCN_XP_DT_CMP_MASK_L(wp)); writel(mask_l & 0xffffffff, source->base + CCN_XP_DT_CMP_MASK_L(wp));
writel((mask_l >> 32) & 0xefffffff, writel((mask_l >> 32) & 0x7fffffff,
source->base + CCN_XP_DT_CMP_MASK_L(wp) + 4); source->base + CCN_XP_DT_CMP_MASK_L(wp) + 4);
writel(mask_h & 0xffffffff, source->base + CCN_XP_DT_CMP_MASK_H(wp)); writel(mask_h & 0xffffffff, source->base + CCN_XP_DT_CMP_MASK_H(wp));
writel((mask_h >> 32) & 0x0fffffff, writel((mask_h >> 32) & 0x0fffffff,
@ -1014,7 +1016,7 @@ static void arm_ccn_pmu_xp_event_config(struct perf_event *event)
hw->event_base = CCN_XP_DT_CONFIG__DT_CFG__XP_PMU_EVENT(hw->config_base); hw->event_base = CCN_XP_DT_CONFIG__DT_CFG__XP_PMU_EVENT(hw->config_base);
id = (CCN_CONFIG_VC(event->attr.config) << 4) | id = (CCN_CONFIG_VC(event->attr.config) << 4) |
(CCN_CONFIG_PORT(event->attr.config) << 3) | (CCN_CONFIG_BUS(event->attr.config) << 3) |
(CCN_CONFIG_EVENT(event->attr.config) << 0); (CCN_CONFIG_EVENT(event->attr.config) << 0);
val = readl(source->base + CCN_XP_PMU_EVENT_SEL); val = readl(source->base + CCN_XP_PMU_EVENT_SEL);
@ -1099,15 +1101,31 @@ static void arm_ccn_pmu_event_config(struct perf_event *event)
spin_unlock(&ccn->dt.config_lock); spin_unlock(&ccn->dt.config_lock);
} }
static int arm_ccn_pmu_active_counters(struct arm_ccn *ccn)
{
return bitmap_weight(ccn->dt.pmu_counters_mask,
CCN_NUM_PMU_EVENT_COUNTERS + 1);
}
static int arm_ccn_pmu_event_add(struct perf_event *event, int flags) static int arm_ccn_pmu_event_add(struct perf_event *event, int flags)
{ {
int err; int err;
struct hw_perf_event *hw = &event->hw; struct hw_perf_event *hw = &event->hw;
struct arm_ccn *ccn = pmu_to_arm_ccn(event->pmu);
err = arm_ccn_pmu_event_alloc(event); err = arm_ccn_pmu_event_alloc(event);
if (err) if (err)
return err; return err;
/*
* Pin the timer, so that the overflows are handled by the chosen
* event->cpu (this is the same one as presented in "cpumask"
* attribute).
*/
if (!ccn->irq && arm_ccn_pmu_active_counters(ccn) == 1)
hrtimer_start(&ccn->dt.hrtimer, arm_ccn_pmu_timer_period(),
HRTIMER_MODE_REL_PINNED);
arm_ccn_pmu_event_config(event); arm_ccn_pmu_event_config(event);
hw->state = PERF_HES_STOPPED; hw->state = PERF_HES_STOPPED;
@ -1120,9 +1138,14 @@ static int arm_ccn_pmu_event_add(struct perf_event *event, int flags)
static void arm_ccn_pmu_event_del(struct perf_event *event, int flags) static void arm_ccn_pmu_event_del(struct perf_event *event, int flags)
{ {
struct arm_ccn *ccn = pmu_to_arm_ccn(event->pmu);
arm_ccn_pmu_event_stop(event, PERF_EF_UPDATE); arm_ccn_pmu_event_stop(event, PERF_EF_UPDATE);
arm_ccn_pmu_event_release(event); arm_ccn_pmu_event_release(event);
if (!ccn->irq && arm_ccn_pmu_active_counters(ccn) == 0)
hrtimer_cancel(&ccn->dt.hrtimer);
} }
static void arm_ccn_pmu_event_read(struct perf_event *event) static void arm_ccn_pmu_event_read(struct perf_event *event)
@ -1130,6 +1153,24 @@ static void arm_ccn_pmu_event_read(struct perf_event *event)
arm_ccn_pmu_event_update(event); arm_ccn_pmu_event_update(event);
} }
static void arm_ccn_pmu_enable(struct pmu *pmu)
{
struct arm_ccn *ccn = pmu_to_arm_ccn(pmu);
u32 val = readl(ccn->dt.base + CCN_DT_PMCR);
val |= CCN_DT_PMCR__PMU_EN;
writel(val, ccn->dt.base + CCN_DT_PMCR);
}
static void arm_ccn_pmu_disable(struct pmu *pmu)
{
struct arm_ccn *ccn = pmu_to_arm_ccn(pmu);
u32 val = readl(ccn->dt.base + CCN_DT_PMCR);
val &= ~CCN_DT_PMCR__PMU_EN;
writel(val, ccn->dt.base + CCN_DT_PMCR);
}
static irqreturn_t arm_ccn_pmu_overflow_handler(struct arm_ccn_dt *dt) static irqreturn_t arm_ccn_pmu_overflow_handler(struct arm_ccn_dt *dt)
{ {
u32 pmovsr = readl(dt->base + CCN_DT_PMOVSR); u32 pmovsr = readl(dt->base + CCN_DT_PMOVSR);
@ -1252,6 +1293,8 @@ static int arm_ccn_pmu_init(struct arm_ccn *ccn)
.start = arm_ccn_pmu_event_start, .start = arm_ccn_pmu_event_start,
.stop = arm_ccn_pmu_event_stop, .stop = arm_ccn_pmu_event_stop,
.read = arm_ccn_pmu_event_read, .read = arm_ccn_pmu_event_read,
.pmu_enable = arm_ccn_pmu_enable,
.pmu_disable = arm_ccn_pmu_disable,
}; };
/* No overflow interrupt? Have to use a timer instead. */ /* No overflow interrupt? Have to use a timer instead. */
@ -1361,6 +1404,8 @@ static int arm_ccn_init_nodes(struct arm_ccn *ccn, int region,
switch (type) { switch (type) {
case CCN_TYPE_MN: case CCN_TYPE_MN:
ccn->mn_id = id;
return 0;
case CCN_TYPE_DT: case CCN_TYPE_DT:
return 0; return 0;
case CCN_TYPE_XP: case CCN_TYPE_XP:
@ -1471,8 +1516,9 @@ static int arm_ccn_probe(struct platform_device *pdev)
/* Can set 'disable' bits, so can acknowledge interrupts */ /* Can set 'disable' bits, so can acknowledge interrupts */
writel(CCN_MN_ERRINT_STATUS__PMU_EVENTS__ENABLE, writel(CCN_MN_ERRINT_STATUS__PMU_EVENTS__ENABLE,
ccn->base + CCN_MN_ERRINT_STATUS); ccn->base + CCN_MN_ERRINT_STATUS);
err = devm_request_irq(ccn->dev, irq, arm_ccn_irq_handler, 0, err = devm_request_irq(ccn->dev, irq, arm_ccn_irq_handler,
dev_name(ccn->dev), ccn); IRQF_NOBALANCING | IRQF_NO_THREAD,
dev_name(ccn->dev), ccn);
if (err) if (err)
return err; return err;
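
The arm-ccn hunks above stop starting the polling hrtimer in event_start() and instead tie it to the first counter added and the last one removed, dropping the old busy-wait on the cycle counter. A minimal sketch of that first-user/last-user pattern, with hypothetical names and a made-up 100 ms period rather than the driver's real one:

#include <linux/hrtimer.h>
#include <linux/bitmap.h>
#include <linux/ktime.h>

#define NUM_COUNTERS 8

struct pmu_state {
        struct hrtimer poll_timer;
        bool have_irq;
        DECLARE_BITMAP(counters, NUM_COUNTERS);
};

static int active_counters(struct pmu_state *st)
{
        return bitmap_weight(st->counters, NUM_COUNTERS);
}

/* From the add() hook: start polling when the first counter appears. */
static void counter_added(struct pmu_state *st)
{
        if (!st->have_irq && active_counters(st) == 1)
                hrtimer_start(&st->poll_timer, ms_to_ktime(100),
                              HRTIMER_MODE_REL_PINNED);
}

/* From the del() hook: stop polling once the last counter is gone. */
static void counter_removed(struct pmu_state *st)
{
        if (!st->have_irq && active_counters(st) == 0)
                hrtimer_cancel(&st->poll_timer);
}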


@ -178,6 +178,7 @@ static int vexpress_config_populate(struct device_node *node)
parent = class_find_device(vexpress_config_class, NULL, bridge, parent = class_find_device(vexpress_config_class, NULL, bridge,
vexpress_config_node_match); vexpress_config_node_match);
of_node_put(bridge);
if (WARN_ON(!parent)) if (WARN_ON(!parent))
return -ENODEV; return -ENODEV;
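
The single added line drops the reference that of_parse_phandle() took once the bridge node is no longer needed. As a generic sketch (hypothetical property name, not this driver's code):

#include <linux/of.h>

static int lookup_bridge(struct device_node *np)
{
        struct device_node *bridge;

        bridge = of_parse_phandle(np, "bridge", 0);
        if (!bridge)
                return -ENODEV;

        /* ... use "bridge" for the lookup ... */

        of_node_put(bridge);    /* balance the reference from of_parse_phandle() */
        return 0;
}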


@ -165,6 +165,12 @@ struct ports_device {
*/ */
struct virtqueue *c_ivq, *c_ovq; struct virtqueue *c_ivq, *c_ovq;
/*
* A control packet buffer for guest->host requests, protected
* by c_ovq_lock.
*/
struct virtio_console_control cpkt;
/* Array of per-port IO virtqueues */ /* Array of per-port IO virtqueues */
struct virtqueue **in_vqs, **out_vqs; struct virtqueue **in_vqs, **out_vqs;
@ -560,28 +566,29 @@ static ssize_t __send_control_msg(struct ports_device *portdev, u32 port_id,
unsigned int event, unsigned int value) unsigned int event, unsigned int value)
{ {
struct scatterlist sg[1]; struct scatterlist sg[1];
struct virtio_console_control cpkt;
struct virtqueue *vq; struct virtqueue *vq;
unsigned int len; unsigned int len;
if (!use_multiport(portdev)) if (!use_multiport(portdev))
return 0; return 0;
cpkt.id = cpu_to_virtio32(portdev->vdev, port_id);
cpkt.event = cpu_to_virtio16(portdev->vdev, event);
cpkt.value = cpu_to_virtio16(portdev->vdev, value);
vq = portdev->c_ovq; vq = portdev->c_ovq;
sg_init_one(sg, &cpkt, sizeof(cpkt));
spin_lock(&portdev->c_ovq_lock); spin_lock(&portdev->c_ovq_lock);
if (virtqueue_add_outbuf(vq, sg, 1, &cpkt, GFP_ATOMIC) == 0) {
portdev->cpkt.id = cpu_to_virtio32(portdev->vdev, port_id);
portdev->cpkt.event = cpu_to_virtio16(portdev->vdev, event);
portdev->cpkt.value = cpu_to_virtio16(portdev->vdev, value);
sg_init_one(sg, &portdev->cpkt, sizeof(struct virtio_console_control));
if (virtqueue_add_outbuf(vq, sg, 1, &portdev->cpkt, GFP_ATOMIC) == 0) {
virtqueue_kick(vq); virtqueue_kick(vq);
while (!virtqueue_get_buf(vq, &len) while (!virtqueue_get_buf(vq, &len)
&& !virtqueue_is_broken(vq)) && !virtqueue_is_broken(vq))
cpu_relax(); cpu_relax();
} }
spin_unlock(&portdev->c_ovq_lock); spin_unlock(&portdev->c_ovq_lock);
return 0; return 0;
} }
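
The virtio_console change moves the control packet off the stack: the buffer handed to virtqueue_add_outbuf() may be DMA-mapped and must stay valid until the host consumes it, so it now lives in the device structure and is serialized by c_ovq_lock. A condensed sketch of the fixed pattern, assuming a simplified device struct:

#include <linux/kernel.h>
#include <linux/spinlock.h>
#include <linux/scatterlist.h>
#include <linux/virtio.h>
#include <linux/virtio_config.h>
#include <linux/virtio_console.h>

struct ctrl_dev {
        struct virtio_device *vdev;
        struct virtqueue *c_ovq;
        spinlock_t c_ovq_lock;
        struct virtio_console_control cpkt; /* lives as long as the device */
};

static void send_ctrl(struct ctrl_dev *d, u32 id, u16 event, u16 value)
{
        struct scatterlist sg[1];
        unsigned int len;

        spin_lock(&d->c_ovq_lock);
        d->cpkt.id = cpu_to_virtio32(d->vdev, id);
        d->cpkt.event = cpu_to_virtio16(d->vdev, event);
        d->cpkt.value = cpu_to_virtio16(d->vdev, value);
        sg_init_one(sg, &d->cpkt, sizeof(d->cpkt));

        if (virtqueue_add_outbuf(d->c_ovq, sg, 1, &d->cpkt, GFP_ATOMIC) == 0) {
                virtqueue_kick(d->c_ovq);
                while (!virtqueue_get_buf(d->c_ovq, &len) &&
                       !virtqueue_is_broken(d->c_ovq))
                        cpu_relax();
        }
        spin_unlock(&d->c_ovq_lock);
}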


@ -556,7 +556,10 @@ static int aead_set_sh_desc(struct crypto_aead *aead)
/* Read and write assoclen bytes */ /* Read and write assoclen bytes */
append_math_add(desc, VARSEQINLEN, ZERO, REG3, CAAM_CMD_SZ); append_math_add(desc, VARSEQINLEN, ZERO, REG3, CAAM_CMD_SZ);
append_math_add(desc, VARSEQOUTLEN, ZERO, REG3, CAAM_CMD_SZ); if (alg->caam.geniv)
append_math_add_imm_u32(desc, VARSEQOUTLEN, REG3, IMM, ivsize);
else
append_math_add(desc, VARSEQOUTLEN, ZERO, REG3, CAAM_CMD_SZ);
/* Skip assoc data */ /* Skip assoc data */
append_seq_fifo_store(desc, 0, FIFOST_TYPE_SKIP | FIFOLDST_VLF); append_seq_fifo_store(desc, 0, FIFOST_TYPE_SKIP | FIFOLDST_VLF);
@ -565,6 +568,14 @@ static int aead_set_sh_desc(struct crypto_aead *aead)
append_seq_fifo_load(desc, 0, FIFOLD_CLASS_CLASS2 | FIFOLD_TYPE_MSG | append_seq_fifo_load(desc, 0, FIFOLD_CLASS_CLASS2 | FIFOLD_TYPE_MSG |
KEY_VLF); KEY_VLF);
if (alg->caam.geniv) {
append_seq_load(desc, ivsize, LDST_CLASS_1_CCB |
LDST_SRCDST_BYTE_CONTEXT |
(ctx1_iv_off << LDST_OFFSET_SHIFT));
append_move(desc, MOVE_SRC_CLASS1CTX | MOVE_DEST_CLASS2INFIFO |
(ctx1_iv_off << MOVE_OFFSET_SHIFT) | ivsize);
}
/* Load Counter into CONTEXT1 reg */ /* Load Counter into CONTEXT1 reg */
if (is_rfc3686) if (is_rfc3686)
append_load_imm_u32(desc, be32_to_cpu(1), LDST_IMM | append_load_imm_u32(desc, be32_to_cpu(1), LDST_IMM |
@ -2150,7 +2161,7 @@ static void init_authenc_job(struct aead_request *req,
init_aead_job(req, edesc, all_contig, encrypt); init_aead_job(req, edesc, all_contig, encrypt);
if (ivsize && (is_rfc3686 || !(alg->caam.geniv && encrypt))) if (ivsize && ((is_rfc3686 && encrypt) || !alg->caam.geniv))
append_load_as_imm(desc, req->iv, ivsize, append_load_as_imm(desc, req->iv, ivsize,
LDST_CLASS_1_CCB | LDST_CLASS_1_CCB |
LDST_SRCDST_BYTE_CONTEXT | LDST_SRCDST_BYTE_CONTEXT |
@ -2537,20 +2548,6 @@ static int aead_decrypt(struct aead_request *req)
return ret; return ret;
} }
static int aead_givdecrypt(struct aead_request *req)
{
struct crypto_aead *aead = crypto_aead_reqtfm(req);
unsigned int ivsize = crypto_aead_ivsize(aead);
if (req->cryptlen < ivsize)
return -EINVAL;
req->cryptlen -= ivsize;
req->assoclen += ivsize;
return aead_decrypt(req);
}
/* /*
* allocate and map the ablkcipher extended descriptor for ablkcipher * allocate and map the ablkcipher extended descriptor for ablkcipher
*/ */
@ -3210,7 +3207,7 @@ static struct caam_aead_alg driver_aeads[] = {
.setkey = aead_setkey, .setkey = aead_setkey,
.setauthsize = aead_setauthsize, .setauthsize = aead_setauthsize,
.encrypt = aead_encrypt, .encrypt = aead_encrypt,
.decrypt = aead_givdecrypt, .decrypt = aead_decrypt,
.ivsize = AES_BLOCK_SIZE, .ivsize = AES_BLOCK_SIZE,
.maxauthsize = MD5_DIGEST_SIZE, .maxauthsize = MD5_DIGEST_SIZE,
}, },
@ -3256,7 +3253,7 @@ static struct caam_aead_alg driver_aeads[] = {
.setkey = aead_setkey, .setkey = aead_setkey,
.setauthsize = aead_setauthsize, .setauthsize = aead_setauthsize,
.encrypt = aead_encrypt, .encrypt = aead_encrypt,
.decrypt = aead_givdecrypt, .decrypt = aead_decrypt,
.ivsize = AES_BLOCK_SIZE, .ivsize = AES_BLOCK_SIZE,
.maxauthsize = SHA1_DIGEST_SIZE, .maxauthsize = SHA1_DIGEST_SIZE,
}, },
@ -3302,7 +3299,7 @@ static struct caam_aead_alg driver_aeads[] = {
.setkey = aead_setkey, .setkey = aead_setkey,
.setauthsize = aead_setauthsize, .setauthsize = aead_setauthsize,
.encrypt = aead_encrypt, .encrypt = aead_encrypt,
.decrypt = aead_givdecrypt, .decrypt = aead_decrypt,
.ivsize = AES_BLOCK_SIZE, .ivsize = AES_BLOCK_SIZE,
.maxauthsize = SHA224_DIGEST_SIZE, .maxauthsize = SHA224_DIGEST_SIZE,
}, },
@ -3348,7 +3345,7 @@ static struct caam_aead_alg driver_aeads[] = {
.setkey = aead_setkey, .setkey = aead_setkey,
.setauthsize = aead_setauthsize, .setauthsize = aead_setauthsize,
.encrypt = aead_encrypt, .encrypt = aead_encrypt,
.decrypt = aead_givdecrypt, .decrypt = aead_decrypt,
.ivsize = AES_BLOCK_SIZE, .ivsize = AES_BLOCK_SIZE,
.maxauthsize = SHA256_DIGEST_SIZE, .maxauthsize = SHA256_DIGEST_SIZE,
}, },
@ -3394,7 +3391,7 @@ static struct caam_aead_alg driver_aeads[] = {
.setkey = aead_setkey, .setkey = aead_setkey,
.setauthsize = aead_setauthsize, .setauthsize = aead_setauthsize,
.encrypt = aead_encrypt, .encrypt = aead_encrypt,
.decrypt = aead_givdecrypt, .decrypt = aead_decrypt,
.ivsize = AES_BLOCK_SIZE, .ivsize = AES_BLOCK_SIZE,
.maxauthsize = SHA384_DIGEST_SIZE, .maxauthsize = SHA384_DIGEST_SIZE,
}, },
@ -3440,7 +3437,7 @@ static struct caam_aead_alg driver_aeads[] = {
.setkey = aead_setkey, .setkey = aead_setkey,
.setauthsize = aead_setauthsize, .setauthsize = aead_setauthsize,
.encrypt = aead_encrypt, .encrypt = aead_encrypt,
.decrypt = aead_givdecrypt, .decrypt = aead_decrypt,
.ivsize = AES_BLOCK_SIZE, .ivsize = AES_BLOCK_SIZE,
.maxauthsize = SHA512_DIGEST_SIZE, .maxauthsize = SHA512_DIGEST_SIZE,
}, },
@ -3486,7 +3483,7 @@ static struct caam_aead_alg driver_aeads[] = {
.setkey = aead_setkey, .setkey = aead_setkey,
.setauthsize = aead_setauthsize, .setauthsize = aead_setauthsize,
.encrypt = aead_encrypt, .encrypt = aead_encrypt,
.decrypt = aead_givdecrypt, .decrypt = aead_decrypt,
.ivsize = DES3_EDE_BLOCK_SIZE, .ivsize = DES3_EDE_BLOCK_SIZE,
.maxauthsize = MD5_DIGEST_SIZE, .maxauthsize = MD5_DIGEST_SIZE,
}, },
@ -3534,7 +3531,7 @@ static struct caam_aead_alg driver_aeads[] = {
.setkey = aead_setkey, .setkey = aead_setkey,
.setauthsize = aead_setauthsize, .setauthsize = aead_setauthsize,
.encrypt = aead_encrypt, .encrypt = aead_encrypt,
.decrypt = aead_givdecrypt, .decrypt = aead_decrypt,
.ivsize = DES3_EDE_BLOCK_SIZE, .ivsize = DES3_EDE_BLOCK_SIZE,
.maxauthsize = SHA1_DIGEST_SIZE, .maxauthsize = SHA1_DIGEST_SIZE,
}, },
@ -3582,7 +3579,7 @@ static struct caam_aead_alg driver_aeads[] = {
.setkey = aead_setkey, .setkey = aead_setkey,
.setauthsize = aead_setauthsize, .setauthsize = aead_setauthsize,
.encrypt = aead_encrypt, .encrypt = aead_encrypt,
.decrypt = aead_givdecrypt, .decrypt = aead_decrypt,
.ivsize = DES3_EDE_BLOCK_SIZE, .ivsize = DES3_EDE_BLOCK_SIZE,
.maxauthsize = SHA224_DIGEST_SIZE, .maxauthsize = SHA224_DIGEST_SIZE,
}, },
@ -3630,7 +3627,7 @@ static struct caam_aead_alg driver_aeads[] = {
.setkey = aead_setkey, .setkey = aead_setkey,
.setauthsize = aead_setauthsize, .setauthsize = aead_setauthsize,
.encrypt = aead_encrypt, .encrypt = aead_encrypt,
.decrypt = aead_givdecrypt, .decrypt = aead_decrypt,
.ivsize = DES3_EDE_BLOCK_SIZE, .ivsize = DES3_EDE_BLOCK_SIZE,
.maxauthsize = SHA256_DIGEST_SIZE, .maxauthsize = SHA256_DIGEST_SIZE,
}, },
@ -3678,7 +3675,7 @@ static struct caam_aead_alg driver_aeads[] = {
.setkey = aead_setkey, .setkey = aead_setkey,
.setauthsize = aead_setauthsize, .setauthsize = aead_setauthsize,
.encrypt = aead_encrypt, .encrypt = aead_encrypt,
.decrypt = aead_givdecrypt, .decrypt = aead_decrypt,
.ivsize = DES3_EDE_BLOCK_SIZE, .ivsize = DES3_EDE_BLOCK_SIZE,
.maxauthsize = SHA384_DIGEST_SIZE, .maxauthsize = SHA384_DIGEST_SIZE,
}, },
@ -3726,7 +3723,7 @@ static struct caam_aead_alg driver_aeads[] = {
.setkey = aead_setkey, .setkey = aead_setkey,
.setauthsize = aead_setauthsize, .setauthsize = aead_setauthsize,
.encrypt = aead_encrypt, .encrypt = aead_encrypt,
.decrypt = aead_givdecrypt, .decrypt = aead_decrypt,
.ivsize = DES3_EDE_BLOCK_SIZE, .ivsize = DES3_EDE_BLOCK_SIZE,
.maxauthsize = SHA512_DIGEST_SIZE, .maxauthsize = SHA512_DIGEST_SIZE,
}, },
@ -3772,7 +3769,7 @@ static struct caam_aead_alg driver_aeads[] = {
.setkey = aead_setkey, .setkey = aead_setkey,
.setauthsize = aead_setauthsize, .setauthsize = aead_setauthsize,
.encrypt = aead_encrypt, .encrypt = aead_encrypt,
.decrypt = aead_givdecrypt, .decrypt = aead_decrypt,
.ivsize = DES_BLOCK_SIZE, .ivsize = DES_BLOCK_SIZE,
.maxauthsize = MD5_DIGEST_SIZE, .maxauthsize = MD5_DIGEST_SIZE,
}, },
@ -3818,7 +3815,7 @@ static struct caam_aead_alg driver_aeads[] = {
.setkey = aead_setkey, .setkey = aead_setkey,
.setauthsize = aead_setauthsize, .setauthsize = aead_setauthsize,
.encrypt = aead_encrypt, .encrypt = aead_encrypt,
.decrypt = aead_givdecrypt, .decrypt = aead_decrypt,
.ivsize = DES_BLOCK_SIZE, .ivsize = DES_BLOCK_SIZE,
.maxauthsize = SHA1_DIGEST_SIZE, .maxauthsize = SHA1_DIGEST_SIZE,
}, },
@ -3864,7 +3861,7 @@ static struct caam_aead_alg driver_aeads[] = {
.setkey = aead_setkey, .setkey = aead_setkey,
.setauthsize = aead_setauthsize, .setauthsize = aead_setauthsize,
.encrypt = aead_encrypt, .encrypt = aead_encrypt,
.decrypt = aead_givdecrypt, .decrypt = aead_decrypt,
.ivsize = DES_BLOCK_SIZE, .ivsize = DES_BLOCK_SIZE,
.maxauthsize = SHA224_DIGEST_SIZE, .maxauthsize = SHA224_DIGEST_SIZE,
}, },
@ -3910,7 +3907,7 @@ static struct caam_aead_alg driver_aeads[] = {
.setkey = aead_setkey, .setkey = aead_setkey,
.setauthsize = aead_setauthsize, .setauthsize = aead_setauthsize,
.encrypt = aead_encrypt, .encrypt = aead_encrypt,
.decrypt = aead_givdecrypt, .decrypt = aead_decrypt,
.ivsize = DES_BLOCK_SIZE, .ivsize = DES_BLOCK_SIZE,
.maxauthsize = SHA256_DIGEST_SIZE, .maxauthsize = SHA256_DIGEST_SIZE,
}, },
@ -3956,7 +3953,7 @@ static struct caam_aead_alg driver_aeads[] = {
.setkey = aead_setkey, .setkey = aead_setkey,
.setauthsize = aead_setauthsize, .setauthsize = aead_setauthsize,
.encrypt = aead_encrypt, .encrypt = aead_encrypt,
.decrypt = aead_givdecrypt, .decrypt = aead_decrypt,
.ivsize = DES_BLOCK_SIZE, .ivsize = DES_BLOCK_SIZE,
.maxauthsize = SHA384_DIGEST_SIZE, .maxauthsize = SHA384_DIGEST_SIZE,
}, },
@ -4002,7 +3999,7 @@ static struct caam_aead_alg driver_aeads[] = {
.setkey = aead_setkey, .setkey = aead_setkey,
.setauthsize = aead_setauthsize, .setauthsize = aead_setauthsize,
.encrypt = aead_encrypt, .encrypt = aead_encrypt,
.decrypt = aead_givdecrypt, .decrypt = aead_decrypt,
.ivsize = DES_BLOCK_SIZE, .ivsize = DES_BLOCK_SIZE,
.maxauthsize = SHA512_DIGEST_SIZE, .maxauthsize = SHA512_DIGEST_SIZE,
}, },
@ -4051,7 +4048,7 @@ static struct caam_aead_alg driver_aeads[] = {
.setkey = aead_setkey, .setkey = aead_setkey,
.setauthsize = aead_setauthsize, .setauthsize = aead_setauthsize,
.encrypt = aead_encrypt, .encrypt = aead_encrypt,
.decrypt = aead_givdecrypt, .decrypt = aead_decrypt,
.ivsize = CTR_RFC3686_IV_SIZE, .ivsize = CTR_RFC3686_IV_SIZE,
.maxauthsize = MD5_DIGEST_SIZE, .maxauthsize = MD5_DIGEST_SIZE,
}, },
@ -4102,7 +4099,7 @@ static struct caam_aead_alg driver_aeads[] = {
.setkey = aead_setkey, .setkey = aead_setkey,
.setauthsize = aead_setauthsize, .setauthsize = aead_setauthsize,
.encrypt = aead_encrypt, .encrypt = aead_encrypt,
.decrypt = aead_givdecrypt, .decrypt = aead_decrypt,
.ivsize = CTR_RFC3686_IV_SIZE, .ivsize = CTR_RFC3686_IV_SIZE,
.maxauthsize = SHA1_DIGEST_SIZE, .maxauthsize = SHA1_DIGEST_SIZE,
}, },
@ -4153,7 +4150,7 @@ static struct caam_aead_alg driver_aeads[] = {
.setkey = aead_setkey, .setkey = aead_setkey,
.setauthsize = aead_setauthsize, .setauthsize = aead_setauthsize,
.encrypt = aead_encrypt, .encrypt = aead_encrypt,
.decrypt = aead_givdecrypt, .decrypt = aead_decrypt,
.ivsize = CTR_RFC3686_IV_SIZE, .ivsize = CTR_RFC3686_IV_SIZE,
.maxauthsize = SHA224_DIGEST_SIZE, .maxauthsize = SHA224_DIGEST_SIZE,
}, },
@ -4204,7 +4201,7 @@ static struct caam_aead_alg driver_aeads[] = {
.setkey = aead_setkey, .setkey = aead_setkey,
.setauthsize = aead_setauthsize, .setauthsize = aead_setauthsize,
.encrypt = aead_encrypt, .encrypt = aead_encrypt,
.decrypt = aead_givdecrypt, .decrypt = aead_decrypt,
.ivsize = CTR_RFC3686_IV_SIZE, .ivsize = CTR_RFC3686_IV_SIZE,
.maxauthsize = SHA256_DIGEST_SIZE, .maxauthsize = SHA256_DIGEST_SIZE,
}, },
@ -4255,7 +4252,7 @@ static struct caam_aead_alg driver_aeads[] = {
.setkey = aead_setkey, .setkey = aead_setkey,
.setauthsize = aead_setauthsize, .setauthsize = aead_setauthsize,
.encrypt = aead_encrypt, .encrypt = aead_encrypt,
.decrypt = aead_givdecrypt, .decrypt = aead_decrypt,
.ivsize = CTR_RFC3686_IV_SIZE, .ivsize = CTR_RFC3686_IV_SIZE,
.maxauthsize = SHA384_DIGEST_SIZE, .maxauthsize = SHA384_DIGEST_SIZE,
}, },
@ -4306,7 +4303,7 @@ static struct caam_aead_alg driver_aeads[] = {
.setkey = aead_setkey, .setkey = aead_setkey,
.setauthsize = aead_setauthsize, .setauthsize = aead_setauthsize,
.encrypt = aead_encrypt, .encrypt = aead_encrypt,
.decrypt = aead_givdecrypt, .decrypt = aead_decrypt,
.ivsize = CTR_RFC3686_IV_SIZE, .ivsize = CTR_RFC3686_IV_SIZE,
.maxauthsize = SHA512_DIGEST_SIZE, .maxauthsize = SHA512_DIGEST_SIZE,
}, },


@ -459,7 +459,7 @@ static int __dax_dev_pmd_fault(struct dax_dev *dax_dev,
} }
pgoff = linear_page_index(vma, pmd_addr); pgoff = linear_page_index(vma, pmd_addr);
phys = pgoff_to_phys(dax_dev, pgoff, PAGE_SIZE); phys = pgoff_to_phys(dax_dev, pgoff, PMD_SIZE);
if (phys == -1) { if (phys == -1) {
dev_dbg(dev, "%s: phys_to_pgoff(%#lx) failed\n", __func__, dev_dbg(dev, "%s: phys_to_pgoff(%#lx) failed\n", __func__,
pgoff); pgoff);


@ -709,9 +709,10 @@ static int scpi_probe(struct platform_device *pdev)
struct mbox_client *cl = &pchan->cl; struct mbox_client *cl = &pchan->cl;
struct device_node *shmem = of_parse_phandle(np, "shmem", idx); struct device_node *shmem = of_parse_phandle(np, "shmem", idx);
if (of_address_to_resource(shmem, 0, &res)) { ret = of_address_to_resource(shmem, 0, &res);
of_node_put(shmem);
if (ret) {
dev_err(dev, "failed to get SCPI payload mem resource\n"); dev_err(dev, "failed to get SCPI payload mem resource\n");
ret = -EINVAL;
goto err; goto err;
} }


@ -229,14 +229,14 @@ static int __init dmi_id_init(void)
ret = device_register(dmi_dev); ret = device_register(dmi_dev);
if (ret) if (ret)
goto fail_free_dmi_dev; goto fail_put_dmi_dev;
return 0; return 0;
fail_free_dmi_dev: fail_put_dmi_dev:
kfree(dmi_dev); put_device(dmi_dev);
fail_class_unregister: fail_class_unregister:
class_unregister(&dmi_class); class_unregister(&dmi_class);
return ret; return ret;
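
The dmi-id fix applies the driver-core rule that once device_register() has been called, a failed device must be released with put_device() so its release callback runs, never kfree()'d directly. Minimal sketch with hypothetical names, assuming the class's dev_release() callback frees the structure:

#include <linux/device.h>
#include <linux/slab.h>

static int register_my_dev(struct class *cls, struct device **out)
{
        struct device *dev;
        int ret;

        dev = kzalloc(sizeof(*dev), GFP_KERNEL);
        if (!dev)
                return -ENOMEM;

        dev->class = cls;               /* cls->dev_release() frees dev */
        dev_set_name(dev, "my_dev");

        ret = device_register(dev);
        if (ret) {
                /* registration took a reference: drop it, do not kfree() */
                put_device(dev);
                return ret;
        }

        *out = dev;
        return 0;
}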


@ -1131,6 +1131,7 @@ menu "SPI or I2C GPIO expanders"
config GPIO_MCP23S08 config GPIO_MCP23S08
tristate "Microchip MCP23xxx I/O expander" tristate "Microchip MCP23xxx I/O expander"
depends on OF_GPIO
select GPIOLIB_IRQCHIP select GPIOLIB_IRQCHIP
help help
SPI/I2C driver for Microchip MCP23S08/MCP23S17/MCP23008/MCP23017 SPI/I2C driver for Microchip MCP23S08/MCP23S17/MCP23008/MCP23017


@ -564,7 +564,7 @@ static int mcp23s08_probe_one(struct mcp23s08 *mcp, struct device *dev,
mcp->chip.direction_output = mcp23s08_direction_output; mcp->chip.direction_output = mcp23s08_direction_output;
mcp->chip.set = mcp23s08_set; mcp->chip.set = mcp23s08_set;
mcp->chip.dbg_show = mcp23s08_dbg_show; mcp->chip.dbg_show = mcp23s08_dbg_show;
#ifdef CONFIG_OF #ifdef CONFIG_OF_GPIO
mcp->chip.of_gpio_n_cells = 2; mcp->chip.of_gpio_n_cells = 2;
mcp->chip.of_node = dev->of_node; mcp->chip.of_node = dev->of_node;
#endif #endif


@ -155,7 +155,7 @@ static int sa1100_gpio_irqdomain_map(struct irq_domain *d,
{ {
irq_set_chip_and_handler(irq, &sa1100_gpio_irq_chip, irq_set_chip_and_handler(irq, &sa1100_gpio_irq_chip,
handle_edge_irq); handle_edge_irq);
irq_set_noprobe(irq); irq_set_probe(irq);
return 0; return 0;
} }


@ -16,7 +16,6 @@
#include <linux/errno.h> #include <linux/errno.h>
#include <linux/module.h> #include <linux/module.h>
#include <linux/io.h> #include <linux/io.h>
#include <linux/io-mapping.h>
#include <linux/gpio/consumer.h> #include <linux/gpio/consumer.h>
#include <linux/of.h> #include <linux/of.h>
#include <linux/of_address.h> #include <linux/of_address.h>


@ -643,7 +643,7 @@ static int bcm_kona_i2c_xfer(struct i2c_adapter *adapter,
if (rc < 0) { if (rc < 0) {
dev_err(dev->device, dev_err(dev->device,
"restart cmd failed rc = %d\n", rc); "restart cmd failed rc = %d\n", rc);
goto xfer_send_stop; goto xfer_send_stop;
} }
} }


@ -767,7 +767,7 @@ static int cdns_i2c_setclk(unsigned long clk_in, struct cdns_i2c *id)
* depending on the scaling direction. * depending on the scaling direction.
* *
* Return: NOTIFY_STOP if the rate change should be aborted, NOTIFY_OK * Return: NOTIFY_STOP if the rate change should be aborted, NOTIFY_OK
* to acknowedge the change, NOTIFY_DONE if the notification is * to acknowledge the change, NOTIFY_DONE if the notification is
* considered irrelevant. * considered irrelevant.
*/ */
static int cdns_i2c_clk_notifier_cb(struct notifier_block *nb, unsigned long static int cdns_i2c_clk_notifier_cb(struct notifier_block *nb, unsigned long


@ -367,13 +367,17 @@ int i2c_dw_init(struct dw_i2c_dev *dev)
dev_dbg(dev->dev, "Fast-mode HCNT:LCNT = %d:%d\n", hcnt, lcnt); dev_dbg(dev->dev, "Fast-mode HCNT:LCNT = %d:%d\n", hcnt, lcnt);
/* Configure SDA Hold Time if required */ /* Configure SDA Hold Time if required */
if (dev->sda_hold_time) { reg = dw_readl(dev, DW_IC_COMP_VERSION);
reg = dw_readl(dev, DW_IC_COMP_VERSION); if (reg >= DW_IC_SDA_HOLD_MIN_VERS) {
if (reg >= DW_IC_SDA_HOLD_MIN_VERS) if (dev->sda_hold_time) {
dw_writel(dev, dev->sda_hold_time, DW_IC_SDA_HOLD); dw_writel(dev, dev->sda_hold_time, DW_IC_SDA_HOLD);
else } else {
dev_warn(dev->dev, /* Keep previous hold time setting if no one set it */
"Hardware too old to adjust SDA hold time."); dev->sda_hold_time = dw_readl(dev, DW_IC_SDA_HOLD);
}
} else {
dev_warn(dev->dev,
"Hardware too old to adjust SDA hold time.\n");
} }
/* Configure Tx/Rx FIFO threshold levels */ /* Configure Tx/Rx FIFO threshold levels */


@ -378,7 +378,7 @@ static void rcar_i2c_dma(struct rcar_i2c_priv *priv)
} }
dma_addr = dma_map_single(chan->device->dev, buf, len, dir); dma_addr = dma_map_single(chan->device->dev, buf, len, dir);
if (dma_mapping_error(dev, dma_addr)) { if (dma_mapping_error(chan->device->dev, dma_addr)) {
dev_dbg(dev, "dma map failed, using PIO\n"); dev_dbg(dev, "dma map failed, using PIO\n");
return; return;
} }
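
This rcar hunk (and the matching sh_mobile one further down) passes the same device to dma_mapping_error() that was used for dma_map_single(), i.e. the DMA channel's device rather than the controller's own struct device. Sketch of the correct pairing, simplified and with hypothetical names:

#include <linux/dmaengine.h>
#include <linux/dma-mapping.h>
#include <linux/errno.h>

static int map_for_dma(struct dma_chan *chan, void *buf, size_t len,
                       enum dma_data_direction dir, dma_addr_t *addr)
{
        struct device *dma_dev = chan->device->dev;

        *addr = dma_map_single(dma_dev, buf, len, dir);
        /* check against the device that did the mapping, not the bus device */
        if (dma_mapping_error(dma_dev, *addr))
                return -EIO;    /* caller falls back to PIO */

        return 0;
}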


@ -918,7 +918,7 @@ static void rk3x_i2c_adapt_div(struct rk3x_i2c *i2c, unsigned long clk_rate)
* Code adapted from i2c-cadence.c. * Code adapted from i2c-cadence.c.
* *
* Return: NOTIFY_STOP if the rate change should be aborted, NOTIFY_OK * Return: NOTIFY_STOP if the rate change should be aborted, NOTIFY_OK
* to acknowedge the change, NOTIFY_DONE if the notification is * to acknowledge the change, NOTIFY_DONE if the notification is
* considered irrelevant. * considered irrelevant.
*/ */
static int rk3x_i2c_clk_notifier_cb(struct notifier_block *nb, unsigned long static int rk3x_i2c_clk_notifier_cb(struct notifier_block *nb, unsigned long
@ -1111,6 +1111,15 @@ static int rk3x_i2c_xfer(struct i2c_adapter *adap,
return ret < 0 ? ret : num; return ret < 0 ? ret : num;
} }
static __maybe_unused int rk3x_i2c_resume(struct device *dev)
{
struct rk3x_i2c *i2c = dev_get_drvdata(dev);
rk3x_i2c_adapt_div(i2c, clk_get_rate(i2c->clk));
return 0;
}
static u32 rk3x_i2c_func(struct i2c_adapter *adap) static u32 rk3x_i2c_func(struct i2c_adapter *adap)
{ {
return I2C_FUNC_I2C | I2C_FUNC_SMBUS_EMUL | I2C_FUNC_PROTOCOL_MANGLING; return I2C_FUNC_I2C | I2C_FUNC_SMBUS_EMUL | I2C_FUNC_PROTOCOL_MANGLING;
@ -1334,12 +1343,15 @@ static int rk3x_i2c_remove(struct platform_device *pdev)
return 0; return 0;
} }
static SIMPLE_DEV_PM_OPS(rk3x_i2c_pm_ops, NULL, rk3x_i2c_resume);
static struct platform_driver rk3x_i2c_driver = { static struct platform_driver rk3x_i2c_driver = {
.probe = rk3x_i2c_probe, .probe = rk3x_i2c_probe,
.remove = rk3x_i2c_remove, .remove = rk3x_i2c_remove,
.driver = { .driver = {
.name = "rk3x-i2c", .name = "rk3x-i2c",
.of_match_table = rk3x_i2c_match, .of_match_table = rk3x_i2c_match,
.pm = &rk3x_i2c_pm_ops,
}, },
}; };


@ -610,7 +610,7 @@ static void sh_mobile_i2c_xfer_dma(struct sh_mobile_i2c_data *pd)
return; return;
dma_addr = dma_map_single(chan->device->dev, pd->msg->buf, pd->msg->len, dir); dma_addr = dma_map_single(chan->device->dev, pd->msg->buf, pd->msg->len, dir);
if (dma_mapping_error(pd->dev, dma_addr)) { if (dma_mapping_error(chan->device->dev, dma_addr)) {
dev_dbg(pd->dev, "dma map failed, using PIO\n"); dev_dbg(pd->dev, "dma map failed, using PIO\n");
return; return;
} }


@ -37,8 +37,6 @@ struct i2c_demux_pinctrl_priv {
struct i2c_demux_pinctrl_chan chan[]; struct i2c_demux_pinctrl_chan chan[];
}; };
static struct property status_okay = { .name = "status", .length = 3, .value = "ok" };
static int i2c_demux_master_xfer(struct i2c_adapter *adap, struct i2c_msg msgs[], int num) static int i2c_demux_master_xfer(struct i2c_adapter *adap, struct i2c_msg msgs[], int num)
{ {
struct i2c_demux_pinctrl_priv *priv = adap->algo_data; struct i2c_demux_pinctrl_priv *priv = adap->algo_data;
@ -107,6 +105,7 @@ static int i2c_demux_activate_master(struct i2c_demux_pinctrl_priv *priv, u32 ne
of_changeset_revert(&priv->chan[new_chan].chgset); of_changeset_revert(&priv->chan[new_chan].chgset);
err: err:
dev_err(priv->dev, "failed to setup demux-adapter %d (%d)\n", new_chan, ret); dev_err(priv->dev, "failed to setup demux-adapter %d (%d)\n", new_chan, ret);
priv->cur_chan = -EINVAL;
return ret; return ret;
} }
@ -192,6 +191,7 @@ static int i2c_demux_pinctrl_probe(struct platform_device *pdev)
{ {
struct device_node *np = pdev->dev.of_node; struct device_node *np = pdev->dev.of_node;
struct i2c_demux_pinctrl_priv *priv; struct i2c_demux_pinctrl_priv *priv;
struct property *props;
int num_chan, i, j, err; int num_chan, i, j, err;
num_chan = of_count_phandle_with_args(np, "i2c-parent", NULL); num_chan = of_count_phandle_with_args(np, "i2c-parent", NULL);
@ -202,7 +202,10 @@ static int i2c_demux_pinctrl_probe(struct platform_device *pdev)
priv = devm_kzalloc(&pdev->dev, sizeof(*priv) priv = devm_kzalloc(&pdev->dev, sizeof(*priv)
+ num_chan * sizeof(struct i2c_demux_pinctrl_chan), GFP_KERNEL); + num_chan * sizeof(struct i2c_demux_pinctrl_chan), GFP_KERNEL);
if (!priv)
props = devm_kcalloc(&pdev->dev, num_chan, sizeof(*props), GFP_KERNEL);
if (!priv || !props)
return -ENOMEM; return -ENOMEM;
err = of_property_read_string(np, "i2c-bus-name", &priv->bus_name); err = of_property_read_string(np, "i2c-bus-name", &priv->bus_name);
@ -220,8 +223,12 @@ static int i2c_demux_pinctrl_probe(struct platform_device *pdev)
} }
priv->chan[i].parent_np = adap_np; priv->chan[i].parent_np = adap_np;
props[i].name = devm_kstrdup(&pdev->dev, "status", GFP_KERNEL);
props[i].value = devm_kstrdup(&pdev->dev, "ok", GFP_KERNEL);
props[i].length = 3;
of_changeset_init(&priv->chan[i].chgset); of_changeset_init(&priv->chan[i].chgset);
of_changeset_update_property(&priv->chan[i].chgset, adap_np, &status_okay); of_changeset_update_property(&priv->chan[i].chgset, adap_np, &props[i]);
} }
priv->num_chan = num_chan; priv->num_chan = num_chan;


@ -67,6 +67,9 @@
#define BMC150_ACCEL_REG_PMU_BW 0x10 #define BMC150_ACCEL_REG_PMU_BW 0x10
#define BMC150_ACCEL_DEF_BW 125 #define BMC150_ACCEL_DEF_BW 125
#define BMC150_ACCEL_REG_RESET 0x14
#define BMC150_ACCEL_RESET_VAL 0xB6
#define BMC150_ACCEL_REG_INT_MAP_0 0x19 #define BMC150_ACCEL_REG_INT_MAP_0 0x19
#define BMC150_ACCEL_INT_MAP_0_BIT_SLOPE BIT(2) #define BMC150_ACCEL_INT_MAP_0_BIT_SLOPE BIT(2)
@ -1497,6 +1500,14 @@ static int bmc150_accel_chip_init(struct bmc150_accel_data *data)
int ret, i; int ret, i;
unsigned int val; unsigned int val;
/*
* Reset chip to get it in a known good state. A delay of 1.8ms after
* reset is required according to the data sheets of supported chips.
*/
regmap_write(data->regmap, BMC150_ACCEL_REG_RESET,
BMC150_ACCEL_RESET_VAL);
usleep_range(1800, 2500);
ret = regmap_read(data->regmap, BMC150_ACCEL_REG_CHIP_ID, &val); ret = regmap_read(data->regmap, BMC150_ACCEL_REG_CHIP_ID, &val);
if (ret < 0) { if (ret < 0) {
dev_err(dev, "Error: Reading chip id\n"); dev_err(dev, "Error: Reading chip id\n");


@ -166,6 +166,7 @@ static int kxsd9_read_raw(struct iio_dev *indio_dev,
ret = spi_w8r8(st->us, KXSD9_READ(KXSD9_REG_CTRL_C)); ret = spi_w8r8(st->us, KXSD9_READ(KXSD9_REG_CTRL_C));
if (ret < 0) if (ret < 0)
goto error_ret; goto error_ret;
*val = 0;
*val2 = kxsd9_micro_scales[ret & KXSD9_FS_MASK]; *val2 = kxsd9_micro_scales[ret & KXSD9_FS_MASK];
ret = IIO_VAL_INT_PLUS_MICRO; ret = IIO_VAL_INT_PLUS_MICRO;
break; break;
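
The kxsd9 change matters because IIO_VAL_INT_PLUS_MICRO is rendered as vals[0] plus vals[1] micro-units; filling only *val2 leaves whatever happened to be in *val as the integer part. Tiny illustration of the formatting, using made-up numbers:

        int val = 0;            /* integer part: must be set explicitly */
        int val2 = 11978;       /* micro part, e.g. a scale of 0.011978 */

        /* IIO_VAL_INT_PLUS_MICRO is printed as "%d.%06u" -> "0.011978" */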


@ -56,8 +56,8 @@ static struct {
{HID_USAGE_SENSOR_ALS, 0, 1, 0}, {HID_USAGE_SENSOR_ALS, 0, 1, 0},
{HID_USAGE_SENSOR_ALS, HID_USAGE_SENSOR_UNITS_LUX, 1, 0}, {HID_USAGE_SENSOR_ALS, HID_USAGE_SENSOR_UNITS_LUX, 1, 0},
{HID_USAGE_SENSOR_PRESSURE, 0, 100000, 0}, {HID_USAGE_SENSOR_PRESSURE, 0, 100, 0},
{HID_USAGE_SENSOR_PRESSURE, HID_USAGE_SENSOR_UNITS_PASCAL, 1, 0}, {HID_USAGE_SENSOR_PRESSURE, HID_USAGE_SENSOR_UNITS_PASCAL, 0, 1000},
}; };
static int pow_10(unsigned power) static int pow_10(unsigned power)


@ -110,7 +110,7 @@ ssize_t iio_buffer_read_first_n_outer(struct file *filp, char __user *buf,
DEFINE_WAIT_FUNC(wait, woken_wake_function); DEFINE_WAIT_FUNC(wait, woken_wake_function);
size_t datum_size; size_t datum_size;
size_t to_wait; size_t to_wait;
int ret; int ret = 0;
if (!indio_dev->info) if (!indio_dev->info)
return -ENODEV; return -ENODEV;
@ -153,7 +153,7 @@ ssize_t iio_buffer_read_first_n_outer(struct file *filp, char __user *buf,
ret = rb->access->read_first_n(rb, n, buf); ret = rb->access->read_first_n(rb, n, buf);
if (ret == 0 && (filp->f_flags & O_NONBLOCK)) if (ret == 0 && (filp->f_flags & O_NONBLOCK))
ret = -EAGAIN; ret = -EAGAIN;
} while (ret == 0); } while (ret == 0);
remove_wait_queue(&rb->pollq, &wait); remove_wait_queue(&rb->pollq, &wait);
return ret; return ret;


@ -613,9 +613,8 @@ ssize_t iio_format_value(char *buf, unsigned int type, int size, int *vals)
return sprintf(buf, "%d.%09u\n", vals[0], vals[1]); return sprintf(buf, "%d.%09u\n", vals[0], vals[1]);
case IIO_VAL_FRACTIONAL: case IIO_VAL_FRACTIONAL:
tmp = div_s64((s64)vals[0] * 1000000000LL, vals[1]); tmp = div_s64((s64)vals[0] * 1000000000LL, vals[1]);
vals[1] = do_div(tmp, 1000000000LL); vals[0] = (int)div_s64_rem(tmp, 1000000000, &vals[1]);
vals[0] = tmp; return sprintf(buf, "%d.%09u\n", vals[0], abs(vals[1]));
return sprintf(buf, "%d.%09u\n", vals[0], vals[1]);
case IIO_VAL_FRACTIONAL_LOG2: case IIO_VAL_FRACTIONAL_LOG2:
tmp = (s64)vals[0] * 1000000000LL >> vals[1]; tmp = (s64)vals[0] * 1000000000LL >> vals[1];
vals[1] = do_div(tmp, 1000000000LL); vals[1] = do_div(tmp, 1000000000LL);
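
The IIO_VAL_FRACTIONAL hunk swaps do_div(), which is only defined for unsigned dividends, for div_s64_rem() so negative fractions keep their sign. A short worked example with assumed inputs (-3 / 2):

#include <linux/kernel.h>
#include <linux/math64.h>

/* -3/2 rendered the way iio_format_value() now does it. */
static int demo_fractional(char *buf)
{
        s64 tmp = div_s64((s64)-3 * 1000000000LL, 2);   /* -1500000000 */
        s32 rem;
        int ipart = (int)div_s64_rem(tmp, 1000000000, &rem); /* -1, rem -500000000 */

        return sprintf(buf, "%d.%09u\n", ipart, abs(rem));   /* "-1.500000000" */
}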


@ -106,7 +106,6 @@ struct mcast_group {
atomic_t refcount; atomic_t refcount;
enum mcast_group_state state; enum mcast_group_state state;
struct ib_sa_query *query; struct ib_sa_query *query;
int query_id;
u16 pkey_index; u16 pkey_index;
u8 leave_state; u8 leave_state;
int retries; int retries;
@ -340,11 +339,7 @@ static int send_join(struct mcast_group *group, struct mcast_member *member)
member->multicast.comp_mask, member->multicast.comp_mask,
3000, GFP_KERNEL, join_handler, group, 3000, GFP_KERNEL, join_handler, group,
&group->query); &group->query);
if (ret >= 0) { return (ret > 0) ? 0 : ret;
group->query_id = ret;
ret = 0;
}
return ret;
} }
static int send_leave(struct mcast_group *group, u8 leave_state) static int send_leave(struct mcast_group *group, u8 leave_state)
@ -364,11 +359,7 @@ static int send_leave(struct mcast_group *group, u8 leave_state)
IB_SA_MCMEMBER_REC_JOIN_STATE, IB_SA_MCMEMBER_REC_JOIN_STATE,
3000, GFP_KERNEL, leave_handler, 3000, GFP_KERNEL, leave_handler,
group, &group->query); group, &group->query);
if (ret >= 0) { return (ret > 0) ? 0 : ret;
group->query_id = ret;
ret = 0;
}
return ret;
} }
static void join_group(struct mcast_group *group, struct mcast_member *member, static void join_group(struct mcast_group *group, struct mcast_member *member,


@ -683,7 +683,7 @@ static int build_inv_stag(union t4_wr *wqe, struct ib_send_wr *wr,
return 0; return 0;
} }
void _free_qp(struct kref *kref) static void _free_qp(struct kref *kref)
{ {
struct c4iw_qp *qhp; struct c4iw_qp *qhp;


@ -9490,6 +9490,78 @@ static void init_lcb(struct hfi1_devdata *dd)
write_csr(dd, DC_LCB_CFG_TX_FIFOS_RESET, 0x00); write_csr(dd, DC_LCB_CFG_TX_FIFOS_RESET, 0x00);
} }
/*
* Perform a test read on the QSFP. Return 0 on success, -ERRNO
* on error.
*/
static int test_qsfp_read(struct hfi1_pportdata *ppd)
{
int ret;
u8 status;
/* report success if not a QSFP */
if (ppd->port_type != PORT_TYPE_QSFP)
return 0;
/* read byte 2, the status byte */
ret = one_qsfp_read(ppd, ppd->dd->hfi1_id, 2, &status, 1);
if (ret < 0)
return ret;
if (ret != 1)
return -EIO;
return 0; /* success */
}
/*
* Values for QSFP retry.
*
* Give up after 10s (20 x 500ms). The overall timeout was empirically
* arrived at from experience on a large cluster.
*/
#define MAX_QSFP_RETRIES 20
#define QSFP_RETRY_WAIT 500 /* msec */
/*
* Try a QSFP read. If it fails, schedule a retry for later.
* Called on first link activation after driver load.
*/
static void try_start_link(struct hfi1_pportdata *ppd)
{
if (test_qsfp_read(ppd)) {
/* read failed */
if (ppd->qsfp_retry_count >= MAX_QSFP_RETRIES) {
dd_dev_err(ppd->dd, "QSFP not responding, giving up\n");
return;
}
dd_dev_info(ppd->dd,
"QSFP not responding, waiting and retrying %d\n",
(int)ppd->qsfp_retry_count);
ppd->qsfp_retry_count++;
queue_delayed_work(ppd->hfi1_wq, &ppd->start_link_work,
msecs_to_jiffies(QSFP_RETRY_WAIT));
return;
}
ppd->qsfp_retry_count = 0;
/*
* Tune the SerDes to a ballpark setting for optimal signal and bit
* error rate. Needs to be done before starting the link.
*/
tune_serdes(ppd);
start_link(ppd);
}
/*
* Workqueue function to start the link after a delay.
*/
void handle_start_link(struct work_struct *work)
{
struct hfi1_pportdata *ppd = container_of(work, struct hfi1_pportdata,
start_link_work.work);
try_start_link(ppd);
}
int bringup_serdes(struct hfi1_pportdata *ppd) int bringup_serdes(struct hfi1_pportdata *ppd)
{ {
struct hfi1_devdata *dd = ppd->dd; struct hfi1_devdata *dd = ppd->dd;
@ -9525,14 +9597,8 @@ int bringup_serdes(struct hfi1_pportdata *ppd)
set_qsfp_int_n(ppd, 1); set_qsfp_int_n(ppd, 1);
} }
/* try_start_link(ppd);
* Tune the SerDes to a ballpark setting for return 0;
* optimal signal and bit error rate
* Needs to be done before starting the link
*/
tune_serdes(ppd);
return start_link(ppd);
} }
void hfi1_quiet_serdes(struct hfi1_pportdata *ppd) void hfi1_quiet_serdes(struct hfi1_pportdata *ppd)
@ -9549,6 +9615,10 @@ void hfi1_quiet_serdes(struct hfi1_pportdata *ppd)
ppd->driver_link_ready = 0; ppd->driver_link_ready = 0;
ppd->link_enabled = 0; ppd->link_enabled = 0;
ppd->qsfp_retry_count = MAX_QSFP_RETRIES; /* prevent more retries */
flush_delayed_work(&ppd->start_link_work);
cancel_delayed_work_sync(&ppd->start_link_work);
ppd->offline_disabled_reason = ppd->offline_disabled_reason =
HFI1_ODR_MASK(OPA_LINKDOWN_REASON_SMA_DISABLED); HFI1_ODR_MASK(OPA_LINKDOWN_REASON_SMA_DISABLED);
set_link_down_reason(ppd, OPA_LINKDOWN_REASON_SMA_DISABLED, 0, set_link_down_reason(ppd, OPA_LINKDOWN_REASON_SMA_DISABLED, 0,
@ -12865,7 +12935,7 @@ static int set_up_interrupts(struct hfi1_devdata *dd)
*/ */
static int set_up_context_variables(struct hfi1_devdata *dd) static int set_up_context_variables(struct hfi1_devdata *dd)
{ {
int num_kernel_contexts; unsigned long num_kernel_contexts;
int total_contexts; int total_contexts;
int ret; int ret;
unsigned ngroups; unsigned ngroups;
@ -12894,9 +12964,9 @@ static int set_up_context_variables(struct hfi1_devdata *dd)
*/ */
if (num_kernel_contexts > (dd->chip_send_contexts - num_vls - 1)) { if (num_kernel_contexts > (dd->chip_send_contexts - num_vls - 1)) {
dd_dev_err(dd, dd_dev_err(dd,
"Reducing # kernel rcv contexts to: %d, from %d\n", "Reducing # kernel rcv contexts to: %d, from %lu\n",
(int)(dd->chip_send_contexts - num_vls - 1), (int)(dd->chip_send_contexts - num_vls - 1),
(int)num_kernel_contexts); num_kernel_contexts);
num_kernel_contexts = dd->chip_send_contexts - num_vls - 1; num_kernel_contexts = dd->chip_send_contexts - num_vls - 1;
} }
/* /*
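
The QSFP additions above are a bounded delayed-work retry loop: re-queue the same work after a fixed delay until the access succeeds or the retry budget is spent, and make teardown safe by exhausting the budget before cancelling. Condensed sketch of that shape with hypothetical names (the work item is set up elsewhere with INIT_DELAYED_WORK()):

#include <linux/workqueue.h>
#include <linux/jiffies.h>

#define MAX_RETRIES     20
#define RETRY_WAIT_MS   500

struct link_state {
        struct workqueue_struct *wq;
        struct delayed_work start_work;
        unsigned int retry_count;
};

int try_hw_access(struct link_state *ls);       /* returns 0 on success */
void bring_up_link(struct link_state *ls);

static void start_link_worker(struct work_struct *work)
{
        struct link_state *ls = container_of(work, struct link_state,
                                             start_work.work);

        if (try_hw_access(ls)) {
                if (ls->retry_count++ >= MAX_RETRIES)
                        return;                 /* give up */
                queue_delayed_work(ls->wq, &ls->start_work,
                                   msecs_to_jiffies(RETRY_WAIT_MS));
                return;
        }
        ls->retry_count = 0;
        bring_up_link(ls);
}

static void shutdown_link(struct link_state *ls)
{
        ls->retry_count = MAX_RETRIES;          /* stop further re-queues */
        cancel_delayed_work_sync(&ls->start_work);
}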


@ -706,6 +706,7 @@ void handle_link_up(struct work_struct *work);
void handle_link_down(struct work_struct *work); void handle_link_down(struct work_struct *work);
void handle_link_downgrade(struct work_struct *work); void handle_link_downgrade(struct work_struct *work);
void handle_link_bounce(struct work_struct *work); void handle_link_bounce(struct work_struct *work);
void handle_start_link(struct work_struct *work);
void handle_sma_message(struct work_struct *work); void handle_sma_message(struct work_struct *work);
void reset_qsfp(struct hfi1_pportdata *ppd); void reset_qsfp(struct hfi1_pportdata *ppd);
void qsfp_event(struct work_struct *work); void qsfp_event(struct work_struct *work);


@ -59,6 +59,40 @@
static struct dentry *hfi1_dbg_root; static struct dentry *hfi1_dbg_root;
/* wrappers to enforce srcu in seq file */
static ssize_t hfi1_seq_read(
struct file *file,
char __user *buf,
size_t size,
loff_t *ppos)
{
struct dentry *d = file->f_path.dentry;
int srcu_idx;
ssize_t r;
r = debugfs_use_file_start(d, &srcu_idx);
if (likely(!r))
r = seq_read(file, buf, size, ppos);
debugfs_use_file_finish(srcu_idx);
return r;
}
static loff_t hfi1_seq_lseek(
struct file *file,
loff_t offset,
int whence)
{
struct dentry *d = file->f_path.dentry;
int srcu_idx;
loff_t r;
r = debugfs_use_file_start(d, &srcu_idx);
if (likely(!r))
r = seq_lseek(file, offset, whence);
debugfs_use_file_finish(srcu_idx);
return r;
}
#define private2dd(file) (file_inode(file)->i_private) #define private2dd(file) (file_inode(file)->i_private)
#define private2ppd(file) (file_inode(file)->i_private) #define private2ppd(file) (file_inode(file)->i_private)
@ -87,8 +121,8 @@ static int _##name##_open(struct inode *inode, struct file *s) \
static const struct file_operations _##name##_file_ops = { \ static const struct file_operations _##name##_file_ops = { \
.owner = THIS_MODULE, \ .owner = THIS_MODULE, \
.open = _##name##_open, \ .open = _##name##_open, \
.read = seq_read, \ .read = hfi1_seq_read, \
.llseek = seq_lseek, \ .llseek = hfi1_seq_lseek, \
.release = seq_release \ .release = seq_release \
} }
@ -105,11 +139,9 @@ do { \
DEBUGFS_FILE_CREATE(#name, parent, data, &_##name##_file_ops, S_IRUGO) DEBUGFS_FILE_CREATE(#name, parent, data, &_##name##_file_ops, S_IRUGO)
static void *_opcode_stats_seq_start(struct seq_file *s, loff_t *pos) static void *_opcode_stats_seq_start(struct seq_file *s, loff_t *pos)
__acquires(RCU)
{ {
struct hfi1_opcode_stats_perctx *opstats; struct hfi1_opcode_stats_perctx *opstats;
rcu_read_lock();
if (*pos >= ARRAY_SIZE(opstats->stats)) if (*pos >= ARRAY_SIZE(opstats->stats))
return NULL; return NULL;
return pos; return pos;
@ -126,9 +158,7 @@ static void *_opcode_stats_seq_next(struct seq_file *s, void *v, loff_t *pos)
} }
static void _opcode_stats_seq_stop(struct seq_file *s, void *v) static void _opcode_stats_seq_stop(struct seq_file *s, void *v)
__releases(RCU)
{ {
rcu_read_unlock();
} }
static int _opcode_stats_seq_show(struct seq_file *s, void *v) static int _opcode_stats_seq_show(struct seq_file *s, void *v)
@ -285,12 +315,10 @@ DEBUGFS_SEQ_FILE_OPEN(qp_stats)
DEBUGFS_FILE_OPS(qp_stats); DEBUGFS_FILE_OPS(qp_stats);
static void *_sdes_seq_start(struct seq_file *s, loff_t *pos) static void *_sdes_seq_start(struct seq_file *s, loff_t *pos)
__acquires(RCU)
{ {
struct hfi1_ibdev *ibd; struct hfi1_ibdev *ibd;
struct hfi1_devdata *dd; struct hfi1_devdata *dd;
rcu_read_lock();
ibd = (struct hfi1_ibdev *)s->private; ibd = (struct hfi1_ibdev *)s->private;
dd = dd_from_dev(ibd); dd = dd_from_dev(ibd);
if (!dd->per_sdma || *pos >= dd->num_sdma) if (!dd->per_sdma || *pos >= dd->num_sdma)
@ -310,9 +338,7 @@ static void *_sdes_seq_next(struct seq_file *s, void *v, loff_t *pos)
} }
static void _sdes_seq_stop(struct seq_file *s, void *v) static void _sdes_seq_stop(struct seq_file *s, void *v)
__releases(RCU)
{ {
rcu_read_unlock();
} }
static int _sdes_seq_show(struct seq_file *s, void *v) static int _sdes_seq_show(struct seq_file *s, void *v)
@ -339,11 +365,9 @@ static ssize_t dev_counters_read(struct file *file, char __user *buf,
struct hfi1_devdata *dd; struct hfi1_devdata *dd;
ssize_t rval; ssize_t rval;
rcu_read_lock();
dd = private2dd(file); dd = private2dd(file);
avail = hfi1_read_cntrs(dd, NULL, &counters); avail = hfi1_read_cntrs(dd, NULL, &counters);
rval = simple_read_from_buffer(buf, count, ppos, counters, avail); rval = simple_read_from_buffer(buf, count, ppos, counters, avail);
rcu_read_unlock();
return rval; return rval;
} }
@ -356,11 +380,9 @@ static ssize_t dev_names_read(struct file *file, char __user *buf,
struct hfi1_devdata *dd; struct hfi1_devdata *dd;
ssize_t rval; ssize_t rval;
rcu_read_lock();
dd = private2dd(file); dd = private2dd(file);
avail = hfi1_read_cntrs(dd, &names, NULL); avail = hfi1_read_cntrs(dd, &names, NULL);
rval = simple_read_from_buffer(buf, count, ppos, names, avail); rval = simple_read_from_buffer(buf, count, ppos, names, avail);
rcu_read_unlock();
return rval; return rval;
} }
@ -383,11 +405,9 @@ static ssize_t portnames_read(struct file *file, char __user *buf,
struct hfi1_devdata *dd; struct hfi1_devdata *dd;
ssize_t rval; ssize_t rval;
rcu_read_lock();
dd = private2dd(file); dd = private2dd(file);
avail = hfi1_read_portcntrs(dd->pport, &names, NULL); avail = hfi1_read_portcntrs(dd->pport, &names, NULL);
rval = simple_read_from_buffer(buf, count, ppos, names, avail); rval = simple_read_from_buffer(buf, count, ppos, names, avail);
rcu_read_unlock();
return rval; return rval;
} }
@ -400,11 +420,9 @@ static ssize_t portcntrs_debugfs_read(struct file *file, char __user *buf,
struct hfi1_pportdata *ppd; struct hfi1_pportdata *ppd;
ssize_t rval; ssize_t rval;
rcu_read_lock();
ppd = private2ppd(file); ppd = private2ppd(file);
avail = hfi1_read_portcntrs(ppd, NULL, &counters); avail = hfi1_read_portcntrs(ppd, NULL, &counters);
rval = simple_read_from_buffer(buf, count, ppos, counters, avail); rval = simple_read_from_buffer(buf, count, ppos, counters, avail);
rcu_read_unlock();
return rval; return rval;
} }
@ -434,16 +452,13 @@ static ssize_t asic_flags_read(struct file *file, char __user *buf,
int used; int used;
int i; int i;
rcu_read_lock();
ppd = private2ppd(file); ppd = private2ppd(file);
dd = ppd->dd; dd = ppd->dd;
size = PAGE_SIZE; size = PAGE_SIZE;
used = 0; used = 0;
tmp = kmalloc(size, GFP_KERNEL); tmp = kmalloc(size, GFP_KERNEL);
if (!tmp) { if (!tmp)
rcu_read_unlock();
return -ENOMEM; return -ENOMEM;
}
scratch0 = read_csr(dd, ASIC_CFG_SCRATCH); scratch0 = read_csr(dd, ASIC_CFG_SCRATCH);
used += scnprintf(tmp + used, size - used, used += scnprintf(tmp + used, size - used,
@ -470,7 +485,6 @@ static ssize_t asic_flags_read(struct file *file, char __user *buf,
used += scnprintf(tmp + used, size - used, "Write bits to clear\n"); used += scnprintf(tmp + used, size - used, "Write bits to clear\n");
ret = simple_read_from_buffer(buf, count, ppos, tmp, used); ret = simple_read_from_buffer(buf, count, ppos, tmp, used);
rcu_read_unlock();
kfree(tmp); kfree(tmp);
return ret; return ret;
} }
@ -486,15 +500,12 @@ static ssize_t asic_flags_write(struct file *file, const char __user *buf,
u64 scratch0; u64 scratch0;
u64 clear; u64 clear;
rcu_read_lock();
ppd = private2ppd(file); ppd = private2ppd(file);
dd = ppd->dd; dd = ppd->dd;
buff = kmalloc(count + 1, GFP_KERNEL); buff = kmalloc(count + 1, GFP_KERNEL);
if (!buff) { if (!buff)
ret = -ENOMEM; return -ENOMEM;
goto do_return;
}
ret = copy_from_user(buff, buf, count); ret = copy_from_user(buff, buf, count);
if (ret > 0) { if (ret > 0) {
@ -527,8 +538,6 @@ static ssize_t asic_flags_write(struct file *file, const char __user *buf,
do_free: do_free:
kfree(buff); kfree(buff);
do_return:
rcu_read_unlock();
return ret; return ret;
} }
@ -542,18 +551,14 @@ static ssize_t qsfp_debugfs_dump(struct file *file, char __user *buf,
char *tmp; char *tmp;
int ret; int ret;
rcu_read_lock();
ppd = private2ppd(file); ppd = private2ppd(file);
tmp = kmalloc(PAGE_SIZE, GFP_KERNEL); tmp = kmalloc(PAGE_SIZE, GFP_KERNEL);
if (!tmp) { if (!tmp)
rcu_read_unlock();
return -ENOMEM; return -ENOMEM;
}
ret = qsfp_dump(ppd, tmp, PAGE_SIZE); ret = qsfp_dump(ppd, tmp, PAGE_SIZE);
if (ret > 0) if (ret > 0)
ret = simple_read_from_buffer(buf, count, ppos, tmp, ret); ret = simple_read_from_buffer(buf, count, ppos, tmp, ret);
rcu_read_unlock();
kfree(tmp); kfree(tmp);
return ret; return ret;
} }
@ -569,7 +574,6 @@ static ssize_t __i2c_debugfs_write(struct file *file, const char __user *buf,
int offset; int offset;
int total_written; int total_written;
rcu_read_lock();
ppd = private2ppd(file); ppd = private2ppd(file);
/* byte offset format: [offsetSize][i2cAddr][offsetHigh][offsetLow] */ /* byte offset format: [offsetSize][i2cAddr][offsetHigh][offsetLow] */
@ -577,16 +581,12 @@ static ssize_t __i2c_debugfs_write(struct file *file, const char __user *buf,
offset = *ppos & 0xffff; offset = *ppos & 0xffff;
/* explicitly reject invalid address 0 to catch cp and cat */ /* explicitly reject invalid address 0 to catch cp and cat */
if (i2c_addr == 0) { if (i2c_addr == 0)
ret = -EINVAL; return -EINVAL;
goto _return;
}
buff = kmalloc(count, GFP_KERNEL); buff = kmalloc(count, GFP_KERNEL);
if (!buff) { if (!buff)
ret = -ENOMEM; return -ENOMEM;
goto _return;
}
ret = copy_from_user(buff, buf, count); ret = copy_from_user(buff, buf, count);
if (ret > 0) { if (ret > 0) {
@ -606,8 +606,6 @@ static ssize_t __i2c_debugfs_write(struct file *file, const char __user *buf,
_free: _free:
kfree(buff); kfree(buff);
_return:
rcu_read_unlock();
return ret; return ret;
} }
@ -636,7 +634,6 @@ static ssize_t __i2c_debugfs_read(struct file *file, char __user *buf,
int offset; int offset;
int total_read; int total_read;
rcu_read_lock();
ppd = private2ppd(file); ppd = private2ppd(file);
/* byte offset format: [offsetSize][i2cAddr][offsetHigh][offsetLow] */ /* byte offset format: [offsetSize][i2cAddr][offsetHigh][offsetLow] */
@ -644,16 +641,12 @@ static ssize_t __i2c_debugfs_read(struct file *file, char __user *buf,
offset = *ppos & 0xffff; offset = *ppos & 0xffff;
/* explicitly reject invalid address 0 to catch cp and cat */ /* explicitly reject invalid address 0 to catch cp and cat */
if (i2c_addr == 0) { if (i2c_addr == 0)
ret = -EINVAL; return -EINVAL;
goto _return;
}
buff = kmalloc(count, GFP_KERNEL); buff = kmalloc(count, GFP_KERNEL);
if (!buff) { if (!buff)
ret = -ENOMEM; return -ENOMEM;
goto _return;
}
total_read = i2c_read(ppd, target, i2c_addr, offset, buff, count); total_read = i2c_read(ppd, target, i2c_addr, offset, buff, count);
if (total_read < 0) { if (total_read < 0) {
@ -673,8 +666,6 @@ static ssize_t __i2c_debugfs_read(struct file *file, char __user *buf,
_free: _free:
kfree(buff); kfree(buff);
_return:
rcu_read_unlock();
return ret; return ret;
} }
@ -701,26 +692,20 @@ static ssize_t __qsfp_debugfs_write(struct file *file, const char __user *buf,
int ret; int ret;
int total_written; int total_written;
rcu_read_lock(); if (*ppos + count > QSFP_PAGESIZE * 4) /* base page + page00-page03 */
if (*ppos + count > QSFP_PAGESIZE * 4) { /* base page + page00-page03 */ return -EINVAL;
ret = -EINVAL;
goto _return;
}
ppd = private2ppd(file); ppd = private2ppd(file);
buff = kmalloc(count, GFP_KERNEL); buff = kmalloc(count, GFP_KERNEL);
if (!buff) { if (!buff)
ret = -ENOMEM; return -ENOMEM;
goto _return;
}
ret = copy_from_user(buff, buf, count); ret = copy_from_user(buff, buf, count);
if (ret > 0) { if (ret > 0) {
ret = -EFAULT; ret = -EFAULT;
goto _free; goto _free;
} }
total_written = qsfp_write(ppd, target, *ppos, buff, count); total_written = qsfp_write(ppd, target, *ppos, buff, count);
if (total_written < 0) { if (total_written < 0) {
ret = total_written; ret = total_written;
@ -733,8 +718,6 @@ static ssize_t __qsfp_debugfs_write(struct file *file, const char __user *buf,
_free: _free:
kfree(buff); kfree(buff);
_return:
rcu_read_unlock();
return ret; return ret;
} }
@ -761,7 +744,6 @@ static ssize_t __qsfp_debugfs_read(struct file *file, char __user *buf,
int ret; int ret;
int total_read; int total_read;
rcu_read_lock();
if (*ppos + count > QSFP_PAGESIZE * 4) { /* base page + page00-page03 */ if (*ppos + count > QSFP_PAGESIZE * 4) { /* base page + page00-page03 */
ret = -EINVAL; ret = -EINVAL;
goto _return; goto _return;
@ -794,7 +776,6 @@ static ssize_t __qsfp_debugfs_read(struct file *file, char __user *buf,
_free: _free:
kfree(buff); kfree(buff);
_return: _return:
rcu_read_unlock();
return ret; return ret;
} }
@ -1010,7 +991,6 @@ void hfi1_dbg_ibdev_exit(struct hfi1_ibdev *ibd)
debugfs_remove_recursive(ibd->hfi1_ibdev_dbg); debugfs_remove_recursive(ibd->hfi1_ibdev_dbg);
out: out:
ibd->hfi1_ibdev_dbg = NULL; ibd->hfi1_ibdev_dbg = NULL;
synchronize_rcu();
} }
/* /*
@ -1035,9 +1015,7 @@ static const char * const hfi1_statnames[] = {
}; };
static void *_driver_stats_names_seq_start(struct seq_file *s, loff_t *pos) static void *_driver_stats_names_seq_start(struct seq_file *s, loff_t *pos)
__acquires(RCU)
{ {
rcu_read_lock();
if (*pos >= ARRAY_SIZE(hfi1_statnames)) if (*pos >= ARRAY_SIZE(hfi1_statnames))
return NULL; return NULL;
return pos; return pos;
@ -1055,9 +1033,7 @@ static void *_driver_stats_names_seq_next(
} }
static void _driver_stats_names_seq_stop(struct seq_file *s, void *v) static void _driver_stats_names_seq_stop(struct seq_file *s, void *v)
__releases(RCU)
{ {
rcu_read_unlock();
} }
static int _driver_stats_names_seq_show(struct seq_file *s, void *v) static int _driver_stats_names_seq_show(struct seq_file *s, void *v)
@ -1073,9 +1049,7 @@ DEBUGFS_SEQ_FILE_OPEN(driver_stats_names)
DEBUGFS_FILE_OPS(driver_stats_names); DEBUGFS_FILE_OPS(driver_stats_names);
static void *_driver_stats_seq_start(struct seq_file *s, loff_t *pos) static void *_driver_stats_seq_start(struct seq_file *s, loff_t *pos)
__acquires(RCU)
{ {
rcu_read_lock();
if (*pos >= ARRAY_SIZE(hfi1_statnames)) if (*pos >= ARRAY_SIZE(hfi1_statnames))
return NULL; return NULL;
return pos; return pos;
@ -1090,9 +1064,7 @@ static void *_driver_stats_seq_next(struct seq_file *s, void *v, loff_t *pos)
} }
static void _driver_stats_seq_stop(struct seq_file *s, void *v) static void _driver_stats_seq_stop(struct seq_file *s, void *v)
__releases(RCU)
{ {
rcu_read_unlock();
} }
static u64 hfi1_sps_ints(void) static u64 hfi1_sps_ints(void)

View File

@@ -605,6 +605,7 @@ struct hfi1_pportdata {
 	struct work_struct freeze_work;
 	struct work_struct link_downgrade_work;
 	struct work_struct link_bounce_work;
+	struct delayed_work start_link_work;
 	/* host link state variables */
 	struct mutex hls_lock;
 	u32 host_link_state;
@@ -659,6 +660,7 @@ struct hfi1_pportdata {
 	u8 linkinit_reason;
 	u8 local_tx_rate;	/* rate given to 8051 firmware */
 	u8 last_pstate;	/* info only */
+	u8 qsfp_retry_count;
 	/* placeholders for IB MAD packet settings */
 	u8 overrun_threshold;
@@ -1804,7 +1806,7 @@ extern unsigned int hfi1_max_mtu;
 extern unsigned int hfi1_cu;
 extern unsigned int user_credit_return_threshold;
 extern int num_user_contexts;
-extern unsigned n_krcvqs;
+extern unsigned long n_krcvqs;
 extern uint krcvqs[];
 extern int krcvqsset;
 extern uint kdeth_qp;

View File

@@ -94,7 +94,7 @@ module_param_array(krcvqs, uint, &krcvqsset, S_IRUGO);
 MODULE_PARM_DESC(krcvqs, "Array of the number of non-control kernel receive queues by VL");
 /* computed based on above array */
-unsigned n_krcvqs;
+unsigned long n_krcvqs;
 static unsigned hfi1_rcvarr_split = 25;
 module_param_named(rcvarr_split, hfi1_rcvarr_split, uint, S_IRUGO);
@@ -500,6 +500,7 @@ void hfi1_init_pportdata(struct pci_dev *pdev, struct hfi1_pportdata *ppd,
 	INIT_WORK(&ppd->link_downgrade_work, handle_link_downgrade);
 	INIT_WORK(&ppd->sma_message_work, handle_sma_message);
 	INIT_WORK(&ppd->link_bounce_work, handle_link_bounce);
+	INIT_DELAYED_WORK(&ppd->start_link_work, handle_start_link);
 	INIT_WORK(&ppd->linkstate_active_work, receive_interrupt_work);
 	INIT_WORK(&ppd->qsfp_info.qsfp_work, qsfp_event);
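
The new INIT_DELAYED_WORK() call, together with the start_link_work and qsfp_retry_count fields added to hfi1_pportdata above, suggests a deferred, bounded retry of link start when the QSFP module is not yet ready; the actual policy lives in handle_start_link(). A rough userspace model of such a retry loop (the limit, delay and readiness check below are invented for illustration, not the driver's values):

#include <stdbool.h>
#include <stdio.h>
#include <unistd.h>

#define MAX_QSFP_RETRIES	3	/* illustrative limit, not the driver's */
#define RETRY_DELAY_SECONDS	1	/* stands in for the delayed-work interval */

/* pretend the QSFP module only becomes ready on the third check */
static bool qsfp_ready(int attempt)
{
	return attempt >= 2;
}

int main(void)
{
	int qsfp_retry_count = 0;

	while (!qsfp_ready(qsfp_retry_count)) {
		if (qsfp_retry_count >= MAX_QSFP_RETRIES) {
			fprintf(stderr, "giving up on link start\n");
			return 1;
		}
		qsfp_retry_count++;
		/* the driver would reschedule its delayed work here instead of sleeping */
		sleep(RETRY_DELAY_SECONDS);
	}
	printf("starting link after %d retries\n", qsfp_retry_count);
	return 0;
}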

View File

@@ -2604,7 +2604,7 @@ static int pma_get_opa_datacounters(struct opa_pma_mad *pmp,
 	u8 lq, num_vls;
 	u8 res_lli, res_ler;
 	u64 port_mask;
-	unsigned long port_num;
+	u8 port_num;
 	unsigned long vl;
 	u32 vl_select_mask;
 	int vfi;
@@ -2638,9 +2638,9 @@ static int pma_get_opa_datacounters(struct opa_pma_mad *pmp,
 	 */
 	port_mask = be64_to_cpu(req->port_select_mask[3]);
 	port_num = find_first_bit((unsigned long *)&port_mask,
-				  sizeof(port_mask));
+				  sizeof(port_mask) * 8);
-	if ((u8)port_num != port) {
+	if (port_num != port) {
 		pmp->mad_hdr.status |= IB_SMP_INVALID_FIELD;
 		return reply((struct ib_mad_hdr *)pmp);
 	}
@@ -2842,7 +2842,7 @@ static int pma_get_opa_porterrors(struct opa_pma_mad *pmp,
 	 */
 	port_mask = be64_to_cpu(req->port_select_mask[3]);
 	port_num = find_first_bit((unsigned long *)&port_mask,
-				  sizeof(port_mask));
+				  sizeof(port_mask) * 8);
 	if (port_num != port) {
 		pmp->mad_hdr.status |= IB_SMP_INVALID_FIELD;
@@ -3015,7 +3015,7 @@ static int pma_get_opa_errorinfo(struct opa_pma_mad *pmp,
 	 */
 	port_mask = be64_to_cpu(req->port_select_mask[3]);
 	port_num = find_first_bit((unsigned long *)&port_mask,
-				  sizeof(port_mask));
+				  sizeof(port_mask) * 8);
 	if (port_num != port) {
 		pmp->mad_hdr.status |= IB_SMP_INVALID_FIELD;
@@ -3252,7 +3252,7 @@ static int pma_set_opa_errorinfo(struct opa_pma_mad *pmp,
 	 */
 	port_mask = be64_to_cpu(req->port_select_mask[3]);
 	port_num = find_first_bit((unsigned long *)&port_mask,
-				  sizeof(port_mask));
+				  sizeof(port_mask) * 8);
 	if (port_num != port) {
 		pmp->mad_hdr.status |= IB_SMP_INVALID_FIELD;
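
All four mad.c hunks above fix the same off-by-a-factor-of-eight bug: find_first_bit() takes its size argument in bits, so passing sizeof(port_mask) (8, the size in bytes) only ever searched the lowest eight bits of the 64-bit port-select mask; sizeof(port_mask) * 8 searches all of it. A standalone illustration of the difference, using a simplified single-word stand-in for the kernel helper (assumes a 64-bit unsigned long):

#include <stdio.h>

/* simplified stand-in for the kernel's find_first_bit(): returns the index
 * of the first set bit in *mask, or "size" if none is set within size bits */
static unsigned long find_first_bit(const unsigned long *mask, unsigned long size)
{
	unsigned long i;

	for (i = 0; i < size; i++)
		if (*mask & (1UL << i))
			return i;
	return size;
}

int main(void)
{
	unsigned long port_mask = 1UL << 40;	/* port selected by a high bit */

	/* size in bytes: only bits 0..7 are searched, so the set bit is missed */
	printf("bytes: %lu\n", find_first_bit(&port_mask, sizeof(port_mask)));
	/* size in bits: the whole mask is searched and bit 40 is found */
	printf("bits:  %lu\n", find_first_bit(&port_mask, sizeof(port_mask) * 8));
	return 0;
}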

View File

@@ -771,6 +771,9 @@ void seg_pio_copy_mid(struct pio_buf *pbuf, const void *from, size_t nbytes)
 		read_extra_bytes(pbuf, from, to_fill);
 		from += to_fill;
 		nbytes -= to_fill;
+		/* may not be enough valid bytes left to align */
+		if (extra > nbytes)
+			extra = nbytes;
 		/* ...now write carry */
 		dest = pbuf->start + (pbuf->qw_written * sizeof(u64));
@@ -798,6 +801,15 @@ void seg_pio_copy_mid(struct pio_buf *pbuf, const void *from, size_t nbytes)
 		read_low_bytes(pbuf, from, extra);
 		from += extra;
 		nbytes -= extra;
+		/*
+		 * If no bytes are left, return early - we are done.
+		 * NOTE: This short-circuit is *required* because
+		 * "extra" may have been reduced in size and "from"
+		 * is not aligned, as required when leaving this
+		 * if block.
+		 */
+		if (nbytes == 0)
+			return;
 	}
 	/* at this point, from is QW aligned */
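
The two pio_copy.c additions protect seg_pio_copy_mid() when the caller supplies fewer bytes than are needed to refill the carry and realign the source pointer: "extra" is clamped to what is actually available, and the function returns early once nbytes reaches zero because "from" was never brought back to QW alignment. A small arithmetic model of that clamp (the byte counts are invented; read_low_bytes() and the carry bookkeeping are only sketched):

#include <stdio.h>

int main(void)
{
	unsigned long carry_bytes = 5;		/* bytes already sitting in the carry */
	unsigned long nbytes = 2;		/* bytes supplied by this segment */
	unsigned long extra = 8 - carry_bytes;	/* bytes needed to complete the QW */

	/* may not be enough valid bytes left to align (first added check) */
	if (extra > nbytes)
		extra = nbytes;

	/* stands in for read_low_bytes(pbuf, from, extra) */
	carry_bytes += extra;
	nbytes -= extra;
	printf("carry holds %lu bytes, %lu bytes left in the segment\n",
	       carry_bytes, nbytes);

	/* second added check: nothing left and "from" is still unaligned,
	 * so the QW-aligned copy loop must not run */
	if (nbytes == 0)
		printf("returning early\n");
	return 0;
}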

View File

@@ -114,6 +114,8 @@ MODULE_PARM_DESC(sdma_comp_size, "Size of User SDMA completion ring. Default: 12
 #define KDETH_HCRC_LOWER_SHIFT 24
 #define KDETH_HCRC_LOWER_MASK 0xff
+#define AHG_KDETH_INTR_SHIFT 12
 #define PBC2LRH(x) ((((x) & 0xfff) << 2) - 4)
 #define LRH2PBC(x) ((((x) >> 2) + 1) & 0xfff)
@@ -1480,7 +1482,8 @@ static int set_txreq_header_ahg(struct user_sdma_request *req,
 		/* Clear KDETH.SH on last packet */
 		if (unlikely(tx->flags & TXREQ_FLAGS_REQ_LAST_PKT)) {
 			val |= cpu_to_le16(KDETH_GET(hdr->kdeth.ver_tid_offset,
-						     INTR) >> 16);
+						     INTR) <<
+					   AHG_KDETH_INTR_SHIFT);
 			val &= cpu_to_le16(~(1U << 13));
 			AHG_HEADER_SET(req->ahg, diff, 7, 16, 14, val);
 		} else {
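
The user_sdma.c change repositions the KDETH interrupt bit in the AHG (automatic header generation) update: KDETH_GET() extracts the field down to bit 0, so the old right-shift by 16 always produced zero; the fix moves the extracted bit up to bit 12 (AHG_KDETH_INTR_SHIFT) of the 16-bit value being written. A small model of extracting a one-bit flag and re-placing it at a different position (the masks and shifts below are illustrative, not the exact KDETH layout):

#include <stdint.h>
#include <stdio.h>

/* illustrative layout: a one-bit INTR flag at bit 28 of a 32-bit header word,
 * and a 16-bit AHG update value that covers the word's upper half (bits 16-31) */
#define INTR_SHIFT		28
#define INTR_MASK		0x1u
#define AHG_KDETH_INTR_SHIFT	12		/* 28 - 16 */

#define GET_INTR(w)	(((w) >> INTR_SHIFT) & INTR_MASK)	/* field ends up at bit 0 */

int main(void)
{
	uint32_t ver_tid_offset = 1u << INTR_SHIFT;	/* INTR set in the header */
	uint16_t val;

	/* broken: the extracted bit already sits at bit 0, so >> 16 discards it */
	val = (uint16_t)(GET_INTR(ver_tid_offset) >> 16);
	printf("old: 0x%04x\n", val);

	/* fixed: shift the bit to where the upper-half AHG value expects it */
	val = (uint16_t)(GET_INTR(ver_tid_offset) << AHG_KDETH_INTR_SHIFT);
	printf("new: 0x%04x\n", val);
	return 0;
}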

View File

@@ -265,6 +265,7 @@ void i40iw_next_iw_state(struct i40iw_qp *iwqp,
 	info.dont_send_fin = false;
 	if (iwqp->sc_qp.term_flags && (state == I40IW_QP_STATE_ERROR))
 		info.reset_tcp_conn = true;
+	iwqp->hw_iwarp_state = state;
 	i40iw_hw_modify_qp(iwqp->iwdev, iwqp, &info, 0);
 }

View File

@@ -100,7 +100,7 @@ static struct notifier_block i40iw_net_notifier = {
 	.notifier_call = i40iw_net_event
 };
-static int i40iw_notifiers_registered;
+static atomic_t i40iw_notifiers_registered;
 /**
  * i40iw_find_i40e_handler - find a handler given a client info
@@ -1342,12 +1342,11 @@ static enum i40iw_status_code i40iw_initialize_dev(struct i40iw_device *iwdev,
  */
 static void i40iw_register_notifiers(void)
 {
-	if (!i40iw_notifiers_registered) {
+	if (atomic_inc_return(&i40iw_notifiers_registered) == 1) {
 		register_inetaddr_notifier(&i40iw_inetaddr_notifier);
 		register_inet6addr_notifier(&i40iw_inetaddr6_notifier);
 		register_netevent_notifier(&i40iw_net_notifier);
 	}
-	i40iw_notifiers_registered++;
 }
 /**
@@ -1429,8 +1428,7 @@ static void i40iw_deinit_device(struct i40iw_device *iwdev, bool reset, bool del
 		i40iw_del_macip_entry(iwdev, (u8)iwdev->mac_ip_table_idx);
 		/* fallthrough */
 	case INET_NOTIFIER:
-		if (i40iw_notifiers_registered > 0) {
-			i40iw_notifiers_registered--;
+		if (!atomic_dec_return(&i40iw_notifiers_registered)) {
 			unregister_netevent_notifier(&i40iw_net_notifier);
 			unregister_inetaddr_notifier(&i40iw_inetaddr_notifier);
 			unregister_inet6addr_notifier(&i40iw_inetaddr6_notifier);
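
The i40iw_main.c hunks convert the notifier bookkeeping from a plain int to atomic_t so that concurrent add/remove paths cannot race on the count: atomic_inc_return() == 1 identifies the first user, which registers the notifiers, and atomic_dec_return() == 0 identifies the last user, which unregisters them. A userspace sketch of the same first-in/last-out pattern using C11 atomics (the register/unregister stubs are placeholders, not the netdev notifier API):

#include <stdatomic.h>
#include <stdio.h>

static atomic_int notifiers_registered;

static void do_register(void)   { puts("register notifiers"); }
static void do_unregister(void) { puts("unregister notifiers"); }

/* first caller performs the registration, later callers only take a reference */
static void register_notifiers(void)
{
	if (atomic_fetch_add(&notifiers_registered, 1) + 1 == 1)
		do_register();
}

/* the last caller to drop its reference performs the unregistration */
static void unregister_notifiers(void)
{
	if (atomic_fetch_sub(&notifiers_registered, 1) - 1 == 0)
		do_unregister();
}

int main(void)
{
	register_notifiers();	/* first device: registers */
	register_notifiers();	/* second device: count only */
	unregister_notifiers();	/* count only */
	unregister_notifiers();	/* last device: unregisters */
	return 0;
}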

View File

@@ -687,12 +687,6 @@ static int mlx4_ib_poll_one(struct mlx4_ib_cq *cq,
 	is_error = (cqe->owner_sr_opcode & MLX4_CQE_OPCODE_MASK) ==
 		MLX4_CQE_OPCODE_ERROR;
-	if (unlikely((cqe->owner_sr_opcode & MLX4_CQE_OPCODE_MASK) == MLX4_OPCODE_NOP &&
-		     is_send)) {
-		pr_warn("Completion for NOP opcode detected!\n");
-		return -EAGAIN;
-	}
 	/* Resize CQ in progress */
 	if (unlikely((cqe->owner_sr_opcode & MLX4_CQE_OPCODE_MASK) == MLX4_CQE_OPCODE_RESIZE)) {
 		if (cq->resize_buf) {
@@ -718,12 +712,6 @@ static int mlx4_ib_poll_one(struct mlx4_ib_cq *cq,
 		 */
 		mqp = __mlx4_qp_lookup(to_mdev(cq->ibcq.device)->dev,
 				       be32_to_cpu(cqe->vlan_my_qpn));
-		if (unlikely(!mqp)) {
-			pr_warn("CQ %06x with entry for unknown QPN %06x\n",
-				cq->mcq.cqn, be32_to_cpu(cqe->vlan_my_qpn) & MLX4_CQE_QPN_MASK);
-			return -EAGAIN;
-		}
 		*cur_qp = to_mibqp(mqp);
 	}
@@ -736,11 +724,6 @@ static int mlx4_ib_poll_one(struct mlx4_ib_cq *cq,
 		/* SRQ is also in the radix tree */
 		msrq = mlx4_srq_lookup(to_mdev(cq->ibcq.device)->dev,
 				       srq_num);
-		if (unlikely(!msrq)) {
-			pr_warn("CQ %06x with entry for unknown SRQN %06x\n",
-				cq->mcq.cqn, srq_num);
-			return -EAGAIN;
-		}
 	}
 	if (is_send) {
@@ -891,7 +874,6 @@ int mlx4_ib_poll_cq(struct ib_cq *ibcq, int num_entries, struct ib_wc *wc)
 	struct mlx4_ib_qp *cur_qp = NULL;
 	unsigned long flags;
 	int npolled;
-	int err = 0;
 	struct mlx4_ib_dev *mdev = to_mdev(cq->ibcq.device);
 	spin_lock_irqsave(&cq->lock, flags);
@@ -901,8 +883,7 @@ int mlx4_ib_poll_cq(struct ib_cq *ibcq, int num_entries, struct ib_wc *wc)
 	}
 	for (npolled = 0; npolled < num_entries; ++npolled) {
-		err = mlx4_ib_poll_one(cq, &cur_qp, wc + npolled);
-		if (err)
+		if (mlx4_ib_poll_one(cq, &cur_qp, wc + npolled))
 			break;
 	}
@@ -911,10 +892,7 @@ int mlx4_ib_poll_cq(struct ib_cq *ibcq, int num_entries, struct ib_wc *wc)
 out:
 	spin_unlock_irqrestore(&cq->lock, flags);
-	if (err == 0 || err == -EAGAIN)
-		return npolled;
-	else
-		return err;
+	return npolled;
 }
 int mlx4_ib_arm_cq(struct ib_cq *ibcq, enum ib_cq_notify_flags flags)

View File

@@ -553,12 +553,6 @@ static int mlx5_poll_one(struct mlx5_ib_cq *cq,
 		 * from the table.
 		 */
 		mqp = __mlx5_qp_lookup(dev->mdev, qpn);
-		if (unlikely(!mqp)) {
-			mlx5_ib_warn(dev, "CQE@CQ %06x for unknown QPN %6x\n",
-				     cq->mcq.cqn, qpn);
-			return -EINVAL;
-		}
 		*cur_qp = to_mibqp(mqp);
 	}
@@ -619,13 +613,6 @@ static int mlx5_poll_one(struct mlx5_ib_cq *cq,
 		read_lock(&dev->mdev->priv.mkey_table.lock);
 		mmkey = __mlx5_mr_lookup(dev->mdev,
 					 mlx5_base_mkey(be32_to_cpu(sig_err_cqe->mkey)));
-		if (unlikely(!mmkey)) {
-			read_unlock(&dev->mdev->priv.mkey_table.lock);
-			mlx5_ib_warn(dev, "CQE@CQ %06x for unknown MR %6x\n",
-				     cq->mcq.cqn, be32_to_cpu(sig_err_cqe->mkey));
-			return -EINVAL;
-		}
 		mr = to_mibmr(mmkey);
 		get_sig_err_item(sig_err_cqe, &mr->sig->err_item);
 		mr->sig->sig_err_exists = true;
@@ -676,7 +663,6 @@ int mlx5_ib_poll_cq(struct ib_cq *ibcq, int num_entries, struct ib_wc *wc)
 	unsigned long flags;
 	int soft_polled = 0;
 	int npolled;
-	int err = 0;
 	spin_lock_irqsave(&cq->lock, flags);
 	if (mdev->state == MLX5_DEVICE_STATE_INTERNAL_ERROR) {
@@ -688,8 +674,7 @@ int mlx5_ib_poll_cq(struct ib_cq *ibcq, int num_entries, struct ib_wc *wc)
 		soft_polled = poll_soft_wc(cq, num_entries, wc);
 	for (npolled = 0; npolled < num_entries - soft_polled; npolled++) {
-		err = mlx5_poll_one(cq, &cur_qp, wc + soft_polled + npolled);
-		if (err)
+		if (mlx5_poll_one(cq, &cur_qp, wc + soft_polled + npolled))
 			break;
 	}
@@ -698,10 +683,7 @@ int mlx5_ib_poll_cq(struct ib_cq *ibcq, int num_entries, struct ib_wc *wc)
 out:
 	spin_unlock_irqrestore(&cq->lock, flags);
-	if (err == 0 || err == -EAGAIN)
-		return soft_polled + npolled;
-	else
-		return err;
+	return soft_polled + npolled;
 }
 int mlx5_ib_arm_cq(struct ib_cq *ibcq, enum ib_cq_notify_flags flags)
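
The mlx4 and mlx5 hunks apply the same convention change: a *_poll_one() failure on a single CQE (unknown QPN, SRQN or MR) no longer aborts ib_poll_cq() with an error code; the loop simply stops and the number of completions already written to the wc array is returned. A stripped-down model of that loop contract (poll_one below is a fake that fails on the third entry; it is not the driver code):

#include <stdio.h>

struct wc { int id; };

/* fake per-CQE poll: reports failure (nonzero) once the ring runs dry at entry 3 */
static int poll_one(int idx, struct wc *entry)
{
	if (idx >= 3)
		return -1;
	entry->id = idx;
	return 0;
}

/* poll_cq never returns an error: it reports how many entries were filled */
static int poll_cq(int num_entries, struct wc *wc)
{
	int npolled;

	for (npolled = 0; npolled < num_entries; ++npolled) {
		if (poll_one(npolled, wc + npolled))
			break;
	}
	return npolled;
}

int main(void)
{
	struct wc wc[8];
	int n = poll_cq(8, wc);

	printf("polled %d completions\n", n);	/* prints 3 */
	return 0;
}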

Some files were not shown because too many files have changed in this diff.