When stopping the port, the CPU notifier is still registered whereas the
mvneta_stop_dev function calls mvneta_percpu_disable() on each CPU. A new
CPU could come online at this point, which could be racy.
This patch adds a flag to prevent the notifier code from being executed
for a new CPU while the port is stopping. It also uses the spinlock
introduced previously. To avoid a deadlock, the lock has been moved
outside the mvneta_percpu_elect function.
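A minimal sketch of the idea, with the flag and lock field names
(is_stopped, lock) to be read as assumptions: the hotplug notifier bails
out under the lock while the port is stopping.

static int mvneta_percpu_notifier(struct notifier_block *nfb,
				  unsigned long action, void *hcpu)
{
	struct mvneta_port *pp = container_of(nfb, struct mvneta_port,
					      cpu_notifier);

	spin_lock(&pp->lock);
	if (pp->is_stopped) {
		/* the port is being stopped: ignore this CPU event */
		spin_unlock(&pp->lock);
		return NOTIFY_OK;
	}
	/* ... regular per-CPU setup/teardown for this hotplug event ... */
	spin_unlock(&pp->lock);

	return NOTIFY_OK;
}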
Signed-off-by: Gregory CLEMENT <gregory.clement@free-electrons.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Electing a CPU must be done in an atomic way: it should be done before or
after the removal/insertion of a CPU, and this function is not reentrant.
During the loop of mvneta_percpu_elect we associate the queues with the
CPUs; if there is a topology change during this loop, then the mapping
between the CPUs and the queues could be wrong. During this loop the
interrupt mask is also updated for each CPU; it should not be changed at
the same time by other parts of the driver.
This patch adds a spinlock to create the needed critical sections.
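For illustration, the critical section around the election, with the lock
field name assumed:

spin_lock(&pp->lock);
mvneta_percpu_elect(pp);
spin_unlock(&pp->lock);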
Signed-off-by: Gregory CLEMENT <gregory.clement@free-electrons.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
In the MVNETA_INTR_* registers, the queue-related fields are per-CPU,
according to the datasheet (comments in [] added by me):
"In a multi-CPU system, bits of RX[or TX] queues for which the access by
the reading[or writing] CPU is disabled are read as 0, and cannot be
cleared[or written]."
That means that each time we want to manipulate these bits we have to do
it on each CPU and not only on the current CPU.
Signed-off-by: Gregory CLEMENT <gregory.clement@free-electrons.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Since commit 2dcf75e279 ("net: mvneta: Associate RX queues with
each CPU"), all the per-CPU irqs are used and disabled at initialization,
so there is no point in disabling them first.
Signed-off-by: Gregory CLEMENT <gregory.clement@free-electrons.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Instead of using a for_each_* loop in which we just call the
smp_call_function_single macro, it is simpler to directly use the
on_each_cpu macro. Moreover, this macro ensures that the calls will be
done all at once.
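As an illustration (the callback name is borrowed from the driver and
should be read as an assumption), the change boils down to:

/* before: one IPI per CPU */
for_each_online_cpu(cpu)
	smp_call_function_single(cpu, mvneta_percpu_mask_interrupt,
				 pp, true);

/* after: a single call run on all online CPUs at once */
on_each_cpu(mvneta_percpu_mask_interrupt, pp, true);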
Suggested-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Gregory CLEMENT <gregory.clement@free-electrons.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
When switching to the management of multiple RX queues, the
mvneta_percpu_elect function was broken. The use of the modulo can lead
to electing the wrong CPU. For example with rxq_def=2, if CPU 2 goes
offline and then online, we ended up with the third RX queue activated at
the same time on CPU 0 and CPU 2, which led to a kernel crash.
With this fix, we don't try to get "the closest" CPU if the default CPU is
gone; now we just use CPU 0, which is always there. Thanks to this, the
code becomes more readable, easier to maintain and more predictable.
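The core of the simplified election, sketched under the assumption that
queue N is serviced by CPU N:

int elected_cpu = 0;	/* CPU 0 can never go offline */

if (cpu_online(pp->rxq_def))
	elected_cpu = pp->rxq_def;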
Cc: stable@vger.kernel.org
Fixes: 2dcf75e279 ("net: mvneta: Associate RX queues with each CPU")
Signed-off-by: Gregory CLEMENT <gregory.clement@free-electrons.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
This patch converts the for_each_present loop into on_each_cpu: instead
of applying to the present CPUs, it will be applied only to the online
CPUs. This fixes a bug reported at
http://thread.gmane.org/gmane.linux.ports.arm.kernel/468173.
Using the on_each_cpu macro (instead of a for_each_* loop) also ensures
that all the calls will be done all at once.
Fixes: f864288544 ("net: mvneta: Statically assign queues to CPUs")
Reported-by: Stefan Roese <stefan.roese@gmail.com>
Suggested-by: Jisheng Zhang <jszhang@marvell.com>
Suggested-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Gregory CLEMENT <gregory.clement@free-electrons.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
The return value of kzalloc when memory allocation fails should be
-ENOMEM and not -1.
Found using Coccinelle. A simplified version of the semantic patch
used is:
//<smpl>
@@
expression *e;
position p,q;
@@
e@q = kzalloc(...);
if@p (e == NULL) {
...
return
- -1
+ -ENOMEM
;
}
//</smpl>
This function may also return -1 after calling
mvpp2_prs_tcam_port_map_get. So that the function consistently returns
meaningful error values on failure, that -1 is changed to -EINVAL.
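For reference, the resulting allocation pattern in C looks like this
(generic names):

struct foo *f;

f = kzalloc(sizeof(*f), GFP_KERNEL);
if (!f)
	return -ENOMEM;	/* was: return -1; */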
Signed-off-by: Amitoj Kaur Chawla <amitoj1606@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
For unaligned fragments smaller than 8 bytes, the code in txq_put_data()
would use txq->tx_curr_desc to index the tso_hdrs/tso_hdrs_dma buffers,
even though it has already been moved to the next descriptor at the
beginning of the function.
If that fragment was the last of the skb, the next skb would use the same
space to place the IP headers, overwriting that small fragment's data.
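A sketch of the fix, with field names taken from the driver but to be
read as assumptions: capture the descriptor index before it is advanced
and use that index for the TSO header slot.

tx_index = txq->tx_curr_desc;	/* index of the descriptor being filled */
txq->tx_curr_desc = (txq->tx_curr_desc + 1) % txq->tx_ring_size;

/* later: index tso_hdrs/tso_hdrs_dma with tx_index, not with the
 * already-advanced txq->tx_curr_desc */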
Fixes: 91986fd3d3 ("net: mv643xx_eth: Ensure proper data alignment in TSO TX path")
Signed-off-by: Nicolas Schichan <nschichan@freebox.fr>
Reviewed-by: Philipp Kirchhofer <philipp@familie-kirchhofer.de>
Signed-off-by: David S. Miller <davem@davemloft.net>
Some platforms may provide more than one clk for the mvneta IP; for
example, Marvell BG4CT provides one clk for the MAC core and one clk for
the AXI bus logic. Obviously this bus clk also needs to be enabled. This
patch adds support for this optional "bus" clk.
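A probe-time sketch of the optional clock, under the assumption that the
clock is named "bus" in the device tree and that its absence is not an
error:

pp->clk_bus = devm_clk_get(&pdev->dev, "bus");
if (!IS_ERR(pp->clk_bus))
	clk_prepare_enable(pp->clk_bus);	/* error handling omitted */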
Signed-off-by: Jisheng Zhang <jszhang@marvell.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Some platforms may provide more than one clk for the mvneta IP; for
example, Marvell BG4CT provides one clk for the MAC core and one clk for
the AXI bus logic.
To support more than one clock, we'll need to distinguish between the
clocks by name. Change clock probing to first try to get the "core" clock
before falling back to the unnamed clock.
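A minimal sketch of the probing order described above:

pp->clk = devm_clk_get(&pdev->dev, "core");
if (IS_ERR(pp->clk))
	/* fall back for device trees with a single, unnamed clock */
	pp->clk = devm_clk_get(&pdev->dev, NULL);
if (IS_ERR(pp->clk))
	return PTR_ERR(pp->clk);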
Signed-off-by: Jisheng Zhang <jszhang@marvell.com>
Acked-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Sorting the headers in alphabetical order will help to reduce conflicts
when adding new headers in the future.
Signed-off-by: Jisheng Zhang <jszhang@marvell.com>
Acked-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
When s->type is T_REG_64, the high 32 bits are lost in val. This patch
fixes this trivial issue.
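A sketch of a correct 64-bit counter read, with val widened to u64 and
the register layout assumed to be low word followed by high word:

u32 low, high;
u64 val;

switch (s->type) {
case T_REG_64:
	/* read the low word first, then the high word, and combine */
	low  = readl_relaxed(base + s->offset);
	high = readl_relaxed(base + s->offset + 4);
	val = (u64)high << 32 | low;
	break;
default:
	val = readl_relaxed(base + s->offset);
	break;
}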
Signed-off-by: Jisheng Zhang <jszhang@marvell.com>
Fixes: 9b0cdefa4c ("net: mvneta: add ethtool statistics")
Signed-off-by: David S. Miller <davem@davemloft.net>
Not all devices attached to an MDIO bus are PHYs. So add an
mdio_device structure to represent the generic parts of an MDIO
device, and place this structure into the phy_device.
Signed-off-by: Andrew Lunn <andrew@lunn.ch>
Signed-off-by: David S. Miller <davem@davemloft.net>
Have mdio_alloc() create the array of interrupt numbers and
initialize it to POLLING. This is what most MDIO drivers want, so it
allows code to be removed from the drivers.
Signed-off-by: Andrew Lunn <andrew@lunn.ch>
Reviewed-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Conflicts:
drivers/net/geneve.c
Here we had an overlapping change, where in 'net' the extraneous stats
bump was being removed whilst in 'net-next' the final argument to
udp_tunnel6_xmit_skb() was being changed.
Signed-off-by: David S. Miller <davem@davemloft.net>
The name NETIF_F_ALL_CSUM is a misnomer. This does not correspond to the
set of features for offloading all checksums; it is a mask of the
checksum-offload-related feature bits. It is incorrect to set both
NETIF_F_HW_CSUM and NETIF_F_IP_CSUM or NETIF_F_IPV6_CSUM at the same time
for the features of a device.
This patch:
- Changes instances of NETIF_F_ALL_CSUM to NETIF_F_CSUM_MASK (where
NETIF_F_ALL_CSUM is being used as a mask).
- Changes the bonding, sfc/efx, ipvlan, macvlan, vlan, and team drivers to
use NETIF_F_HW_CSUM in their features list instead of NETIF_F_ALL_CSUM.
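Where the value is used as a mask, the change is purely mechanical, for
example:

features &= ~NETIF_F_CSUM_MASK;		/* was ~NETIF_F_ALL_CSUM */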
Signed-off-by: Tom Herbert <tom@herbertland.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
With this patch each CPU is associated with its own set of TX queues.
It also sets up XPS with an initial configuration which sets the
affinity to match the hardware configuration.
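A hedged sketch of the XPS side: for each TX queue, tell the stack which
CPU services it (the helper name is illustrative).

static void mvneta_set_xps(struct net_device *dev, int txq, int cpu)
{
	cpumask_t mask;

	cpumask_clear(&mask);
	cpumask_set_cpu(cpu, &mask);
	netif_set_xps_queue(dev, &mask, txq);
}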
Suggested-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Gregory CLEMENT <gregory.clement@free-electrons.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
This patch adds support for the RSS-related ethtool functions. Currently
it only uses one entry in the indirection table, which allows associating
an mvneta interface with a given CPU.
Signed-off-by: Gregory CLEMENT <gregory.clement@free-electrons.com>
Tested-by: Marcin Wojtas <mw@semihalf.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
We enable the per-CPU interrupt for all the CPUs and we just associate a
CPU with a few queues at the neta level. The mapping between the CPUs and
the queues is static: the queues are associated with the CPUs modulo the
number of CPUs. However, currently we only use one RX queue for a given
Ethernet port.
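Sketched, the static association looks like this (register and macro
names follow the driver and should be read as assumptions):

for_each_present_cpu(cpu) {
	int rxq, rxq_map = 0;

	for (rxq = 0; rxq < rxq_number; rxq++)
		if (rxq % num_present_cpus() == cpu)
			rxq_map |= MVNETA_CPU_RXQ_ACCESS(rxq);

	mvreg_write(pp, MVNETA_CPU_MAP(cpu), rxq_map);
}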
Signed-off-by: Gregory CLEMENT <gregory.clement@free-electrons.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Instead of using the same default queue for all the ports, move it into
the port struct. This will allow having a different default queue for
each port.
Signed-off-by: Gregory CLEMENT <gregory.clement@free-electrons.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
In the current code, in case of an RX buffer allocation error during
refill, the original buffer is pushed to the network stack, but the
number of available buffer pointers in the BM pool is decreased.
This commit fixes the situation by moving the refill call before
skb_put(), and by returning the original buffer pointer to the pool in
case of an error.
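A sketch of the corrected RX loop ordering (function and field names
follow the driver and are assumptions):

err = mvpp2_rx_refill(port, bm_pool, bm, 0);
if (err) {
	/* give the original buffer back to the BM pool */
	mvpp2_pool_refill(port, bm, rx_desc->buf_phys_addr,
			  rx_desc->buf_cookie);
	dev->stats.rx_errors++;
	continue;
}

skb_put(skb, rx_bytes);	/* only now hand the buffer to the stack */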
Signed-off-by: Marcin Wojtas <mw@semihalf.com>
Fixes: 3f518509de ("ethernet: Add new driver for Marvell Armada 375 network unit")
Cc: <stable@vger.kernel.org> # v3.18+
Signed-off-by: David S. Miller <davem@davemloft.net>
Each allocated buffer whose pointer is put into the BM pool is
DMA-mapped. Hence it should be properly unmapped after usage or when
removing buffers from the pool.
This commit fixes DMA handling on the RX path by adding dma_unmap_single()
in mvpp2_rx() and in mvpp2_bufs_free(). The latter function's argument
count had to be increased for this purpose.
Signed-off-by: Marcin Wojtas <mw@semihalf.com>
Fixes: 3f518509de ("ethernet: Add new driver for Marvell Armada 375 network unit")
Cc: <stable@vger.kernel.org> # v3.18+
Signed-off-by: David S. Miller <davem@davemloft.net>
The Tx descriptor release code currently calls dma_unmap_single() and
dev_kfree_skb_any() only if the descriptor is associated with a non-NULL
skb. This condition is true only for the last fragment of a packet.
Since every descriptor's buffer is DMA-mapped, it has to be properly
unmapped regardless of whether an skb is attached.
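Sketch of the corrected release path (field names assumed): unmap every
buffer, free the skb only when one is attached.

dma_unmap_single(port->dev->dev.parent, tx_desc->buf_phys_addr,
		 tx_desc->data_size, DMA_TO_DEVICE);
if (skb)
	dev_kfree_skb_any(skb);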
Signed-off-by: Marcin Wojtas <mw@semihalf.com>
Fixes: 3f518509de ("ethernet: Add new driver for Marvell Armada 375 network unit")
Cc: <stable@vger.kernel.org> # v3.18+
Signed-off-by: David S. Miller <davem@davemloft.net>
Conflicts:
drivers/net/ethernet/renesas/ravb_main.c
kernel/bpf/syscall.c
net/ipv4/ipmr.c
All three conflicts were cases of overlapping changes.
Signed-off-by: David S. Miller <davem@davemloft.net>
This patch makes it possible to run
ethtool -s eth0 autoneg off
ethtool -s eth0 autoneg on
to disable or enable autonegotiation at run-time.
Without that functionality, the only way to control autonegotiation is to
modify the device tree.
This is needed if you plan to use the same kernel with different Ethernet
switches, the ones that support in-band status and the ones that do not.
CC: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
CC: netdev@vger.kernel.org
CC: linux-kernel@vger.kernel.org
Signed-off-by: Stas Sergeev <stsp@users.sourceforge.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
This moves the autoneg-related bit manipulations to a single place.
CC: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
CC: netdev@vger.kernel.org
CC: linux-kernel@vger.kernel.org
Signed-off-by: Stas Sergeev <stsp@users.sourceforge.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
These new helpers simplify implementing multi-driver modules and
properly handle failure to register one driver by unregistering all
previously registered drivers.
Signed-off-by: Thierry Reding <treding@nvidia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Since the Armada 38x SoC can support IP checksum for jumbo frames only on
a single port, this feature should be enabled per port rather than for
the whole SoC.
This patch enables setting a custom TX IP checksum limit by adding a new
optional property to the mvneta device tree node. If it is not used,
1600B is set by default for "marvell,armada-370-neta" and 9800B for the
other compatible strings, which ensures backward compatibility. The
binding documentation is updated accordingly.
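For illustration, reading the optional property at probe time could look
like this; the property name "tx-csum-limit" is an assumption about the
binding described above.

u32 tx_csum_limit;

if (of_property_read_u32(dn, "tx-csum-limit", &tx_csum_limit)) {
	/* property absent: keep the backward-compatible defaults */
	if (of_device_is_compatible(dn, "marvell,armada-370-neta"))
		tx_csum_limit = 1600;
	else
		tx_csum_limit = 9800;
}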
Signed-off-by: Marcin Wojtas <mw@semihalf.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
In the current RX processing, there is the same error path for both
descriptor ring refill and skb build failures. This is not correct,
because after a successful refill the ring is already updated with the
newly allocated buffer. Then, in case of a build_skb() failure, the
existing code left the original buffer unmapped.
This patch fixes the above situation by swapping the error check of the
skb build with the DMA unmap of the original buffer.
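The swap, sketched (names follow the driver and are assumptions):

skb = build_skb(data, frag_size > PAGE_SIZE ? 0 : frag_size);

/* unmap unconditionally, so a failed build_skb() no longer leaks
 * a DMA-mapped buffer */
dma_unmap_single(dev->dev.parent, phys_addr,
		 MVNETA_RX_BUF_SIZE(pp->pkt_size), DMA_FROM_DEVICE);

if (!skb)
	goto err_drop_frame;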
Signed-off-by: Marcin Wojtas <mw@semihalf.com>
Acked-by: Simon Guinot <simon.guinot@sequanux.org>
Cc: <stable@vger.kernel.org> # v4.2+
Fixes: a84e328941 ("net: mvneta: fix refilling for Rx DMA buffers")
Signed-off-by: David S. Miller <davem@davemloft.net>
A value originally defined in the driver was inappropriate. Even though
ingress was somehow working, writing MVNETA_RXQ_INTR_ENABLE_ALL_MASK
to MVNETA_INTR_ENABLE had no effect, because bits [31:16] are reserved
and read-only.
This commit updates MVNETA_RXQ_INTR_ENABLE_ALL_MASK to be compliant with
the controller's documentation.
Signed-off-by: Marcin Wojtas <mw@semihalf.com>
Fixes: c5aff18204 ("net: mvneta: driver for Marvell Armada 370/XP network unit")
Signed-off-by: David S. Miller <davem@davemloft.net>
MVNETA_RXQ_HW_BUF_ALLOC bit which controls enabling hardware buffer
allocation was mistakenly set as BIT(1). This commit fixes the assignment.
Signed-off-by: Marcin Wojtas <mw@semihalf.com>
Reviewed-by: Gregory CLEMENT <gregory.clement@free-electrons.com>
Fixes: c5aff18204 ("net: mvneta: driver for Marvell Armada 370/XP network unit")
Signed-off-by: David S. Miller <davem@davemloft.net>
This commit adds the missing configuration of MBUS window access
protection in the mvneta_conf_mbus_windows function - a dedicated
variable for that purpose has remained unused since the initial mvneta
support in v3.8. Because of that, the register contents were inherited
from the bootloader.
Signed-off-by: Marcin Wojtas <mw@semihalf.com>
Reviewed-by: Gregory CLEMENT <gregory.clement@free-electrons.com>
Fixes: c5aff18204 ("net: mvneta: driver for Marvell Armada 370/XP network unit")
Signed-off-by: David S. Miller <davem@davemloft.net>
After changing an interface's MTU, then bringing the interface down and
back up again, I immediately saw tons of kernel messages like the ones
below.
The reason for this bad behavior is mvneta_rxq_drop_pkts(), which calls
dma_unmap_single() on already-freed memory. So we need to switch the
order of those two operations; the corrected order is sketched after the
log below.
[ 152.388518] BUG: Bad page state in process ifconfig pfn:1b518
[ 152.388526] page:dff3dbc0 count:0 mapcount:0 mapping: (null) index:0x0
[ 152.395178] flags: 0x200(arch_1)
[ 152.398441] page dumped because: PAGE_FLAGS_CHECK_AT_PREP flag set
[ 152.398446] bad because of flags:
[ 152.398450] flags: 0x200(arch_1)
[ 152.401716] Modules linked in:
[ 152.401728] CPU: 0 PID: 1453 Comm: ifconfig Tainted: P B O 4.1.12.armada.1 #1
[ 152.401733] Hardware name: Marvell Armada 370/XP (Device Tree)
[ 152.401749] [<c0015b1c>] (unwind_backtrace) from [<c0011d8c>] (show_stack+0x10/0x14)
[ 152.401762] [<c0011d8c>] (show_stack) from [<c06aa68c>] (dump_stack+0x74/0x90)
[ 152.401772] [<c06aa68c>] (dump_stack) from [<c0096c08>] (bad_page+0xc4/0x124)
[ 152.401783] [<c0096c08>] (bad_page) from [<c0099378>] (get_page_from_freelist+0x4e4/0x644)
[ 152.401794] [<c0099378>] (get_page_from_freelist) from [<c0099620>] (__alloc_pages_nodemask+0x148/0x784)
[ 152.401805] [<c0099620>] (__alloc_pages_nodemask) from [<c00ac658>] (kmalloc_order+0x10/0x20)
[ 152.401818] [<c00ac658>] (kmalloc_order) from [<c04c6f44>] (mvneta_rx_refill+0xc4/0xe8)
[ 152.401830] [<c04c6f44>] (mvneta_rx_refill) from [<c04c96c0>] (mvneta_setup_rxqs+0x298/0x39c)
[ 152.401842] [<c04c96c0>] (mvneta_setup_rxqs) from [<c04c9904>] (mvneta_open+0x3c/0x150)
[ 152.401853] [<c04c9904>] (mvneta_open) from [<c0597764>] (__dev_open+0xac/0x124)
[ 152.401864] [<c0597764>] (__dev_open) from [<c05979e4>] (__dev_change_flags+0x8c/0x148)
[ 152.401875] [<c05979e4>] (__dev_change_flags) from [<c0597ac0>] (dev_change_flags+0x18/0x48)
[ 152.401886] [<c0597ac0>] (dev_change_flags) from [<c060d308>] (devinet_ioctl+0x620/0x6d0)
[ 152.401897] [<c060d308>] (devinet_ioctl) from [<c057d810>] (sock_ioctl+0x64/0x288)
[ 152.401908] [<c057d810>] (sock_ioctl) from [<c00dcb7c>] (do_vfs_ioctl+0x78/0x608)
[ 152.401918] [<c00dcb7c>] (do_vfs_ioctl) from [<c00dd170>] (SyS_ioctl+0x64/0x74)
[ 152.401930] [<c00dd170>] (SyS_ioctl) from [<c000f3a0>] (ret_fast_syscall+0x0/0x3c)
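The corrected teardown order in mvneta_rxq_drop_pkts(), sketched with
assumed field and helper names: unmap the buffer while it is still valid,
then free it.

dma_unmap_single(pp->dev->dev.parent, rx_desc->buf_phys_addr,
		 MVNETA_RX_BUF_SIZE(pp->pkt_size), DMA_FROM_DEVICE);
mvneta_frag_free(pp, data);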
Signed-off-by: Justin Maggard <jmaggard@netgear.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
The fixed_phy infrastructure is designed to be optional, by providing
'static inline' helper functions that do nothing in
include/linux/phy_fixed.h for all its APIs. However, three out of the
four users (DSA, BCMGENET, and SYSTEMPORT) always 'select FIXED_PHY',
presumably because they need it.
MVNETA is the fourth one, and if it is built-in but FIXED_PHY is
configured as a loadable module, we get a link error:
drivers/built-in.o: In function `mvneta_fixed_link_update':
fpga-mgr.c:(.text+0x33ed80): undefined reference to `fixed_phy_update_state'
Presumably this driver has the same dependency as the others,
so this patch also uses 'select' to ensure that the fixed-phy
support is built-in.
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Fixes: 898b2970e2 ("mvneta: implement SGMII-based in-band link state signaling")
Signed-off-by: David S. Miller <davem@davemloft.net>
for_each_available_child_of_node performs an of_node_get on each iteration, so
a break out of the loop requires an of_node_put.
A simplified version of the semantic patch that fixes this problem is as
follows (http://coccinelle.lip6.fr):
// <smpl>
@@
expression root,e;
local idexpression child;
@@
for_each_available_child_of_node(root, child) {
... when != of_node_put(child)
when != e = child
(
return child;
|
+ of_node_put(child);
? return ...;
)
...
}
// </smpl>
Signed-off-by: Julia Lawall <Julia.Lawall@lip6.fr>
Signed-off-by: David S. Miller <davem@davemloft.net>
The existing function to clear the MIB statistics was using the wrong
address for the registers. Also, the counters would have been cleared
when the interface was brought up, not during the probe. Fix both of
these.
Signed-off-by: Andrew Lunn <andrew@lunn.ch>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: David S. Miller <davem@davemloft.net>
Add support for the ethtool statistics interface, returning the full set
of statistics which the Armada 370, 38x and XP can all support.
Tested-by: Andrew Lunn <andrew@lunn.ch>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: David S. Miller <davem@davemloft.net>
Conflicts:
net/ipv6/xfrm6_output.c
net/openvswitch/flow_netlink.c
net/openvswitch/vport-gre.c
net/openvswitch/vport-vxlan.c
net/openvswitch/vport.c
net/openvswitch/vport.h
The openvswitch conflicts were overlapping changes. One was
the egress tunnel info fix in 'net' and the other was the
vport ->send() op simplification in 'net-next'.
The xfrm6_output.c conflicts was also a simplification
overlapping a bug fix.
Signed-off-by: David S. Miller <davem@davemloft.net>
To prevent a race between the TX DMA engine and the CPU the writing of the
first transmit descriptor must be deferred until all following descriptors
have been updated. The network card may otherwise start transmitting before
all packet descriptors are set up correctly, which leads to data corruption
or an aborted transmit operation.
This deferral is already done in the non-TSO TX path, implement it also in
the TSO TX path.
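A sketch of the technique, with hypothetical variable names: the
command/status word of the first descriptor is written last, behind a
write barrier.

/* fill all descriptors, but keep the first one unowned by the HW */
first_desc->cmd_sts = 0;
/* ... set up the remaining descriptors of the packet ... */

wmb();					/* order the descriptor writes */
first_desc->cmd_sts = first_cmd_sts;	/* now hand the chain to the HW */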
Signed-off-by: Philipp Kirchhofer <philipp@familie-kirchhofer.de>
Signed-off-by: David S. Miller <davem@davemloft.net>
The TX DMA engine requires that buffers with a size of 8 bytes or smaller
be 64-bit aligned. This requirement may be violated when doing TSO, as in
this case larger skb frags can be broken up and transmitted in small
parts with inappropriate alignment.
Fix this by checking for proper alignment before handing a buffer to the
DMA engine. If the data is misaligned, realign it by copying it into the
TSO header data area.
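A sketch of the check (constants and fields follow the driver and should
be treated as assumptions):

if (length <= 8 && (unsigned long)data & 0x7) {
	/* realign by bouncing through the aligned TSO header slot */
	memcpy(txq->tso_hdrs + tx_index * TSO_HEADER_SIZE, data, length);
	desc->buf_ptr = txq->tso_hdrs_dma + tx_index * TSO_HEADER_SIZE;
} else {
	desc->buf_ptr = dma_map_single(dev->dev.parent, data,
				       length, DMA_TO_DEVICE);
}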
Signed-off-by: Philipp Kirchhofer <philipp@familie-kirchhofer.de>
Signed-off-by: David S. Miller <davem@davemloft.net>
Many drivers needlessly initialize the n_priv_flags, n_stats,
testinfo_len, eedump_len & regdump_len fields in their .get_drvinfo()
ethtool op.
It's not necessary, as these fields are filled in by ethtool_get_drvinfo().
v2: removed unused variable
v3: removed another unused variable
Signed-off-by: Ivan Vecera <ivecera@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
On some embedded systems the EEPROM does not contain a valid MAC address.
In that case it is better to fall back to a generated MAC address and
let init scripts fix the value later.
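Sketch of the fallback at probe time:

if (!is_valid_ether_addr(dev->dev_addr)) {
	eth_hw_addr_random(dev);
	/* program dev->dev_addr into the MAC registers here */
}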
Reported-by: Liviu Dudau <Liviu.Dudau@arm.com>
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
[Changed handcoded setup to use eth_hw_addr_random() and to save new address into HW]
Signed-off-by: Liviu Dudau <Liviu.Dudau@arm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Since the switch to per-CPU interrupts, we lost the ability to set which
CPU was going to receive our RX interrupt, which was now only the CPU on
which the mvneta_open function was run.
We can now assign our queues to their respective CPUs, and make sure only
this CPU is going to handle our traffic.
This also paves the road to be able to change that at runtime, and later on
to support RSS.
[gregory.clement@free-electrons.com]: hardened the CPU hotplug support.
Signed-off-by: Maxime Ripard <maxime.ripard@free-electrons.com>
Signed-off-by: Gregory CLEMENT <gregory.clement@free-electrons.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
The mvneta driver allows changing the default RX queue through the
rxq_def kernel parameter.
However, the current code doesn't allow any value but 0. This is actively
checked for in the driver's probe because the driver makes a number of
assumptions and takes a number of shortcuts in order to just use that RX
queue.
Remove these limitations in order to be able to specify any available
queue.
Signed-off-by: Maxime Ripard <maxime.ripard@free-electrons.com>
Signed-off-by: Gregory CLEMENT <gregory.clement@free-electrons.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Now that our interrupt controller is allowing us to use per-CPU
interrupts, actually use them in the mvneta driver.
This obviously involves reworking the driver to have a CPU-local NAPI
structure, and reporting incoming packets using that structure.
Signed-off-by: Maxime Ripard <maxime.ripard@free-electrons.com>
Signed-off-by: Gregory CLEMENT <gregory.clement@free-electrons.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
The CPU_MAP register is duplicated for each CPU, each instance being at a
different address.
However, the code so far was using CONFIG_NR_CPUS to initialise the
CPU_MAP registers, while the SoCs embed at most 4 CPUs.
This is especially an issue with multi_v7_defconfig, where CONFIG_NR_CPUS
is currently set to 16, resulting in writes to registers that are not
CPU_MAP.
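The fix boils down to iterating over the CPUs that actually exist instead
of CONFIG_NR_CPUS (macro names follow the driver and are assumptions):

for_each_present_cpu(cpu)
	mvreg_write(pp, MVNETA_CPU_MAP(cpu),
		    MVNETA_CPU_RXQ_ACCESS_ALL_MASK |
		    MVNETA_CPU_TXQ_ACCESS_ALL_MASK);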
Fixes: c5aff18204 ("net: mvneta: driver for Marvell Armada 370/XP network unit")
Signed-off-by: Maxime Ripard <maxime.ripard@free-electrons.com>
Cc: <stable@vger.kernel.org> # v3.8+
Signed-off-by: Gregory CLEMENT <gregory.clement@free-electrons.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Conflicts:
net/ipv4/arp.c
The net/ipv4/arp.c conflict was one commit adding a new
local variable while another commit was deleting one.
Signed-off-by: David S. Miller <davem@davemloft.net>
of_phy_find_device() increments the phy struct device refcount, which
we need to properly balance. Add code to network drivers using this
function to ensure that the struct device refcount is correctly
balanced.
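The balancing pattern, sketched:

struct phy_device *phydev = of_phy_find_device(np);

if (!phydev)
	return -ENODEV;

/* ... use phydev ... */

put_device(&phydev->dev);	/* balance of_phy_find_device() */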
For xgene, looking back in the history, we should be able to use
of_phy_connect() with a zero flags argument for the DT case as this is
how the driver used to operate prior to de7b5b3d79 ("net: eth: xgene:
change APM X-Gene SoC platform ethernet to support ACPI").
This leaves the Cavium Thunder BGX unfixed; fixing this driver is a
complicated task, one which the maintainers need to be involved with.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: David S. Miller <davem@davemloft.net>