Add support for extended command ids in trigger handling.
The extended command id header contains a group id in addition to the command id.
Signed-off-by: Sara Sharon <sara.sharon@intel.com>
Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
Add support for an extended firmware event header that contains
a group id as well as the command id.
Signed-off-by: Avraham Stern <avraham.stern@intel.com>
Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
Add support for extended command ids in the notification system.
The extended command id header contains a group id in addition to the command id.
Signed-off-by: Sara Sharon <sara.sharon@intel.com>
Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
This makes various functions in the file rs.c void, since these
functions never return an error code to signal to their callers
whether or how they failed to complete their intended work.
Signed-off-by: Nicholas Krause <xerofoify@gmail.com>
Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
Move the TX PN assignment (for CCMP only) to the driver. This prepares
the driver for future DSO (driver segmentation offload) where it will
split an SKB into multiple MPDUs by itself.
For TDLS, split out the CCMP TX command handling so that it won't get
a PN assigned; the firmware assigns the PN in that case.
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
Since the CSR_DRAM_INIT_TBL_WRITE_POINTER bit wasn't set
on ICT reset, in some flows (such as disabling ICT followed by
an immediate ICT reset) the driver and hardware went out
of sync: the driver cleared the ict_index, while the hw
kept it intact.
Fix it by setting the flag when resetting ICT.
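A rough sketch of the fix (illustrative only, not the exact driver code; register/bit names follow the iwl-csr.h definitions):
    /* Value the ICT reset path already writes (enable/wrap-check bits),
     * plus the previously missing write-pointer reset flag, so the HW
     * pointer is reset together with the driver's ict_index.
     */
    val |= CSR_DRAM_INIT_TBL_WRITE_POINTER;
    iwl_write32(trans, CSR_DRAM_INT_TBL_REG, val);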
Signed-off-by: Eliad Peller <eliad@wizery.com>
Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
Some CSR registers also have to be configured
in the case of suspend/resume with a unified image
(which doesn't include the reconfiguration flow).
Reuse the existing d3_suspend/d3_resume trans ops,
while making sure some configurations are a bit
different, according to the wowlan type.
After this change, we no longer need the special
wowlan_d0i3 configurations done in iwl_pci_resume,
as they are already being done in the d3_resume op.
Signed-off-by: Eliad Peller <eliad@wizery.com>
Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
CQM overwrites a few thresholds in the bf command. On the other hand,
when entering D0i3 the thresholds are set to higher values on purpose,
so ignore CQM in this case.
Signed-off-by: Gregory Greenman <gregory.greenman@intel.com>
Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
The slow filtering threshold should be higher in the D0i3 case.
Signed-off-by: Gregory Greenman <gregory.greenman@intel.com>
Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
The TCP software implementation on the host requires extensive computing
power. Offloading even some of the TCP/IP stack to the NIC might save
significant overhead. In order to enable this feature on our hw,
we need to configure it first. Once done, we mark this capability
to be advertised later to the OS via ieee80211_register_hw.
The driver's Rx indication for TCP checksum is integrated within the
standard Rx status. The driver responds to those indications as follows:
if the frame was checked by the hw and the checksum is ok, report
CHECKSUM_UNNECESSARY; otherwise, report CHECKSUM_NONE.
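A minimal sketch of that Rx-path decision (illustrative helper, not the driver's exact code; "hw_checked"/"csum_ok" stand in for whatever bits the firmware reports in the Rx status):
    #include <linux/skbuff.h>
    static void rx_csum_to_skb(struct sk_buff *skb, bool hw_checked, bool csum_ok)
    {
            if (hw_checked && csum_ok)
                    skb->ip_summed = CHECKSUM_UNNECESSARY;  /* stack skips SW verification */
            else
                    skb->ip_summed = CHECKSUM_NONE;         /* stack verifies in software */
    }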
Signed-off-by: Avri Altman <avri.altman@intel.com>
Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
The firmware debug infrastructure allows the user to
provide a firmware that will toggle a few registers to
configure the debugging capabilities.
On certain devices, certain operations are forbidden.
Executing a forbidden operation will cause the hardware to
die in a way that only a driver unload / reload will bring it
back to life.
Fortunately, there is a way to know in advance if those
operations will be accepted by the device. This is where
the new PRPH_BLOCKBIT operation plays its role. If bit
X of PRPH register Y is set, then we should prevent any
further register configuration. When that happens, drop a
line in the kernel log, since this is really an error state:
the user's device won't be configured as expected.
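Roughly, the new check amounts to the following (sketch only; "reg" and "bit" stand for the register/bit taken from the debug configuration):
    /* If the block bit is set, the device would reject the debug register
     * writes, so bail out and leave a trace in the kernel log.
     */
    if (iwl_read_prph(trans, reg) & BIT(bit)) {
            IWL_ERR(trans,
                    "Debug configuration blocked by HW, skipping register writes\n");
            return;
    }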
Add operations that will be used in the future:
INDIRECT_ASSIGN, INDIRECT_SETBIT, and INDIRECT_CLEARBIT.
Other debugging configurations (such as destination
configuration for the monitor) will take place in any case.
Signed-off-by: Haim Dreyfuss <haim.dreyfuss@intel.com>
Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
In iwl_mvm_tx_skb_non_sta(), in the case of a managed interface,
use the AP station for multicast frames instead of the auxiliary
station; otherwise the frames can be sent to an absent P2P GO, since
the FW does not block transmissions for the auxiliary station,
as it is not associated with the station MAC context.
Note that this is not possible for unicast frames, as a TDLS
discovery response is sent without a station entry, and in this
case the P2P GO NoA should not block transmission to the peer.
Signed-off-by: Ilan Peer <ilan.peer@intel.com>
Reviewed-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
Allow the transport layer to return an error upon suspend.
Signed-off-by: Eliad Peller <eliadx.peller@intel.com>
Reviewed-by: Luciano Coelho <luciano.coelho@intel.com>
Reviewed-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
This reverts commit 088070a2f6.
When working in d0i3_on_idle mode, we explicitly go out
of d0i3 on resume (so other potential commands can
be sent).
However, D0I3_DEFER_WAKEUP is currently cleared on
resume complete (which happens only later on), causing
the d0i3 exit to time out.
Since mac80211 was modified to accept incoming frames
once drv_resume has been called, we can safely revert this
patch and handle the pending work in iwl_mvm_resume().
Signed-off-by: Eliad Peller <eliadx.peller@intel.com>
Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
Existing UMAC commands already use the long header, but are sent
with group 0 and the long header inserted manually. Move them to
group 1 to take advantage of the header building in the low-
level transport.
Existing firmware ignores the group_id field (it's reserved), and
the first firmware that really supports long command headers can
parse all commands in both group 0 (with the short header) and group
1 (with the long header).
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
As the firmware is slowly running out of command IDs and grouping of
commands is desirable anyway, the firmware is extending the command
header from 4 bytes to 8 bytes to introduce a group (in place of the
former flags field, since that's always 0 on commands and thus can
be easily used to distinguish between the two).
In order to support this most easily in the driver, widen the
command ID used in the command sending functions and encode the new
values (group and version) in the ID. That way existing code doesn't
have to be changed (since the higher bits are 0 automatically) and
newer code can easily use the new ID generation function to create a
value to use in place of just the command ID.
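The encoding boils down to something like this (sketch of the ID helper; treat it as illustrative — the plain 8-bit opcode stays in the low byte, so legacy callers keep working with group/version = 0):
    static inline u32 iwl_cmd_id(u8 opcode, u8 groupid, u8 version)
    {
            return opcode + (groupid << 8) + (version << 16);
    }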
Signed-off-by: Aviya Erenfeld <aviya.erenfeld@intel.com>
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
ToF is a time-based method for measuring the WiFi device's
location within a WiFi environment. The driver functionality provided
by this patch is the interface for communication with the FW and for
receiving location-related updates from the FW. The interface provided
by this patch is via debugfs.
Signed-off-by: Gregory Greenman <gregory.greenman@intel.com>
Reviewed-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
All the supported firmwares support this API.
This includes removing the per-band dwell, as the band is no longer a
factor in calculating the dwell. Only the basic dwell is used, and the
FW will calculate the actual dwell time.
Signed-off-by: Sara Sharon <sara.sharon@intel.com>
Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
The 'flags' field really has been reserved in the firmware API for a
very long time, probably since 4965. As a consequence, the field is
always 0 and checking for an IWL_CMD_FAILED_MSK flag makes no sense.
Rename the field to 'reserved', get rid of IWL_CMD_FAILED_MSK and
all the code for it.
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
In iwlmvm firmwares, the byte count written in the scheduler
byte count table is in DWORDs and not in bytes.
We should check that this value fits in 12 bits; the
value can be either in DWORDs or in bytes,
depending on the firmware. Check the value after the
translation to DWORDs is done (if needed).
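Illustrative check (hypothetical variable/flag names), with the translation done before validating against the 12-bit field:
    u32 bc = len;
    if (fw_counts_in_dwords)                /* hypothetical capability flag */
            bc = DIV_ROUND_UP(len, 4);      /* bytes -> DWORDs */
    if (WARN_ON(bc > 0xfff))                /* must fit the 12-bit SCD field */
            return;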
Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
In a few places, we were disabling interrupts but didn't
make sure that the interrupt handler has finished running.
Add calls to synchronize_irq() to ensure we finish handling
the interrupts before we free resources or do other things that
could lead to a crash if the interrupt were to be handled
later.
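The pattern is essentially this (sketch; the final teardown step is a placeholder):
    iwl_disable_interrupts(trans);              /* mask interrupts at the device */
    synchronize_irq(trans_pcie->pci_dev->irq);  /* wait for a running handler to finish */
    /* ... only now free rx/tx resources the handler might still touch ... */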
Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
When the firmware crashes, we can't expect the Tx queues to
progress. Cancel their timer.
Reviewed-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
Since the time-event is sent with the immediate flag set, there is
no need to sample the device time.
Signed-off-by: Ilan Peer <ilan.peer@intel.com>
Reviewed-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
With the previous patch series, no opmode continues using the
command or handler_status (i.e. the return value from the RX),
so they can be removed now.
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
In the mvm driver, neither the old command nor the return value
are used, so remove them.
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
After the previous patches, neither the command that's passed in nor the
return value are used any more, so they can be removed.
While at it, make some functions static.
Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
This makes the logging a little less useful, but as they're mostly
synchronous commands it won't matter much. It gets rid of the
dependency on the input command, of which this is the only user.
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
This driver currently has some very confusing ADD_STA response handling
that runs asynchronously in the background for all of the commands, but
is only really necessary for synchronous ones (the really asynchronous
ones can only be done for already existing stations), and for the sync
ones it actually waits for the RX handler to return a status code.
Rework this to keep the debug printing in the handler, but move the code
that's supposed to have an effect (which only applies to sync commands)
into the command sending function.
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
The current key offset assignment algorithm always uses the lowest
unused key offset, which will potentially lead to issues when the
firmware changes to take the key material for TX from the key
table rather than from the TX command.
In order to avoid those issues (and avoid forgetting about them)
change the key offset allocation algorithm now to avoid reusing key
offsets quickly.
The new algorithm always picks as the next offset the least recently
freed offset, i.e. the offset that has been unused for the longest
amount of time. This is implemented by having a generation counter
for each key offset that is incremented every time a key is deleted,
except for the one that's deleted, which is reset to zero. Thus the
offset with the highest counter is the one that's been unused longest.
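A sketch of the allocation policy (illustrative, not the driver's exact code): among the offsets not currently in use, pick the one with the highest generation counter.
    static int pick_key_offset(const unsigned long *in_use,
                               const u8 *generation, int num_offsets)
    {
            int i, best = -1;
            u8 best_gen = 0;
            for (i = 0; i < num_offsets; i++) {
                    if (test_bit(i, in_use))
                            continue;               /* offset still holds a key */
                    if (best < 0 || generation[i] > best_gen) {
                            best_gen = generation[i];
                            best = i;               /* least recently freed so far */
                    }
            }
            return best;                            /* -1 if no free offset */
    }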
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
During NIC initialization the shared HW is reset, and this disables the
scheduler. Some HW platforms do not activate the scheduler afterwards.
Consequently, all host commands sent by the driver stay in the queues,
which causes the queues to get stuck.
Set the scheduler to work in auto-active mode, so it is activated upon a
change of one of the queues' write pointers.
Signed-off-by: Haim Dreyfuss <haim.dreyfuss@intel.com>
Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
This ensures that we don't have races between them.
A user reported that stop_device was called twice upon an
rfkill interrupt after suspend: once when the interrupts are
enabled, and right after that, when we directly check the rfkill
state.
Reviewed-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
The new locking in the PCIe transport requires calling start_hw
before start_fw. This uncovered a bug in dvm, which failed
to do so.
Fix that.
Reviewed-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
This firmware is not supported anymore - stop loading it.
Remove the code handling older versions.
Signed-off-by: Sara Sharon <sara.sharon@intel.com>
Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
There's no need to forward RX MPDUs to notification wait tests, nor
do we need to check them for firmware dump triggers, nor could they
be asynchronous. It's thus more efficient to handle them separately,
before going into the regular RX handlers.
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
In scenarios where we haven't converged yet to a specific modulation
and rate it could be better to report to userspace the last tx rate
based on the STA capabilities and RSSI. This is important as sometimes
userspace displays the last tx rate as the link speed.
This avoids being presented with low legacy rates when rs just begins
its search or after an idle period in which it resets itself.
Signed-off-by: Eyal Shapira <eyalx.shapira@intel.com>
Reviewed-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
Pull networking updates from David Miller:
1) Add TX fast path in mac80211, from Johannes Berg.
2) Add TSO/GRO support to ibmveth, from Thomas Falcon
3) Move away from cached routes in ipv6, just like ipv4, from Martin
KaFai Lau.
4) Lots of new rhashtable tests, from Thomas Graf.
5) Run ingress qdisc lockless, from Alexei Starovoitov.
6) Allow servers to fetch TCP packet headers for SYN packets of new
connections, for fingerprinting. From Eric Dumazet.
7) Add mode parameter to pktgen, for testing receive. From Alexei
Starovoitov.
8) Cache access optimizations via simplifications of build_skb(), from
Alexander Duyck.
9) Move page frag allocator under mm/, also from Alexander.
10) Add xmit_more support to hv_netvsc, from KY Srinivasan.
11) Add a counter guard in case we try to perform endless reclassify
loops in the packet scheduler.
12) Extend flow dissector to be programmable and use it in the new "Flower"
classifier. From Jiri Pirko.
13) AF_PACKET fanout rollover fixes, performance improvements, and new
statistics. From Willem de Bruijn.
14) Add netdev driver for GENEVE tunnels, from John W Linville.
15) Add ingress netfilter hooks and filtering, from Pablo Neira Ayuso.
16) Fix handling of epoll edge triggers in TCP, from Eric Dumazet.
17) Add an ECN retry fallback for the initial TCP handshake, from Daniel
Borkmann.
18) Add tail call support to BPF, from Alexei Starovoitov.
19) Add several pktgen helper scripts, from Jesper Dangaard Brouer.
20) Add zerocopy support to AF_UNIX, from Hannes Frederic Sowa.
21) Favor even port numbers for allocation to connect() requests, and
odd port numbers for bind(0), in an effort to help avoid
ip_local_port_range exhaustion. From Eric Dumazet.
22) Add Cavium ThunderX driver, from Sunil Goutham.
23) Allow bpf programs to access skb_iif and dev->ifindex SKB metadata,
from Alexei Starovoitov.
24) Add support for T6 chips in cxgb4vf driver, from Hariprasad Shenai.
25) Double TCP Small Queues default to 256K to accommodate situations
like the XEN driver and wireless aggregation. From Wei Liu.
26) Add more entropy inputs to flow dissector, from Tom Herbert.
27) Add CDG congestion control algorithm to TCP, from Kenneth Klette
Jonassen.
28) Convert ipset over to RCU locking, from Jozsef Kadlecsik.
29) Track and act upon link status of ipv4 route nexthops, from Andy
Gospodarek.
* git://git.kernel.org/pub/scm/linux/kernel/git/davem/net-next: (1670 commits)
bridge: vlan: flush the dynamically learned entries on port vlan delete
bridge: multicast: add a comment to br_port_state_selection about blocking state
net: inet_diag: export IPV6_V6ONLY sockopt
stmmac: troubleshoot unexpected bits in des0 & des1
net: ipv4 sysctl option to ignore routes when nexthop link is down
net: track link-status of ipv4 nexthops
net: switchdev: ignore unsupported bridge flags
net: Cavium: Fix MAC address setting in shutdown state
drivers: net: xgene: fix for ACPI support without ACPI
ip: report the original address of ICMP messages
net/mlx5e: Prefetch skb data on RX
net/mlx5e: Pop cq outside mlx5e_get_cqe
net/mlx5e: Remove mlx5e_cq.sqrq back-pointer
net/mlx5e: Remove extra spaces
net/mlx5e: Avoid TX CQE generation if more xmit packets expected
net/mlx5e: Avoid redundant dev_kfree_skb() upon NOP completion
net/mlx5e: Remove re-assignment of wq type in mlx5e_enable_rq()
net/mlx5e: Use skb_shinfo(skb)->gso_segs rather than counting them
net/mlx5e: Static mapping of netdev priv resources to/from netdev TX queues
net/mlx4_en: Use HW counters for rx/tx bytes/packets in PF device
...
Conflicts:
drivers/net/ethernet/mellanox/mlx4/main.c
net/packet/af_packet.c
Both conflicts were cases of simple overlapping changes.
Signed-off-by: David S. Miller <davem@davemloft.net>
The current implementation of the descriptor init procedure only takes
care of setting/clearing the ownership flag in the "des0"/"des1"
fields, while it is perfectly possible to get unexpected bits
set because of the following factors:
[1] On driver probe underlying memory allocated with
dma_alloc_coherent() might not be zeroed and so
it will be filled with garbage.
[2] During driver operation some bits could be set by SD/MMC
controller (for example error flags etc).
And unexpected and/or randomly set flags in "des0"/"des1"
fields may lead to unpredictable behavior of GMAC DMA block.
This change addresses both items above with:
[1] Use of dma_zalloc_coherent() instead of simple
dma_alloc_coherent() to make sure allocated memory is
zeroed. That shouldn't affect performance because
this allocation only happens once on driver probe.
[2] Do explicit zeroing of both "des0" and "des1" fields
of all buffer descriptors during initialization of
DMA transfer.
And while at it, fixed the indentation of the dma_free_coherent()
counterpart as well.
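In rough terms the two fixes look like this (simplified sketch with approximate ring/field names, not the exact stmmac code):
    /* [1] descriptor ring memory comes back zeroed */
    priv->dma_tx = dma_zalloc_coherent(priv->device,
                                       txsize * sizeof(struct dma_desc),
                                       &priv->dma_tx_phy, GFP_KERNEL);
    /* [2] explicitly clear des0/des1 when (re)initializing descriptors */
    for (i = 0; i < txsize; i++) {
            priv->dma_tx[i].des0 = 0;
            priv->dma_tx[i].des1 = 0;
    }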
Signed-off-by: Alexey Brodkin <abrodkin@synopsys.com>
Cc: Giuseppe Cavallaro <peppe.cavallaro@st.com>
Cc: arc-linux-dev@synopsys.com
Cc: linux-kernel@vger.kernel.org
Cc: stable@vger.kernel.org
Cc: David Miller <davem@davemloft.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
This bug pops up with NetworkManager on Fedora 21. NetworkManager tends to
stop the interface (nicvf_stop() is called) before changing settings. In the
stopped state the MAC address cannot be sent to the PF. However, when the
interface is restarted (nicvf_open() is called), we ping the PF using the
NIC_MBOX_MSG_READY message, and the PF replies back with the old MAC address,
overriding what we had set from userspace. As a result, we cannot set the MAC
address using NetworkManager.
This patch introduces special tracking of the MAC change in the stopped state
so that the correct new MAC address is sent to the PF when the interface is
reopened.
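The tracking amounts to something like this (sketch with hypothetical flag/helper names, not the actual patch):
    static int nicvf_set_mac_address(struct net_device *netdev, void *p)
    {
            struct nicvf *nic = netdev_priv(netdev);
            struct sockaddr *addr = p;
            memcpy(netdev->dev_addr, addr->sa_data, ETH_ALEN);
            if (netif_running(netdev))
                    nicvf_hw_set_mac_addr(nic, netdev); /* push to the PF now */
            else
                    nic->set_mac_pending = true;        /* resend after READY in nicvf_open() */
            return 0;
    }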
Signed-off-by: Pavel Fedin <p.fedin@samsung.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Prefetch the 1st cache line used by the buffer pointed to by
the skb linear data.
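The change is essentially a one-liner in the Rx completion path (illustrative):
    #include <linux/prefetch.h>
    prefetch(skb->data);    /* warm the first cache line before the stack parses headers */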
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Separate mlx5e_get_cqe() and mlx5_cqwq_pop(); this helps with
code readability and CQ buffer management.
Signed-off-by: Achiad Shochat <achiad@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Use container_of() instead.
Signed-off-by: Achiad Shochat <achiad@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Coding style fix: remove extra spaces.
Signed-off-by: Achiad Shochat <achiad@mellanox.com>
Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
In order to save PCI BW consumed by TX CQEs and to reduce the amount of
CPU cache misses caused by TX CQE reading, we request TX CQE generation
only when skb->xmit_more=0.
As a consequence of the above, a single TX CQE may now indicate the
transmission completion of multiple TX SKBs.
This also handles a problem introduced in commit b1b8105ebf41 "net/mlx5e:
Support NETIF_F_SG", where we didn't ask for NOP completions even though the
driver didn't have the proper code to handle that case.
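Conceptually, the TX path now requests a completion only for the last WQE of a doorbell batch (sketch; flag/field names as in the mlx5 WQE control segment, treat as illustrative):
    if (!skb->xmit_more || netif_xmit_stopped(sq->txq))
            cseg->fm_ce_se = MLX5_WQE_CTRL_CQ_UPDATE;   /* request a CQE */
    else
            cseg->fm_ce_se = 0;                         /* no CQE for this WQE */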
Fixes: b1b8105ebf41 ('net/mlx5e: Support NETIF_F_SG')
Signed-off-by: Achiad Shochat <achiad@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
NOP completion SKBs are always NULL.
Signed-off-by: Achiad Shochat <achiad@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
It is already assigned in mlx5e_build_rq_param().
Signed-off-by: Achiad Shochat <achiad@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Instead of counting the number of gso fragments, we can use
skb_shinfo(skb)->gso_segs.
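I.e. (illustrative):
    u32 num_pkts = skb_is_gso(skb) ? skb_shinfo(skb)->gso_segs : 1;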
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
To save per-packet calculations, we use the following static mappings:
1) priv {channel, tc} to netdev txq (used @mlx5e_select_queue())
2) netdev txq to priv sq (used @mlx5e_xmit())
Thanks to these static mappings, there is no longer a need for a separate
implementation of ndo_start_xmit when multiple TCs are configured.
We believe the performance improvement of such a separation would be negligible, if any.
The previous way of dynamically calculating the above mappings required
allocating more TX queues than actually used (@alloc_etherdev_mqs()),
which is now no longer needed.
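The mapping itself is simple arithmetic (sketch with hypothetical names):
    /* priv {channel, tc} -> netdev txq, computed once at setup time */
    static u16 chan_tc_to_txq(u16 ch, u16 tc, u16 num_channels)
    {
            return tc * num_channels + ch;
    }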
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>