This change is meant to allow us to take completion time into account when
disabling queues. Previously we were just working with hard-coded values
for how long we should wait. This worked fine for the standard case where
the completion timeout was operating in the 50us to 50ms range, however on
platforms that have higher completion timeout times this resulted in Rx
queue disable messages being displayed because we weren't waiting long
enough for outstanding Rx DMA completions.
Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Don Buchholz <donald.buchholz@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
This change is meant to help reduce the time needed to shut down the
transmit and receive paths for the device. Specifically, after this patch
we disable the transmit path first at the netdev level and then work on
disabling the Rx path. This way, while we are waiting for the Rx queues to
be disabled, the Tx queues have an opportunity to drain out.
In addition, I have dropped the 10ms timeout that was left in the
ixgbe_down function, which as far as I can tell was carried over from back
in e1000. We shouldn't need it since we don't actually disable the Tx
until much later and we have additional logic in place for verifying that
the Tx queues have been disabled.
Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Don Buchholz <donald.buchholz@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
igb writes to doorbells to post transmit and receive descriptors;
after writing descriptors to memory but before writing to doorbells,
use dma_wmb() rather than wmb(). wmb() is more heavyweight than
necessary before doorbell writes.
On x86, this avoids SFENCEs before doorbell writes in both the
tx and rx refill paths.
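For illustration only, a minimal sketch of the pattern in question; the
descriptor layout and names below are placeholders, not igb's actual ones:

    #include <linux/types.h>
    #include <linux/io.h>
    #include <asm/barrier.h>

    /* Hypothetical Rx descriptor, for illustration only; endianness
     * handling is omitted for brevity.
     */
    struct demo_rx_desc {
            u64 pkt_addr;
            u64 hdr_addr;
    };

    static void demo_post_rx_desc(struct demo_rx_desc *desc, dma_addr_t dma,
                                  void __iomem *tail_reg, u16 next_to_use)
    {
            desc->pkt_addr = (u64)dma;
            desc->hdr_addr = 0;

            /* Order the descriptor writes (coherent DMA memory) before
             * the doorbell write. dma_wmb() is sufficient here; a full
             * wmb() (SFENCE on x86) is heavier than required.
             */
            dma_wmb();

            writel(next_to_use, tail_reg);
    }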
Signed-off-by: Venkatesh Srinivas <venkateshs@google.com>
Tested-by: Aaron Brown <aaron.f.brown@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
This patch reverts two previously applied patches to fix an issue
that appeared when using SGMII based SFP modules. In the current
state the driver will try to reset the PHY before obtaining the
phy_addr of the SGMII attached PHY. That leads to an error in
e1000_write_phy_reg_sgmii_82575, causing the initialization to
fail:
igb: Intel(R) Gigabit Ethernet Network Driver - version 5.4.0-k
igb: Copyright (c) 2007-2014 Intel Corporation.
igb: probe of ????:??:??.? failed with error -3
The patches being reverted are:
commit 1827853354
Author: Aaron Sierra <asierra@xes-inc.com>
Date: Tue Nov 29 10:03:56 2016 -0600
igb: reset the PHY before reading the PHY ID
commit 440aeca4b9
Author: Matwey V Kornilov <matwey@sai.msu.ru>
Date: Thu Nov 24 13:32:48 2016 +0300
igb: Explicitly select page 0 at initialization
The first reverted patch directly causes the problem mentioned above.
In case of SGMII the phy_addr is not known at this point and will
only be obtained by 'igb_get_phy_id_82575' further down in the code.
The second reverted patch forces selection of page 0 in the PHY,
something that the reset tries to address as well.
As pointed out by Alexander Duyck, the patch below fixes the same
issue but in the proper location:
commit 4e684f59d7
Author: Chris J Arges <christopherarges@gmail.com>
Date: Wed Nov 2 09:13:42 2016 -0500
igb: Workaround for igb i210 firmware issue
Reverts: 440aeca4b9.
Reverts: 1827853354.
Signed-off-by: Christian Grönke <c.groenke@infodas.de>
Reviewed-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Aaron Brown <aaron.f.brown@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Add the ixgbe security configuration registers to the
register dump.
Signed-off-by: Shannon Nelson <shannon.nelson@oracle.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
XDP does not support jumbo frames or LRO. These checks are being made
outside the driver when an XDP program is loaded, however, there is
nothing preventing these from changing after an XDP program is loaded.
Add checks so that, while an XDP program is loaded, the MTU cannot be
changed and LRO cannot be enabled.
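A rough sketch of the kind of guards described, with hypothetical field
and function names (xdp_prog, demo_*) standing in for the driver's real
ones:

    #include <linux/netdevice.h>
    #include <linux/bpf.h>

    /* Hypothetical adapter layout, for illustration only. */
    struct demo_adapter {
            struct net_device *netdev;
            struct bpf_prog *xdp_prog;
    };

    static int demo_change_mtu(struct net_device *netdev, int new_mtu)
    {
            struct demo_adapter *adapter = netdev_priv(netdev);

            /* Refuse MTU changes while an XDP program is attached. */
            if (adapter->xdp_prog) {
                    netdev_warn(netdev,
                                "MTU cannot be changed while XDP is loaded\n");
                    return -EPERM;
            }

            netdev->mtu = new_mtu;
            return 0;
    }

    static netdev_features_t demo_fix_features(struct net_device *netdev,
                                               netdev_features_t features)
    {
            struct demo_adapter *adapter = netdev_priv(netdev);

            /* LRO is incompatible with XDP; mask it out while a program
             * is loaded.
             */
            if (adapter->xdp_prog)
                    features &= ~NETIF_F_LRO;

            return features;
    }

The actual driver hooks and error codes may differ; this only shows the
two enforcement points (.ndo_change_mtu and .ndo_fix_features).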
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Use array_size() and store the size as full size_t to protect from
theoretical size overflow when handling HW descriptor rings.
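A minimal sketch of the pattern, assuming a hypothetical ring structure
with an explicit size_t field for the full ring size:

    #include <linux/overflow.h>
    #include <linux/dma-mapping.h>
    #include <linux/gfp.h>
    #include <linux/errno.h>

    /* Hypothetical ring, for illustration only. */
    struct demo_ring {
            struct device *dev;
            unsigned int cnt;       /* number of descriptors */
            size_t size;            /* full ring size in bytes */
            void *descs;
            dma_addr_t dma;
    };

    static int demo_ring_alloc(struct demo_ring *ring, size_t desc_sz)
    {
            /* array_size() saturates to SIZE_MAX on overflow, so the
             * allocation below fails instead of being silently truncated.
             */
            ring->size = array_size(ring->cnt, desc_sz);
            ring->descs = dma_alloc_coherent(ring->dev, ring->size,
                                             &ring->dma, GFP_KERNEL);
            return ring->descs ? 0 : -ENOMEM;
    }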
Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Reviewed-by: Dirk van der Merwe <dirk.vandermerwe@netronome.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Commit 7f1c684a89 ("nfp: setup xdp_rxq_info") mixed the cache
cold and cache hot data in the nfp_net_rx_ring structure (ignoring
the feedback), to try to fit the structure into 2 cache lines
after struct xdp_rxq_info was added. Now that we are about to add
a new field the structure will grow back to 3 cache lines, so
order the members correctly.
Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Reviewed-by: Dirk van der Merwe <dirk.vandermerwe@netronome.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Use kvcalloc() instead of tmp variable + kzalloc() when allocating
SW buffer information to allow falling back to vmalloc and to protect
from theoretical integer overflow.
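A minimal sketch of the allocation pattern being described (structure and
function names are made up):

    #include <linux/mm.h>
    #include <linux/slab.h>

    /* Hypothetical per-buffer bookkeeping, for illustration only. */
    struct demo_rx_buf {
            void *frag;
            dma_addr_t dma_addr;
    };

    static struct demo_rx_buf *demo_alloc_rxbufs(unsigned int cnt)
    {
            /* kvcalloc() checks cnt * sizeof(...) for overflow and may
             * fall back to vmalloc, since the array does not need to be
             * physically contiguous.
             */
            return kvcalloc(cnt, sizeof(struct demo_rx_buf), GFP_KERNEL);
    }

The array is then released with kvfree() instead of kfree().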
Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Reviewed-by: Dirk van der Merwe <dirk.vandermerwe@netronome.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
On machines with buggy ACPI tables or when SR-IOV is already enabled
we may not be able to set the SR-IOV VF limit in sysfs. This is not fatal
because the limit is imposed by the driver anyway; only the sysfs
'sriov_totalvfs' attribute will be too high. Print an error to inform the
user about the failure but allow the probe to continue.
Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Reviewed-by: Dirk van der Merwe <dirk.vandermerwe@netronome.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
dma_mapping_error() returns true or false, but we want to return
-ENOMEM if there was an error.
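A sketch of the corrected error-handling pattern (function and parameter
names are hypothetical):

    #include <linux/dma-mapping.h>
    #include <linux/mm.h>
    #include <linux/errno.h>

    static int demo_map_rx_page(struct device *dev, struct page *page,
                                dma_addr_t *dma)
    {
            *dma = dma_map_page(dev, page, 0, PAGE_SIZE, DMA_FROM_DEVICE);

            /* dma_mapping_error() only tells us whether the mapping
             * failed; translate that into a proper error code instead of
             * returning its boolean result to the caller.
             */
            if (dma_mapping_error(dev, *dma))
                    return -ENOMEM;

            return 0;
    }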
Fixes: 174fd2597b ("amd-xgbe: Implement split header receive support")
Signed-off-by: YueHaibing <yuehaibing@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Now that all the pieces are in place we can start using the A-TCAM
instead of only using the C-TCAM. This allows for much higher scale and
better performance (to be improved further by follow-up patch sets).
Perform the integration with the A-TCAM and the eRP core by reverting
the changes introduced by "mlxsw: spectrum_acl: Enable C-TCAM only mode
in eRP core" and add calls from the C-TCAM code into the eRP core.
Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Reviewed-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Implement rule insertion and deletion into the A-TCAM before we flip the
driver to start using the A-TCAM.
Rule insertion into the A-TCAM is very similar to C-TCAM, but there are
subtle differences between regions of different sizes (i.e., different
number of key blocks).
Specifically, as explained in "mlxsw: spectrum_acl: Allow encoding a
partial key", in 12 key block regions a rule is split into two and the
two halves of the rule are linked using a "large entry key ID".
Such differences are abstracted away by using different region
operations per region type.
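Purely as an illustration of that abstraction, a sketch of what a
per-region-type ops table could look like; the real mlxsw structure and
callback names may differ:

    /* Hypothetical per-region-type operations. */
    struct demo_atcam_region;

    struct demo_atcam_region_ops {
            int (*entry_add)(struct demo_atcam_region *region,
                             const char *key, const char *mask);
            void (*entry_del)(struct demo_atcam_region *region,
                              const char *key, const char *mask);
    };

    /* Generic regions (fewer key blocks) write a single record. */
    static const struct demo_atcam_region_ops demo_region_generic_ops = {};

    /* 12 key block regions split the rule in two and link the halves
     * with a "large entry key ID", so they need their own callbacks.
     */
    static const struct demo_atcam_region_ops demo_region_12kb_ops = {};

    static const struct demo_atcam_region_ops *
    demo_region_ops(unsigned int num_key_blocks)
    {
            return num_key_blocks == 12 ? &demo_region_12kb_ops :
                                          &demo_region_generic_ops;
    }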
Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Reviewed-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
When A-TCAM will be used together with C-TCAM, the C-TCAM code will need
to call into the eRP core in order to get an eRP for an inserted entry.
The eRP core takes an A-TCAM region as one of its arguments, so pass the
C-TCAM region to the insertion function which will later allow us to
derive the A-TCAM region, given it contains the C-TCAM one.
Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Reviewed-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Before we start using the A-TCAM we need to make sure the region is
properly initialized.
This includes the setting of its type (which affects the size of its eRP
table, for example) and its registration with the eRP core.
Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Reviewed-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Each TCAM region currently uses its own resources and there is no
sharing between the different regions.
This is going to change with A-TCAM as each region will need to allocate
an eRP table from the global eRP tables array.
Make the global TCAM resources available to each region by passing the
TCAM private data to the region initialization routine.
Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Reviewed-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
In Spectrum-2 the C-TCAM is only used for rules that can't fit in the
A-TCAM due to a limited number of masks per A-TCAM region.
In addition, rules inserted into the C-TCAM may affect rules residing in
the A-TCAM, by clearing their C-TCAM prune bit.
The two regions are thus closely related and can be thought of as if the
C-TCAM region is encapsulated in the A-TCAM one.
Change the data structures to reflect that before introducing A-TCAM
support and make C-TCAM region initialization part of the A-TCAM region
initialization sequence.
Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Reviewed-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Initialize the A-TCAM as part of the driver's initialization routine.
Specifically, initialize the eRP tables so that A-TCAM regions will be
able to perform allocations of eRP tables upon rule insertion in
subsequent patches.
Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Reviewed-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
When working with 12 key blocks in the A-TCAM, rules are split into two
records, which constitute two lookups. The two records are linked using
a "large entry key ID". The ID is assigned to key blocks 6 to 11 and
resolved during the first lookup. The second lookup is performed using
the ID and the remaining key blocks.
Allow encoding a partial key so that it can be later used to check if an
ID can be reused.
This is done by adding two arguments to the existing encode function
that specify the range of the block indexes we would like to encode. The
key and mask arguments become optional, as we will not need to encode
both of them all the time.
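A toy sketch of an encode helper with an explicit block range; the names
and the per-block encoding are invented, only the idea of optionally
encoding the key and/or mask over blocks [block_start, block_end] is taken
from the text above:

    #include <linux/string.h>

    #define DEMO_BLOCK_SIZE 6       /* bytes per key block, illustrative */

    static void demo_block_encode(int block, const char *src, char *dst)
    {
            /* Copy block 'block' of 'src' to its offset in the encoded
             * buffer.
             */
            memcpy(dst + block * DEMO_BLOCK_SIZE,
                   src + block * DEMO_BLOCK_SIZE, DEMO_BLOCK_SIZE);
    }

    static void demo_key_encode(int block_start, int block_end,
                                const char *key, char *enc_key,
                                const char *mask, char *enc_mask)
    {
            int i;

            for (i = block_start; i <= block_end; i++) {
                    /* key and mask are now optional; a caller encoding
                     * only a partial key (e.g. blocks 6-11 for the large
                     * entry key ID check) passes NULL for the rest.
                     */
                    if (key)
                            demo_block_encode(i, key, enc_key);
                    if (mask)
                            demo_block_encode(i, mask, enc_mask);
            }
    }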
Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Reviewed-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
In a similar fashion to Spectrum-1's region struct, Spectrum-2's struct
needs to store a pointer to the common region struct.
The pointer will be used in follow-up patches that implement rule
insertion and deletion.
Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Reviewed-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
The number of eRPs that can be used by a single A-TCAM region is limited
to 16. When more eRPs are needed, an ordinary circuit TCAM (C-TCAM) can
be used to hold the extra eRPs.
Unlike the A-TCAM, only a single (last) lookup is performed in the
C-TCAM and not a lookup per-eRP. However, modeling the C-TCAM as extra
eRPs will allow us to easily introduce support for pruning in a
follow-up patch set and is also logically correct.
The following diagram depicts the relation between both TCAMs:
                                                               C-TCAM
 +-------------------+        +--------------------+        +-----------+
 |                   |        |                    |        |           |
 |  eRP #1 (A-TCAM)  +--> ... +  eRP #16 (A-TCAM)  +--------+  eRP #17  |
 |                   |        |                    |        |    ...    |
 +-------------------+        +--------------------+        |  eRP #N   |
                                                             |           |
                                                             +-----------+
Lookup order is from left to right.
Extend the eRP core APIs with a C-TCAM parameter which indicates whether
the requested eRP is to be used with the C-TCAM or not.
Since the C-TCAM is only meant to absorb rules that can't fit in the
A-TCAM due to an exceeded number of eRPs or a key collision, an error is
returned when a C-TCAM eRP needs to be created while the eRP state
machine is in its initial state (i.e., 'no masks'). This should only
happen in the face of very unlikely errors when trying to push rules
into the A-TCAM.
In order not to perform unnecessary lookups, the eRP core will only
enable a C-TCAM lookup for a given region if it knows there are C-TCAM
eRPs present.
Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Reviewed-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Currently, no calls are performed into the eRP core, but in order to
make review easier we would like to gradually add these calls.
Have the eRP core initialize a region's master mask to all ones and
allow it to use an empty eRP table. This directs the lookup to the
C-TCAM and allows the C-TCAM only mode to continue working.
Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Reviewed-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
When rules are inserted into the A-TCAM they are associated with a mask,
which is part of the lookup key: { masked key, mask ID, region ID }.
These masks are called rule patterns (RP) and the aggregation of several
masks into one (to be introduced in follow-up patch sets) is called an
extended RP (eRP).
When a packet undergoes a lookup in an ACL region it is masked by the
current set of eRPs used by the region, looking for an exact match.
Eventually, the rule with the highest priority is picked.
These eRPs are stored in several global banks to allow for lookup to
occur using several eRPs simultaneously.
At first, an ACL region will only require a single mask - upon the
insertion of the first rule. In this case, the region can use the
"master RP" which is composed by OR-ing all the masks used by the
region. This mask is a property of the region and thus there is no need
to use the above mentioned banks.
At some point, a second mask will be needed. In this case, the region
will need to allocate an eRP table from the above mentioned banks and
insert its masks there.
From now on, upon lookup, the eRP table used by the region will be
fetched from the eRP banks - using {eRP bank, Index within the bank} -
and the eRPs present in the table will be used to mask the packet. Note
that masks with consecutive indexes are inserted into consecutive banks.
When rules are deleted and a region only needs a single mask once again
it can free its eRP table and use the master RP.
The above logic is implemented in the eRP core and represented using the
following state machine:
 +------------+    create mask - as master RP    +---------------+
 |            +--------------------------------->|               |
 |  no masks  |                                  |  single mask  |
 |            <----------------------------------+               |
 +------------+           delete mask            +-----+--^------+
                                                       |  |
                                                       |  |
                                         create mask - |  | delete mask -
      create mask                transition to use eRP |  | transition to
      +--------+                                 table |  | use master RP
      |        |                                       |  |
      |        |                                       |  |
 +----v--------+----+         create mask         +----v--+-----+
 |                  <-----------------------------+             |
 |  multiple masks  |                             |  two masks  |
 |                  +---------------------------->|             |
 +------------------+    delete mask - if two     +-------------+
                              remaining
The code that actually configures rules in the A-TCAM will interface
with the eRP core by getting or putting an eRP based on the required
mask used by the rule.
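As a toy rendering of the state machine above (reference counting per
mask and the actual eRP table management are omitted, and all names are
made up):

    enum demo_erp_state {
            DEMO_ERP_NO_MASKS,              /* region has no masks */
            DEMO_ERP_SINGLE_MASK,           /* region uses the master RP */
            DEMO_ERP_TWO_MASKS,             /* region uses an eRP table */
            DEMO_ERP_MULTIPLE_MASKS,        /* region uses an eRP table */
    };

    static enum demo_erp_state demo_erp_mask_created(enum demo_erp_state state)
    {
            switch (state) {
            case DEMO_ERP_NO_MASKS:
                    return DEMO_ERP_SINGLE_MASK;    /* use master RP */
            case DEMO_ERP_SINGLE_MASK:
                    return DEMO_ERP_TWO_MASKS;      /* move to eRP table */
            default:
                    return DEMO_ERP_MULTIPLE_MASKS;
            }
    }

    static enum demo_erp_state demo_erp_mask_deleted(enum demo_erp_state state,
                                                     unsigned int masks_left)
    {
            switch (state) {
            case DEMO_ERP_SINGLE_MASK:
                    return DEMO_ERP_NO_MASKS;
            case DEMO_ERP_TWO_MASKS:
                    return DEMO_ERP_SINGLE_MASK;    /* back to master RP */
            case DEMO_ERP_MULTIPLE_MASKS:
                    return masks_left == 2 ? DEMO_ERP_TWO_MASKS :
                                             DEMO_ERP_MULTIPLE_MASKS;
            default:
                    return DEMO_ERP_NO_MASKS;
            }
    }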
Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Reviewed-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Add the following resources to be used by A-TCAM code:
* Maximum number of eRP banks
* Maximum size of eRP bank
* Number of eRP entries required for a 2/4/8/12 key block mask
Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Reviewed-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Add a resource to make sure we do not exceed the maximum number of
supported large key IDs in a region.
Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Reviewed-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
The register is used to add and delete eRPs from the eRP table.
Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Reviewed-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
The register is used to configure rules in the A-TCAM.
Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Reviewed-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Before introducing A-TCAM support we need to make sure all the necessary
fields are configurable and not hard-coded to values that worked for the
C-TCAM-only use case.
This includes - for example - the ability to configure the eRP table
used by the TCAM region.
Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Reviewed-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Fixes the following sparse warning:
drivers/net/ethernet/microchip/lan743x_main.c:2944:25: warning:
symbol 'lan743x_pm_ops' was not declared. Should it be static?
Signed-off-by: Wei Yongjun <weiyongjun1@huawei.com>
Acked-by: Bryan Whitehead <Bryan.Whitehead@microchip.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Allow obtaining MTTs starting at any index,
thus giving better cache utilization.
For this, allow setting log_mtts_per_seg to 0, and use
this as the default.
Signed-off-by: Tariq Toukan <tariqt@mellanox.com>
Signed-off-by: Eli Cohen <eli@mellanox.co.il>
Signed-off-by: Anaty Rahamim Bar Kat <anaty@mellanox.com>
Reviewed-by: Jack Morgenstein <jackm@dev.mellanox.co.il>
Signed-off-by: David S. Miller <davem@davemloft.net>
Enable offloading of TC matching on tos/ttl for ipv4/6 tunnels.
Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com>
Reviewed-by: Roi Dayan <roid@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Use the values provided by user-space for the encapsulation headers.
Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com>
Reviewed-by: Roi Dayan <roid@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Currently, the ttl for the encapsulation headers is taken from the
route lookup result. As a pre-step to allow for an offload case where
the user specifies the ttl, take it from the route lookup only when the
user-provided ttl is zero. While here, also move to use u8 instead of
int for the ttl.
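A minimal sketch of that fallback, assuming the function and parameter
names below (only ip4_dst_hoplimit() is an existing kernel helper):

    #include <net/route.h>

    static u8 demo_encap_ttl(u8 user_ttl, const struct rtable *rt)
    {
            /* Prefer the ttl specified by user space; fall back to the
             * route lookup result only when it was left at zero.
             */
            if (user_ttl)
                    return user_ttl;

            return ip4_dst_hoplimit(&rt->dst);
    }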
Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com>
Reviewed-by: Roi Dayan <roid@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
The vxge driver doesn't need anything provided by pci_hotplug.h, so remove
the unnecessary include of it.
Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
Acked-by: Jon Mason <jdmason@kudzu.us>
Signed-off-by: David S. Miller <davem@davemloft.net>
Add a helper for checking whether polling is used to detect PHY status
changes.
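Presumably a check along these lines (written here with a demo_ prefix so
as not to claim the exact helper name):

    #include <linux/phy.h>

    /* A PHY is polled for status changes when it has no interrupt
     * assigned.
     */
    static inline bool demo_phy_polling_mode(struct phy_device *phydev)
    {
            return phydev->irq == PHY_POLL;
    }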
Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com>
Reviewed-by: Andrew Lunn <andrew@lunn.ch>
Signed-off-by: David S. Miller <davem@davemloft.net>
Use generic kernel CRC32 implementation because it:
1. Should be faster (uses lookup tables),
2. Removes duplicated CRC generation code,
3. Uses well-proven algorithm instead of coding it one more time.
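As a generic illustration (not necessarily how this particular driver
consumes the CRC), computing a multicast hash with the kernel helper
looks like:

    #include <linux/crc32.h>
    #include <linux/etherdevice.h>

    static u32 demo_mc_hash(const u8 *mc_addr)
    {
            /* crc32_le() uses the kernel's table-driven implementation;
             * which CRC bits the hardware consumes is device specific.
             */
            return crc32_le(~0, mc_addr, ETH_ALEN);
    }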
Suggested-by: Eric Biggers <ebiggers3@gmail.com>
Signed-off-by: Krzysztof Kozlowski <krzk@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Use generic kernel CRC32 implementation because it:
1. Should be faster (uses lookup tables),
2. Removes duplicated CRC generation code,
3. Uses well-proven algorithm instead of coding it one more time.
Suggested-by: Eric Biggers <ebiggers3@gmail.com>
Signed-off-by: Krzysztof Kozlowski <krzk@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
genphy_config_aneg() should be called only by PHYs that implement
the Clause 22 register set. Prevent Clause 45 PHYs that don't implement
the register set from calling the genphy function.
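A sketch of the kind of guard described; the exact placement and error
code in the actual patch may differ:

    #include <linux/phy.h>

    static int demo_config_aneg(struct phy_device *phydev)
    {
            /* genphy_config_aneg() pokes Clause 22 registers, which a
             * Clause 45 PHY without the C22 register set does not have.
             */
            if (phydev->is_c45) {
                    phydev_err(phydev,
                               "C45 PHY without C22 registers, not calling genphy_config_aneg\n");
                    return -EOPNOTSUPP;
            }

            return genphy_config_aneg(phydev);
    }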
Signed-off-by: Camelia Groza <camelia.groza@nxp.com>
Reviewed-by: Andrew Lunn <andrew@lunn.ch>
Signed-off-by: David S. Miller <davem@davemloft.net>
So we can infer the number of VM-Exits.
Signed-off-by: Toshiaki Makita <makita.toshiaki@lab.ntt.co.jp>
Signed-off-by: David S. Miller <davem@davemloft.net>
Add counters below:
* Tx
- xdp_tx: frames sent by ndo_xdp_xmit or XDP_TX.
- xdp_tx_drops: dropped frames out of xdp_tx ones.
* Rx
- xdp_packets: frames went through xdp program.
- xdp_tx: XDP_TX frames.
- xdp_redirects: XDP_REDIRECT frames.
- xdp_drops: any dropped frames out of xdp_packets ones.
Signed-off-by: Toshiaki Makita <makita.toshiaki@lab.ntt.co.jp>
Signed-off-by: David S. Miller <davem@davemloft.net>
Make sure to use the same logic in all places to determine the XDP send
queue. This is also useful for the XDP counters that the following commit
will introduce.
Signed-off-by: Toshiaki Makita <makita.toshiaki@lab.ntt.co.jp>
Signed-off-by: David S. Miller <davem@davemloft.net>
Since XDP was introduced, the drop counter can be updated much more
frequently than before, as XDP_DROP increments it. Thus, a per-queue drop
counter would be useful for performance analysis.
This also avoids cache contention and a race on updating the counter. It
is currently racy because NAPI handlers read-modify-write it without any
locks.
There are more counters in dev->stats that are racy, but I left them
per-device because they are rarely updated and are not worth being
per-queue counters IMHO. Fixing them would require atomic ops or some
kind of locks.
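For illustration, a per-queue counter update of this kind typically looks
like the sketch below (structure and field names are made up; virtio_net's
actual per-queue stats layout may differ):

    #include <linux/u64_stats_sync.h>

    /* Hypothetical per-receive-queue statistics. */
    struct demo_rq_stats {
            struct u64_stats_sync syncp;
            u64 packets;
            u64 drops;
    };

    static void demo_rq_count_drop(struct demo_rq_stats *stats)
    {
            /* Only the owning NAPI handler updates these counters, so no
             * lock is needed; 32-bit readers rely on the u64_stats
             * seqcount for a consistent snapshot.
             */
            u64_stats_update_begin(&stats->syncp);
            stats->drops++;
            u64_stats_update_end(&stats->syncp);
    }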
Signed-off-by: Toshiaki Makita <makita.toshiaki@lab.ntt.co.jp>
Signed-off-by: David S. Miller <davem@davemloft.net>
The purpose is to keep receive_buf arguments simple when more per-queue
counter items are added later.
Also, XDP_TX related sq counters will be updated in the following changes,
so create a container struct virtnet_rx_stats which will include both
rq and sq statistics. For now it only covers rq stats.
Signed-off-by: Toshiaki Makita <makita.toshiaki@lab.ntt.co.jp>
Signed-off-by: David S. Miller <davem@davemloft.net>
When received packets are dropped in the virtio_net driver, the received
packets counter is incremented but the bytes counter is not.
As a result, for instance if we drop all packets by XDP, only received
packets are counted and bytes stays 0, which looks inconsistent.
IMHO received packets/bytes should be counted if packets are produced by
the hypervisor, like what common NICs on physical machines are doing.
So fix the bytes counter.
Signed-off-by: Toshiaki Makita <makita.toshiaki@lab.ntt.co.jp>
Signed-off-by: David S. Miller <davem@davemloft.net>
Rx hash/filter table configuration uses rss_conf_obj to configure filters
in the hardware. This object is initialized only when the interface is
brought up.
This patch adds driver changes to configure RSS params only when the
device is in the opened state. In the port-disabled case, the config will
be cached in the driver structure and applied in the subsequent load path.
Please consider applying it to the 'net' branch.
Signed-off-by: Sudarsana Reddy Kalluru <Sudarsana.Kalluru@cavium.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Function mlx4_RST2INIT_QP_wrapper saved the qp number passed in the qp
context, rather than the one passed in the input modifier.
However, the qp number in the qp context is not defined as a
required parameter by the FW. Therefore, drivers may choose to not
specify the qp number in the qp context for the reset-to-init transition.
Thus, we must save the qp number passed in the command input modifier --
which is always present. (This saved qp number is used as the input
modifier for command 2RST_QP when a slave's QPs are destroyed).
Fixes: c82e9aa0a8 ("mlx4_core: resource tracking for HCA resources used by guests")
Signed-off-by: Jack Morgenstein <jackm@dev.mellanox.co.il>
Signed-off-by: Tariq Toukan <tariqt@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Certain PHYs have issues when operating in GBit slave mode and can
be forced to master mode. One example is the RTL8211C; the Micrel PHY
driver also has a DT setting to force master mode.
If two such chips are link partners the autonegotiation will fail.
The standard defines a self-clearing-on-read, latched-high bit to
indicate this error. Check this bit to inform the user.
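For reference, a sketch of such a check; the bit mask is written out here
because the exact mii.h define for it is not quoted in this text:

    #include <linux/mii.h>
    #include <linux/phy.h>

    static int demo_check_ms_fault(struct phy_device *phydev)
    {
            int stat1000 = phy_read(phydev, MII_STAT1000);

            if (stat1000 < 0)
                    return stat1000;

            /* 1000BASE-T Status, bit 15: Master/Slave configuration
             * fault. Latched high, cleared on read.
             */
            if (stat1000 & 0x8000)
                    phydev_err(phydev,
                               "Master/Slave resolution failed, maybe conflicting manual settings?\n");

            return 0;
    }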
Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>