All vector drivers now allow a BPF program to be loaded and
associated with the RX socket in the host kernel.
1. The program can be loaded via an extra kernel command-line
option to any of the vector drivers.
2. The program can also be loaded as "firmware", using the
ethtool flash option. This facility can be turned on or off
via a command-line option.
A simple wrapper that generates the BPF firmware for the raw
socket driver from a tcpdump/libpcap filter expression can be
found at: https://github.com/kot-begemot-uk/uml_vector_utilities/
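For reference, the host-side attach uses the standard classic
BPF socket mechanism. A minimal sketch, assuming the RX socket
is an ordinary packet-socket descriptor; uml_attach_bpf() is an
illustrative name, not the driver's actual helper:

#include <linux/filter.h>
#include <sys/socket.h>

/* Attach a classic BPF program to the host-side RX socket.
 * Returns 0 on success, -1 with errno set on failure.
 */
static int uml_attach_bpf(int fd, struct sock_filter *insns,
			  unsigned short len)
{
	struct sock_fprog prog = {
		.len	= len,
		.filter	= insns,
	};

	return setsockopt(fd, SOL_SOCKET, SO_ATTACH_FILTER,
			  &prog, sizeof(prog));
}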
Signed-off-by: Anton Ivanov <anton.ivanov@cambridgegreys.com>
Signed-off-by: Richard Weinberger <richard@nod.at>
Convert files to use SPDX headers. All files are licensed under the GPLv2.
Signed-off-by: Alex Dewar <alex.dewar@gmx.co.uk>
Signed-off-by: Richard Weinberger <richard@nod.at>
With the addition of bess support, which uses
connection-oriented SEQPACKET sockets, the vector routines
can now encounter a "remote end closed the connection"
scenario. This adds handling code to detect it in the TX
path and in the legacy RX path. There is no way to detect
it in the vector RX path, because that path can
legitimately return 0 even when the remote end has not
closed the connection. As a result, detection is delayed
until the first TX event after the close.
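For illustration only (these are the generic socket-API
signals, not necessarily the driver's exact checks): on a
connection-oriented SEQPACKET socket a zero-length read on
the legacy RX path, or EPIPE/ECONNRESET on the TX path,
indicates the peer has gone away, while recvmmsg() can
return 0 legitimately:

#include <errno.h>
#include <stdbool.h>
#include <sys/types.h>

/* Legacy RX path: treat a 0-byte recv() on a SEQPACKET socket
 * as the remote end having closed the connection.
 */
static bool peer_closed_rx(ssize_t rx_ret)
{
	return rx_ret == 0;
}

/* TX path: the close surfaces as a connection-level error. */
static bool peer_closed_tx(ssize_t tx_ret)
{
	return tx_ret < 0 && (errno == EPIPE || errno == ECONNRESET);
}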
Signed-off-by: Anton Ivanov <anton.ivanov@cambridgegreys.com>
Signed-off-by: Richard Weinberger <richard@nod.at>
Vector RAW in UML needs to BPF-filter its own MAC only if
QDISC_BYPASS has failed. If QDISC_BYPASS is successful,
locally originated frames are not visible to readers on the
raw socket.
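A minimal sketch of that logic (helper name and filter layout
are illustrative, not the driver's actual code): try
PACKET_QDISC_BYPASS first, and only on failure install a
classic BPF program that drops frames whose Ethernet source
MAC matches our own:

#include <linux/filter.h>
#include <linux/if_packet.h>
#include <sys/socket.h>

static int raw_skip_own_frames(int fd, const unsigned char *mac)
{
	int one = 1;
	struct sock_filter insns[] = {
		/* A <- last 4 bytes of source MAC (offset 8) */
		BPF_STMT(BPF_LD | BPF_W | BPF_ABS, 8),
		/* no match -> accept */
		BPF_JUMP(BPF_JMP | BPF_JEQ | BPF_K,
			 ((unsigned)mac[2] << 24) | (mac[3] << 16) |
			 (mac[4] << 8) | mac[5], 0, 2),
		/* A <- first 2 bytes of source MAC (offset 6) */
		BPF_STMT(BPF_LD | BPF_H | BPF_ABS, 6),
		/* full match -> drop our own looped-back frame */
		BPF_JUMP(BPF_JMP | BPF_JEQ | BPF_K,
			 (mac[0] << 8) | mac[1], 1, 0),
		BPF_STMT(BPF_RET | BPF_K, 0xffffffff),	/* accept */
		BPF_STMT(BPF_RET | BPF_K, 0),		/* drop */
	};
	struct sock_fprog prog = {
		.len	= sizeof(insns) / sizeof(insns[0]),
		.filter	= insns,
	};

	/* Preferred: bypass the qdisc so locally originated
	 * frames are never looped back to this socket at all.
	 */
	if (setsockopt(fd, SOL_PACKET, PACKET_QDISC_BYPASS,
		       &one, sizeof(one)) == 0)
		return 0;

	/* Fallback: filter out our own source MAC with BPF. */
	return setsockopt(fd, SOL_SOCKET, SO_ATTACH_FILTER,
			  &prog, sizeof(prog));
}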
Signed-off-by: Anton Ivanov <anton.ivanov@cambridgegreys.com>
Signed-off-by: Richard Weinberger <richard@nod.at>
1. Provides infrastructure for vector IO using
recvmmsg/sendmmsg (see the sketch after this list).
1.1. Multi-message read.
1.2. Multi-message write.
1.3. Optimized queue support for multi-packet enqueue/dequeue.
1.4. BQL/DQL support.
2. Implements several transports, as well as support for
direct wiring of PWEs to the NIC. Allows direct connection of
VMs to the host, other VMs and network devices with no switch
in use.
2.1. Raw socket: >4 times higher PPS and 10 times higher TCP
RX throughput than the existing pcap-based transport (>4 Gbit)
2.2. New tap transport using socket RX and tap xmit, with
similar performance improvements (>4 Gbit)
2.3. GRE transport - direct wiring to GRE PWE
2.4. L2TPv3 transport - direct wiring to L2TPv3 PWE
3. Tuning, performance and offload-related settings support
via ethtool.
4. Initial BPF support - used in tap/raw to avoid software looping
5. Scatter Gather support.
6. VNET and checksum offload support for raw socket transport.
7. TSO/GSO support where applicable or available
8. Migrates all error messages to netdevice_*() and
rate-limits them where needed.
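A minimal sketch of the multi-message read (burst length,
buffer size and the non-blocking flag are illustrative, not
the driver's actual values); sendmmsg() is used symmetrically
on the TX side:

#define _GNU_SOURCE
#include <string.h>
#include <sys/socket.h>

#define VEC_LEN		64	/* frames per burst */
#define FRAME_MAX	2048	/* per-frame buffer size */

static int rx_burst(int fd)
{
	static char bufs[VEC_LEN][FRAME_MAX];
	struct iovec iov[VEC_LEN];
	struct mmsghdr msgs[VEC_LEN];
	int i;

	memset(msgs, 0, sizeof(msgs));
	for (i = 0; i < VEC_LEN; i++) {
		iov[i].iov_base = bufs[i];
		iov[i].iov_len = FRAME_MAX;
		msgs[i].msg_hdr.msg_iov = &iov[i];
		msgs[i].msg_hdr.msg_iovlen = 1;
	}

	/* One syscall returns up to VEC_LEN packets; the length
	 * of packet i is reported in msgs[i].msg_len.
	 */
	return recvmmsg(fd, msgs, VEC_LEN, MSG_DONTWAIT, NULL);
}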
Signed-off-by: Anton Ivanov <anton.ivanov@cambridgegreys.com>
Signed-off-by: Richard Weinberger <richard@nod.at>