Tactical Data Link Simulation: Realistic Network Emulation

Bridging Simulation and Reality in Modern Warfare

The digital age has transformed the battlefield, elevating information to a cornerstone of military operations. At the heart of this transformation is Link 16, a tactical data link that serves as the nervous system for U.S. and NATO forces, facilitating the exchange of real-time tactical data across air, sea and land platforms, and currently being tested in space. However, the complex interplay of communications carried by Link 16, built on the precision of Time Division Multiple Access (TDMA), poses a unique challenge: replicating its protocol in simulated environments so that applications can be tested and prepared for real-world conditions.

The Challenge of Fidelity in Simulation

The intensity of military operations demands more than robust systems; it calls for a guarantee that those systems and applications will perform under the most challenging conditions. Traditional testing environments often fail to capture the full spectrum of complexities presented by TDMA networks, leaving a gap between simulated exercises and actual field experience. This gap not only impacts operational readiness but also erodes the strategic advantage forces rely on in combat. Furthermore, reliance on expensive field trials that cannot fully control or replicate real-world conditions carries significant risk. Live exercises are subject to the unpredictability of the day and offer limited repeatability. In contrast, simulated environments provide controlled, consistent conditions, allowing detailed analysis and the opportunity to test systems against a variety of tailored scenarios, a critical aspect of preparing for the complexity and uncertainty of real-world operations.
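As background, the TDMA scheme at the heart of Link 16 divides time into fixed slots and gives each participant exclusive use of its own slots. A toy sketch of that rota in Python (purely illustrative: the slot count and platform names here are invented, and real Link 16 slot assignment is far more sophisticated than round-robin):

```python
# Toy TDMA slot rota -- illustrative only, not Link 16 or the NE-ONE.
SLOTS_PER_SECOND = 128   # invented slot rate for this sketch
PLATFORMS = ["F-16", "AWACS", "Destroyer", "Ground-Station"]

def slot_owner(slot_index):
    """Round-robin: each platform may transmit only in its own slots."""
    return PLATFORMS[slot_index % len(PLATFORMS)]

def transmit_times(platform, seconds=1):
    """Second offsets at which `platform` is allowed to transmit."""
    slot_len = 1.0 / SLOTS_PER_SECOND
    return [i * slot_len
            for i in range(SLOTS_PER_SECOND * seconds)
            if slot_owner(i) == platform]
```

Because every participant shares one clock-driven rota like this, an emulation that reproduces the rota's timing, rather than just average bandwidth, reproduces the burst-and-gap traffic pattern applications actually experience.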
A Leap Forward with TDMA Mesh Network Emulation

In a collaborative initiative under the SERAPIS Framework Agreement, Antillion and Calnex Solutions have taken a decisive step towards bridging this gap, supporting DSTL's mission to deliver cutting-edge capabilities. By developing a new TDMA Mesh feature for the Calnex NE-ONE Network Emulator, they have crafted a simulation environment that echoes the fidelity of real-world TDMA protocols. This advancement enables rigorous evaluation and refinement of processes, network policies and protocols within simulated Denied, Degraded, Intermittent, and Limited (DDIL) network environments, ensuring that every strategic nuance is understood and every contingency accounted for.

Real-World Network Emulation as a Strategic Imperative

The NE-ONE's TDMA Mesh feature marks a significant advance in its capability for defense. It showcases the advantages of software-defined implementations, which allow a meticulous recreation of the transmission patterns, latency, jitter and loss characteristic of TDMA Mesh networks, without the need for a physical radio environment. By leveraging the NE-ONE, defense and government organizations can conduct thousands of controllable, repeatable networked-application readiness tests, not only mitigating risk but also significantly reducing the cost and time associated with traditional field trials.

Learn More in our Upcoming Webinar!

If you would like to understand more about the Calnex NE-ONE TDMA Mesh feature, sign up for our upcoming live webinar taking place on Wednesday, November 30, 2023.

Read more about Calnex in Defense: Modeling Military Networks

Bringing Network Realism to Military Training Environments

Networks are a critical component of modern warfare, and their impact on the war fighter's effectiveness cannot be overstated. And here we are not just talking about their criticality in frontline combat situations, but also in logistics and wider intelligence gathering. It is therefore vital to include the network environment in your simulation or training set-up.

Today's armed forces have access to an unprecedented array of communications pathways – SATCOM networks, MANETs (Mobile Ad-hoc Networks), Tactical Data Links, radio, military broadband and more – all helping to deliver ever-increasing quantities of data and intelligence. That is, until atmospheric conditions, local terrain, demand from competing sources, or the enemy play their part in slowing down or interrupting access to mission-critical data.

Understanding and Coping with Poor Networks

In our civilian world we have grown used to instant access to data and, on those occasions when we encounter slow application response times, it is remarkable how quickly we become frustrated, anxious or even angry. Now imagine applying those same restrictions to a real-life, already stressful combat situation and consider how personnel will react. They will blame the kit; they will blame themselves; they will blame those higher up the chain of command. Rational thought will diminish and panic could ensue. And the reason is that, during training, they were never subjected to scenarios where network conditions impact data transmission and device performance. This can lead them to believe that something is wrong (when it may be entirely normal) and so impede effective decision making.
However, just as a flight simulator can be used to train pilots to react in emergency situations until their responses become almost second nature, incorporating a dynamic, evolving yet controllable network environment into the training platform helps war fighters prepare for the realities and challenges their communications and data-delivery systems may present in a real conflict, and provides strategies to minimize stress and mistakes.

A 'Flight Simulator' for Networks

Calnex NE-ONE Network Emulators, with their ability to simulate and manipulate network conditions and environments, can easily be incorporated into your simulation or training platform to deliver the realism you need for the most real-world training experience possible. All combinations of land/air/sea/space topologies can be simulated with this technology, together with meshed or partially meshed and convoy networks. This includes SATCOM hops, DSL/mobile network connections, SD-WAN and international WAN environments, creating an accurate working model of real-world, international, multi-domain networks, all available and under your complete control within your training environment. Network impairments such as restricted bandwidth, latency, packet loss, packet reordering, packet errors and more can all be introduced into the training environment to simulate atmospheric conditions, local terrain or enemy jamming. And, with the growing use of Low Earth Orbit (LEO) satellite constellations, it is possible to familiarize the war fighter with the behavior of this technology as well.

Deployment in Cyber Ranges

Another use for this type of technology is in the cyber range, where applications and devices can be subjected to various cyber attacks which defenders then try to repel.
As these attacks can originate from any location worldwide, through any available method, it is vital to simulate the correct network conditions; otherwise, whether attacking or defending, the operator may believe that something is not working correctly simply because the cyber range's network conditions do not match real-world conditions.

Delivering a Truly Holistic Training Experience

Advances in computer graphics and the introduction of immersive Virtual Reality technologies have brought the modern training simulator incredibly close to the real thing, making it a vital resource for any military organization. However, we believe that to deliver a truly holistic training experience, the network environment must be factored in: we owe it to our war fighters to ensure they are fully prepared to operate in the disadvantaged and disrupted network conditions likely to be encountered in-theater.

Want to Find Out More?

If you would like to understand more about how to add network realism to the training environment, we would be happy to arrange an online demonstration. Learn more: Modeling Military Networks
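Conceptually, the impairments described above are transformations applied to every packet in flight. A minimal sketch of the idea in Python (this is not how the NE-ONE is implemented; all parameter values are invented for illustration):

```python
# Toy model of a network emulator's impairment stage: fixed latency,
# random jitter and random loss applied to a stream of packet send times.
# Illustrative only -- not Calnex's implementation.
import random

def impair(send_times, latency_s=0.5, jitter_s=0.05, loss_rate=0.02, seed=1):
    """Return arrival times for packets sent at `send_times`.
    Lost packets are represented as None."""
    rng = random.Random(seed)   # seeded, so the "network" behaves repeatably
    arrivals = []
    for t in send_times:
        if rng.random() < loss_rate:
            arrivals.append(None)                              # packet lost
        else:
            arrivals.append(t + latency_s + rng.uniform(0, jitter_s))
    return arrivals
```

Seeding the random generator is what makes an emulated test repeatable, which is exactly the property live field exercises lack.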

What is Jitter and should we care about it?

This blog is Part 2 of a 3-part series and concentrates on jitter (variable latency). Part 1 dealt with distance, latencies and orbits, and Part 3 will discuss the effect of atmospheric conditions and choice of wavebands.

First, it's worth noting that while the term jitter is used by network specialists and certain application performance engineers, it isn't really a network term at all – it's a communication engineer's term – and essentially refers to the difference between when a timing signal should have been received and when it was actually received. So, in an ideal world, if you transmit a signal (down a wire or as a wave) 1 million times a second (1 MHz) at even spacing, then that's what you expect to receive: 1 pulse at exactly every microsecond (millionth of a second). That's not to say the pulses might not all be delayed (perhaps due to distance), but the expectation is that they are all delayed by the same amount, in which case the jitter is 0. Unfortunately, real life is not like that, and signals can arrive relatively too early, or too late. This is jitter.

Packet Delay Variation (PDV)

In networks and application engineering, data is typically grouped together and transmitted as packets, so the correct term for packet-type jitter is Packet Delay Variation (PDV), but we'll use the term jitter to mean the same thing, i.e. PDV.

What is affected by Jitter?

Let's begin with what isn't really affected by jitter: connection-oriented applications. Jitter doesn't matter to most "standard" applications, i.e. applications based on the TCP part of the IP protocol family.
These include most "transactional" and "file transferring" applications, like:

● Web – HTTP and HTTPS
● Network file systems – CIFS (NetBIOS over TCP), NFS
● File transfer – FTP, SFTP
● Custom TCP communications – various messengers and apps
● Video/audio services – Netflix, internet radio (* we'll come back to this one)

The reason is that they are not especially time-sensitive and are often themselves waiting for acknowledgement of successful packet delivery before they can transmit more data, i.e. they are inherently jittery in themselves.

Streaming Applications/Apps

The jitter problem typically starts when you try to stream live audio or video, a telephone call, live telemetry or timing protocols over a network. As mentioned, the network doesn't send a bit stream; rather, it generally sends a packet stream (the packet being the basic unit of transmission in most modern data networks) with a regular time gap between packets. So you try to send a packet containing audio samples (for example) 1000 times a second so that the playback system can play them back as sounds, but they don't arrive with that spacing, due to jitter (PDV), and so the played sound is all over the place. Wait, you say, that doesn't actually happen in real life. No indeed, because two primary solutions have been adopted:

1. The application is not actually real-time!
A realization that the stream does not actually need to be real-time, e.g. internet radio, Netflix (see, I said we'd be back here!), as you can buffer (receive in advance) a large chunk of data (20 seconds, for example) and play the samples back evenly spaced, likely with more than one sample in each packet (as discussed in the box-out). You can do this because you know the encoding/decoding system (codec) and its data rate. You also know that, in general, it doesn't matter if one consumer hears/sees the station/program a little later than another.

2.
It is real-time, but you can delay playback a bit
Our example here is a telephone call. We clearly can't delay the speech for many seconds, as that interferes with our brain's speech processing, as many of us have experienced when things go wrong on long-distance telephone calls. But we can hold the packet playback back slightly. The technique uses a jitter buffer (which perhaps should more properly be called an anti-jitter buffer!). Packets are stored in the jitter buffer (in the correct order) and then played back at an even speed, thus sorting out the audio. The problem is that this buffer cannot be too large, because "we" notice. Humans will usually notice round-trip voice delays of over 250 ms. The ITU (International Telecommunication Union) recommends a maximum of 150 ms one-way latency (300 ms round trip). Remember (from Part 1) that a satellite phone call via a GEO satellite will easily exceed 300 ms even with no jitter present, though LEOs (like Iridium) and MEOs (like O3b) don't suffer this basic very high latency, being much closer to us.

Streaming Audio Example

As an example, let's design a streaming protocol for uncompressed audio CD data over a network. CD audio has a stereo bit rate of 1.4112 Mbps, made up of 44,100 samples per second with 16 bits per channel (so for stereo's 2 channels, 32 bits per sample). I mentioned that data networks use packets, so we could put every 32-bit (4-byte) sample into a packet and send them evenly spaced to get 44.1K samples per second. The problem is that packets have a minimum size and lots of overhead (addressing, checksum, minimum packet size etc.). It would be like sending 4 passengers in a 40-seat coach – lots of overhead and waste. For Ethernet packets carrying IPv4 and UDP, the overhead would be 14 + 20 + 8 + 4 = 46 bytes – just to send 4 bytes of data!
And that completely ignores the fact that "Layer 2" frames like Ethernet have a minimum packet size – 64 bytes in the case of Ethernet. So for 4 bytes of valuable data we'd be sending 16 times that in headers and wasted space – a bit rate of roughly 22.6 Mbps. Outrageously high for most networks and an impossible waste for a satellite network! So instead we'll
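The post breaks off above, but the arithmetic it sets up is easy to check, and packing many samples into each packet is the obvious remedy it is heading towards. A quick sketch (the 46-byte overhead and 64-byte minimum frame come from the text; the 441-samples-per-packet figure is our own illustrative choice; 64 bytes per frame at 44,100 frames per second works out to roughly 22.6 Mbps on the wire):

```python
# Checking the packet-overhead arithmetic (preamble and inter-frame gap
# are ignored, as in the text above).
SAMPLE_RATE = 44_100          # CD samples per second
BYTES_PER_SAMPLE = 4          # 16 bits x 2 stereo channels
HEADERS = 14 + 20 + 8 + 4     # Ethernet + IPv4 + UDP + FCS = 46 bytes
MIN_FRAME = 64                # minimum Ethernet frame size, bytes

# One 4-byte sample per packet: every frame is padded up to the 64-byte minimum.
one_sample_bps = SAMPLE_RATE * MIN_FRAME * 8        # ~22.6 Mbps on the wire

# Pack 10 ms of audio (441 samples, an illustrative choice) per packet instead:
samples_per_packet = 441
frame = max(MIN_FRAME, HEADERS + samples_per_packet * BYTES_PER_SAMPLE)
packed_bps = (SAMPLE_RATE // samples_per_packet) * frame * 8   # ~1.45 Mbps
```

Packing brings the wire rate back close to the 1.4112 Mbps raw audio rate, at the cost of 10 ms of extra buffering delay.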

What Causes Satellite Loss and Error?

This blog is Part 3 of a 3-part series and concentrates on errors, loss, the effect of atmospheric conditions and choice of wavebands. Part 1 dealt with distance, latencies and orbits, and Part 2 looked at jitter.

Atmospheric Conditions and SATCOMS Data Transmission

There's no doubt that in an ideal world the transmission from ground to satellite and vice versa would be error free, and if that were the case there would be nothing to say here. The bottom line is that it is not error free, and the problems occur primarily due to atmospheric conditions. So what are these conditions? Well, it all starts with the Sun and goes down from there: solar activity, ionospheric scintillation, rain, cloud, fog and snow can all degrade the signal. That's not an exhaustive list, but you get the idea. Any of these factors can produce an error in the transmission stream, and the more you get, the harder they are to deal with.

Increasing Bandwidth vs Transmission Quality

Let's talk about the physical transmission layer (OSI Layer 1) in SATCOMS and note that it might be measured in bits or symbols. What's a symbol? Well, SATCOMS looks at transmission at the lowest level in Hz (Hertz – cycles per second). We could send data one bit per cycle in the standard binary fashion, as happens in wired and optical circuits, or, if conditions allow, we might try to have several different signal "levels" per cycle and encode 2, 4, 6 or more bits in each one. These multi-bit cycles are called symbols. Technically, the method of doing this is to modulate the signal. In a popular form of modulation, 64-QAM (64-level Quadrature Amplitude Modulation), both amplitude and phase modulation are used to carry 6 bits per symbol. The problem is that the higher the encoding level, the better the transmission quality needs to be, and all the atmospheric factors mentioned above can dash that by interfering with amplitude, phase and more. A solution is to use Forward Error Correction (FEC), but this decreases net throughput – see below for more on Forward Error Correction.
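The relationship between constellation levels and bits per symbol is simple logarithm arithmetic. A small sketch (generic modulation maths, not tied to any particular SATCOMS service):

```python
# Bits carried per symbol for a given constellation size, and the
# resulting gross bit rate before FEC overhead is subtracted.
import math

def bits_per_symbol(levels):
    """A constellation with 2**n distinguishable states carries n bits per symbol."""
    n = math.log2(levels)
    if not n.is_integer():
        raise ValueError("constellation size must be a power of two")
    return int(n)

def gross_bitrate(symbol_rate_hz, levels):
    """Raw bit rate = symbol rate x bits per symbol (FEC not yet deducted)."""
    return symbol_rate_hz * bits_per_symbol(levels)
```

So a 1 Msym/s carrier at 64-QAM moves 6 Mbps gross; drop to plain binary signalling in bad weather and the same carrier moves only 1 Mbps.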
A Quick Look at How Data is Transmitted in Packets

As humans we tend to think of transmitting bytes of data – we have "data plans" for so many gigabytes per month – but data is not normally transferred between systems in bytes. Instead it is transferred in packets (blocks of bytes), aka frames. These packets consist of a header (addressing and control information) and the data itself (the payload), often followed by a trailer such as a checksum. Now, the data may itself contain a sort of sub-packet, i.e. have a header and data of its own, and if you think that's uncommon – no, it absolutely isn't: in most businesses and homes, IP packets are sent inside Ethernet packets.

Why is data transmitted this way? Because a typical network operates like the post office, handling network traffic on behalf of many customers. A packet is to the network what a letter is to the post office: it contains address information, including sender information, so that packets can be delivered to a variety of destinations and the recipient knows where they came from. If we sent one byte at a time, each byte would still need a header, and the amount of header information would exceed the actual data we were transmitting by a huge margin – what a waste of bandwidth that would be!

The OSI network layer model (diagram below courtesy of Wikipedia) lays out how these packets, and packets-in-packets, are carried, starting at the physical layer.
OSI Layer                PDU                Function
Host  7 – Application    Data               High-level APIs, including resource sharing and remote file access
Host  6 – Presentation   Data               Translation of data between a networking service and an application, including character encoding, data compression and encryption/decryption
Host  5 – Session        Data               Managing communication sessions, i.e. the continuous exchange of information in multiple back-and-forth transmissions between two nodes
Host  4 – Transport      Segment, Datagram  Reliable transmission of data segments between points on a network, including segmentation, acknowledgement and multiplexing
Host  3 – Network        Packet             Structuring and managing a multi-node network, including addressing, routing and traffic control
Media 2 – Data Link      Frame              Reliable transmission of data frames between two nodes connected by a physical layer
Media 1 – Physical       Bit, Symbol        Transmission and reception of raw bit streams over a physical medium

In our SATCOMS example the lowest layers (those closest to the physical medium) are satellite-specific; from Layer 3 up, the packets are the same ones our computers, devices, phones etc. generate.

Forward Error Correction (FEC)

A standard network approach to error correction is to send data, wait for an acknowledgement (ACK) from the receiver, and resend the data if none is received – at least, that's it at its most simple. This kind of system is used by the TCP part of the IP networking stack, for example. The problem with this method is that if you have a large round-trip latency of, say, 700 ms (GEO orbit), then it will take over 700 ms to get a retransmission of the data. This would seriously hamper transmission rates.
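A back-of-the-envelope calculation shows just how seriously. In the worst case, a send-then-wait-for-ACK scheme moves at most one packet per round trip (the packet size and RTT figures here are illustrative):

```python
# Worst-case throughput of a stop-and-wait (send, then wait for ACK)
# recovery scheme: one packet per round trip. Figures are illustrative.

def stop_and_wait_bps(packet_bytes, rtt_s):
    """Effective bit rate when at most one packet is in flight per RTT."""
    return packet_bytes * 8 / rtt_s

lan = stop_and_wait_bps(1500, 0.002)   # 1500-byte packet, 2 ms LAN RTT: ~6 Mbps
geo = stop_and_wait_bps(1500, 0.700)   # same packet, 700 ms GEO RTT: ~17 kbps
```

The same packet size that sustains megabits on a LAN collapses to kilobits over GEO, which is why avoiding the retransmission round trip altogether is so attractive.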
Enter Forward Error Correction (FEC): if we can send some redundant data alongside the real data that allows the correction of one or more errored bits, then we don't need to retransmit the data – saving at least 700 ms in the example above, at the expense of sending more data than strictly required.

What happens when atmospheric conditions disturb transmission

What about Wavebands?

There is no magic formula for how wavebands perform, because different SATCOMS providers may have different power outputs and therefore better signal-to-noise ratios, but the general trend is: higher frequencies imply higher throughput and higher susceptibility to attenuation by rain, cloud etc. Here is a table we put together as a quick guide, though the more we looked into it, the more complex it got, as you have to take individual services into account due to power, adaptive coding and modulation (ACM) and so on.

Waveband   Frequency   Throughput (Bandwidth)   Rain/Cloud Resilience
L-Band     1–2 GHz     400 kbps                 Premium
C-Band (inc

Why is Satellite Latency High?

Why is satellite latency so high? (and it's not all to do with distance)

This blog is Part 1 of a 3-part series and concentrates on high latency; the subsequent parts will discuss jitter, the effect of atmospheric conditions and choice of wavebands.

If we want our applications to be available globally, we have to bear in mind that there are lots of places on Earth where wired and mobile (or other wireless) communications are not available, for example:

– In remote places (some less remote than you might think!)
– On aircraft flying over oceans or remote locations
– On ships distant from the shore
– In certain military situations
– In certain emergency-services situations, e.g. first responders

Just about the only reasonable way to serve these locations is to use satellite communications – provided you have a view of the sky, that is. Due to their height, satellites have very large coverage (up to slightly less than 50% of the Earth's surface, you might think, but more on that in a moment). It sounds fantastic. Why do we bother with any other kind of communication system then? After all, most of us could mount a satellite antenna on the outside of our offices, houses etc. The answers are both technical and commercial:

– Satellite bandwidth has traditionally been very expensive
– Until recently, satellite bandwidth was very limited (128 Kbps, 256 Kbps)
– Satellites traditionally added a large amount of latency (delay) to network traffic
– Jitter is created, which can affect streaming applications, e.g. voice, video, rapid telemetry
– Adverse weather can cause large or almost complete data loss

But that was then.
Things are changing:

– Bandwidth is becoming more available and less expensive, though still pricey compared to wired and mobile
– Lower satellite orbits are becoming available, which reduce latency
– There is an increased choice of wavebands, some with better penetration of the atmosphere and rain

The technical factors "stress" applications in ways not encountered in other networks, and if we want our applications to work we will need to test them in satellite networks and employ programming strategies that work around those stresses. In this blog I'm going to focus on high satellite latency and its implications for applications. Look out for further blogs on jitter, the effect of atmospheric conditions and choice of wavebands.

Why is satellite latency high?

Anyone who has used the ping utility will have seen some very high ping times. For example, to points opposite you on the Earth (antipodean) you may see ping times via undersea cable routes of 300 ms, and you can see big ping times even to nearer locations – this is due to queuing, i.e. the links are busy and your data has to queue, which causes delay. A satellite ping time might be 700 ms or more, and that's nothing to do with queuing. Why's that? Because the satellite is very high indeed. Traditional geosynchronous (aka geostationary, or simply GEO) satellites sit 22,236 miles (35,786 km) above sea level, which is a very long way away – almost 3 Earth diameters (just under 1 circumference). No wonder it takes so long to ping the other side: your ping and its response have to traverse the ground-to-satellite link 4 times, since you are not, in general, directly under the satellite. Simple solution: make them lower. Yes indeed, and that has been thought of. There are offerings in LEO (Low Earth Orbit) and MEO (Medium Earth Orbit), but they have a problem.
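Those GEO figures can be sanity-checked with speed-of-light arithmetic. A minimal sketch, assuming the best-case geometry of a ground station directly beneath the satellite (real paths are longer):

```python
# Best-case GEO propagation delay from the altitude quoted above.
C_KM_S = 299_792            # speed of light, km/s (vacuum)
GEO_ALTITUDE_KM = 35_786    # geostationary altitude above sea level

one_leg_ms = GEO_ALTITUDE_KM / C_KM_S * 1000   # ground to satellite: ~119 ms
one_way_ms = 2 * one_leg_ms                    # sender -> satellite -> receiver: ~239 ms
ping_ms = 2 * one_way_ms                       # request plus reply, 4 legs: ~477 ms
```

That ~477 ms is the physics floor; slant paths, on-board processing and queuing push real-world GEO ping times towards the 700 ms figure quoted above.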
At anything but the GEO orbit level, a satellite cannot maintain position over one spot on the Earth without being driven all the time (which is impracticable), so at the lower orbits the satellites are moving relative to the Earth's surface, sometimes very quickly – not so convenient for fixing on them, then. Lastly, getting round the back of the Earth takes more than one hop, which implies even more latency.

So how does this high latency impact your application?

Over a GEO satellite, any chatty application will have to wait for a response from the server side (or client, depending on the direction), which will take around 700 ms. Compare that to LANs (Local Area Networks) with 1–3 ms response times and WANs (Wide Area Networks) with 10–300 ms delays depending on end-point locations, with mobile and wireless networks adding somewhat to this. It's clear that after a few round-trip conversations, an application that functions well in wired or wireless networks may become intolerably slow or even time out.

And how can you fix satellite high-latency issues with applications?

Clearly you can't just move the satellite! Though there are lower-orbit satellite options available that may work for you, both in terms of service provided and budget. If you need to work with the satellite as it is, then changing the way your software communicates offers the best solution:

– Change timeout values on network requests to allow for higher latencies
– Use overlapping network requests and responses where possible, instead of waiting for something to complete before requesting the next thing
– Cache more data where possible, so you don't need the network all the time

Sometimes you can use other software or equipment solutions to help you out, but they can be expensive. For example, if, fundamentally, you have a data transfer issue, i.e.
the latency prevents you from using up the bandwidth available to you in the service, then, if you use standard transfer methods (FTP, Microsoft CIFS, or a custom application that uses TCP to transfer blocks of data etc.), you can get equipment and/or software that may cache on your behalf and/or acknowledge receipt of data locally to the transmitter, thus avoiding the latency. These are termed WAN optimization solutions, but they don't work in all cases, they can be expensive, and they can themselves cause issues.

But how do you know whether you have any issues with your application in satellite networks at all? Dirty word coming here: you need to "Test". That may not

iPerf can be wrong!

We all love Open Source and I'm no exception. Much of the world's computing is powered by Linux, including devices like your home router! But this doesn't mean that it's perfect, and the chances are that the fewer people who use a particular bit of Open Source, the less well tested it is. Linux gets a lot of use and has a lot of contributing developers; tools like iPerf, widely used for network performance measurement, don't. Why's he mentioning iPerf, I hear you say? Well, it all started with an email from one of our customers...

Before I start though, a bit of background. At Calnex we make Network Emulation products. We also call them Software Defined Test Network products because that's much more indicative of how our customers use them: our customers want to create test networks so that they can try out how their software, protocols or devices will work in particular network environments. These test networks are "virtual" because you don't have to get access to an actual physical network; our emulators recreate the conditions on demand. So sometimes customers want to see how our software-defined test network (i.e. our emulator) is behaving, especially when they're new to the product, and of course they need a measurement tool. What to use? A quick Google search, asking a colleague, or prior knowledge nets the Open Source product iPerf – iPerf3 in its current guise.

So this customer – I'll call him John – emails in:

John: "Hi Frank, When we run a test without impairment [any network condition which makes the network not perfect...], the system works as expected. If we run a single impairment (e.g. bandwidth only, or latency only) we see consistent results. But when we stack two impairments together we notice a significant drop in throughput, below the listed bandwidth. For example, if we configure a 1 Mbps bandwidth limit and also a 5 ms delay, we notice less than 1 Mbps average throughput.
This may be expected for a significant latency delay (e.g. 800 ms), but we assumed that a small, consistent latency delay would not significantly hamper total/average throughput – yet it apparently does. Ideally, we would like to know if this is known and expected, or rather if it is a limitation of our setup, or maybe the norm for all simulators."

Me [thinking to myself...]: "That's very odd. We routinely test things like this and have not seen this kind of issue. As most people run iPerf3 in TCP mode (the default), there can be some big misunderstandings about how latency affects throughput (i.e. bandwidth used). But John is right, 5 ms of latency is very little. So I decide to do the test myself."

Me: "Hi John, We can't find any issue with your parameter combination. Here are our test settings – which in fact make the latency 10 ms RTT, 5 ms in each direction. We used iPerf3 in TCP mode to do a test. Here are the screens, iPerf client on the left and iPerf server on the right. Client command: iperf3 -c, Server command: iperf3 -s. You can see that the bandwidth is regulated correctly (I'm not sure why iPerf3 shows a difference between the client- and server-side bandwidths). And here are the graphs from the NE-ONE Professional's graphing menu option. You can see that the client sends data faster than 1 Mbps (right-hand graph, Bits Received Per Second) – typical TCP. 'We' then regulate the bandwidth (and queue) and send it out to the server (left-hand graph, Bits Sent Per Second) – notice that it is smoother. [The difference between the graphs is that the received graph is pre-impairment and the sent graph is post-impairment.] The green line is the reverse flow direction – ACKs. So no problem here."

John: "Ah, we actually run the [iperf3] command using UDP packets, for 10 seconds, set to 1 Gbps (regardless of the bandwidth impairment setting on the emulator).
We limit UDP datagrams to 1000 bytes to avoid processing delays on the receiving end due to fragmentation."

Me [thinking to myself...]: "Ah, UDP usually creates fewer misunderstandings than TCP (due to the lack of the flow regulation which TCP has). I wonder what's going on?"

Me: "Please can you send us your iPerf3 command lines?"

John: "Here is a client-side command example: iperf3 -V -u -b 1Gbps -f m -t 10 --length 1000 -c <IPADDRESS> The server is standard but with verbose: iperf3 -s -f m -V My NE-ONE Professional configuration is a copy of the "LAN_NO_IMPAIRMENT" sample, modified with changes to latency and bandwidth."

Me [testing...]: I set up the test with John's commands and find that, yes, iPerf3 is reporting, at the server side, that we're only transmitting 0.33 (0.20, 0.26… – see below) Mbps, not ~1 Mbps as required. However, our live graphs correctly show ~1 Mbps. Very odd indeed...

I go to our developers, who say that they routinely test this and, given that our graphs show the correct values, suggest that iPerf3 is reporting the wrong values. They ask me to find an independent method of verifying the bandwidth. I look at using tcpdump (on the server) and analysing the pcap file produced, but in the end, for simplicity, decide on a tool that reports Ethernet interface I/O statistics. I try 3 of them and all agree that the NE-ONE Professional is working correctly, but in the end I settle on ifstat, as it produces an easy-to-read continuous output. Time to report my findings to John.

Me: "Hi John, Thanks very much for your command line. I ran your test and got some weird results – just as you said. I went to our developers for advice; they strongly suggested that iPerf3 was wrong and asked me to monitor the packets at the server end. After a bit of trial and error I settled on using the tool ifstat (sudo apt install ifstat). And yes, iPerf3 is getting it wrong. Here's my setup: NE-ONE Professional: