Any chance you could re-run with `-P 4` where 4 is the core count?
Modern A/M cores and Zen 5 cores individually have enough grunt to handle at least 10 Gbps through USB without a hitch.
On my Pis and N100 mini PCs, I do have to use multiple threads to hit more than about 5-6 Gbps. And with a 25 Gbps adapter I'm testing separately, I had to use multiple threads to get my Ampere CPU to measure speeds greater than 10 Gbps.
Besides, I can’t think of a typical single threaded application that would use those data rates, can you?
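For anyone wanting to reproduce the `-P 4` suggestion: assuming the tool in use here is iperf3 (the usual suspect for these benchmarks), parallel streams are one flag away. Hostnames are placeholders.

```shell
# On the machine behind the 10G NIC (placeholder name "server"):
iperf3 -s

# On the test machine: 4 parallel TCP streams, roughly one per core,
# with the aggregate throughput reported at the end of the 30 s run.
iperf3 -c server -P 4 -t 30
```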
I have absolutely no idea what anyone means when they say USB 3.2 gen 2x2. I used to know what USB 3.2 meant but it's certainly not that.
USB 5 Gb/s = USB 3.2 gen 1, available on Type A or Type C connectors (or on devices on a special extended micro B connector)
USB 10 Gb/s = USB 3.2 gen 2, available on Type A or Type C connectors
USB 20 Gb/s = USB 3.2 gen 2x2, available only on Type C connectors
Moreover, "5 Gb/s" is a marketing lie. The so-called 5 Gb/s USB has an effective speed of 4 Gb/s (the same as PCIe 2.0), because of its 8b/10b line coding. The 10 Gb/s and 20 Gb/s variants, on the other hand, deliver the claimed speeds, so 10 Gb/s USB is 2.5 times faster than 5 Gb/s USB, not 2 times faster.
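The arithmetic behind that claim, as a sketch (Gen 1 uses 8b/10b coding, Gen 2 uses the much leaner 128b/132b, which is where the "2.5 times" comes from):

```python
def effective_gbps(gross_gbps, data_bits, total_bits):
    """Net data rate after line coding: gross rate times coding efficiency."""
    return gross_gbps * data_bits / total_bits

gen1 = effective_gbps(5, 8, 10)      # USB 3.2 Gen 1: 8b/10b coding
gen2 = effective_gbps(10, 128, 132)  # USB 3.2 Gen 2: 128b/132b coding

print(gen1)         # 4.0 Gb/s net, same as PCIe 2.0
print(gen2)         # ~9.7 Gb/s net
print(gen2 / gen1)  # ~2.4x, roughly the "2.5 times" claimed above
```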
10 Gb/s USB and Ethernet have truly the same speed, but the USB overhead is somewhat higher, leading to a somewhat lower effective speed. However, the speed shown in TFA, not much higher than 7 Gb/s, seems too low; it may be caused by the Windows drivers. It is possible that on other operating systems, e.g. Linux, one can get a higher transfer speed.
Unfortunately, there are too many who do not do this, even among the biggest computer vendors.
Unfortunately it's not true.
Quiz: what happens when a device capable of 20Gbps is plugged into a port marked as 40Gbps?
Because if not then it's the same as any specification for connecting devices that allows for multiple speeds. It runs at the lowest of the max speeds supported of everything in the chain.
It will not.
Consumers would expect plugging a 20Gbps device into a 40Gbps port should result in 20Gbps negotiated speed. In reality it will most likely end up at 10Gbps (or less) because of the mess.
Newer Thunderbolt/USB 4 devices have no technical reason not to work as USB 3.2 2x2, i.e. at 20 Gb/s when plugged into a 20 Gb/s host port, and vice-versa for 20 Gb/s devices plugged into a USB 4/Thunderbolt host port, because both Thunderbolt and 20 Gb/s USB use the same wires in the cable and connector.
I do not know if all USB 4 controllers also work at 20 Gb/s (USB 3.2 2x2), but if they do not, that should be considered a bug.
But then they decided to memory hole that and now USB 3.0 and USB 3.1 are also USB 3.2 and USB 3.2 is called "generation 2x2", whatever that is supposed to mean
It makes no sense anymore. It used to be quite simple.
5 and 10 Gbps were renamed, though.
5 Gbps first was USB 3.0, then 3.1 Gen 1, then 3.2 Gen 1.
10 Gbps first was 3.1 Gen 2, then 3.2 Gen 2x1.
3.2 Gen 1x2 is also 10 Gbps, but physically different
It's not a lie, the b just stands for baud not bit ;-)
Before these Intel-promoted standards, 1 Gb/s Ethernet used the same encoding and was rightly called "1 Gb/s" by everybody, not "1.25 Gb/s", because the gross bit rate has absolutely no importance for the users of a communication standard.
Only Intel invented this marketing trick, calling PCIe 1.0 and 2.0 "2.5 Gb/s" and "5 Gb/s" instead of 2 and 4 Gb/s, and similarly for USB and SATA, where e.g. SATA 3 is called 6 Gb/s but its real speed is 4.8 Gb/s.
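The same 8b/10b arithmetic applied across those standards (gross line rate × 8/10; the gigabit Ethernet entry is 1000BASE-X, the fiber variant that uses 8b/10b at 1.25 Gbaud):

```python
# Marketed gross line rate (Gb/s) for a few 8b/10b-coded standards.
standards = {
    "PCIe 1.0": 2.5,
    "PCIe 2.0": 5.0,
    "SATA 3": 6.0,
    "1000BASE-X Ethernet": 1.25,
}

for name, gross in standards.items():
    # 8b/10b: every 8 data bits are sent as 10 bits on the wire.
    print(f"{name}: marketed {gross} Gb/s, actual {gross * 8 / 10} Gb/s")
```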
To be fair, what Intel did was not unusual, because in the computing industry there has been a long tradition of using fake numbers in marketing for various things, like scanner or video camera resolution ("digital" zoom, "interpolated" resolution), magnetic tape capacity ("compressed" capacity), and many others.
The lack of clarity is in keeping with the USB C connector itself, which may supply or accept power at various rates or not at all, may be fast or slow, may provide or accept video or not, and may even provide an interpretation of PCI Express but probably doesn't.
It probably looks the same no matter what, and the cable you select probably won't be very forthcoming about its capabilities either.
(Be sure to drink your Ovaltine.)
This was neither standardized nor enforced, yet it worked remarkably well in the real world.
Then we decided to just have no markings at all on USB C cables. On the ports at least we occasionally get little thunderbolt or power symbols
The problem is that there are too many uses for one connector. But this is what we wanted - a reduced number of standardized connector/power options.
Has led to some very embarrassing “works on my computer” situations on prototype boards shared with my EE colleagues (I’m a software guy who dabbles in hardware when I need to)
This may be a matter of semantics, but I can't bring myself to call a resistor a negotiator. They only do one thing and they're very resistant to other options. :)
With nothing connected to the CC line(s) at all, then there should be no output voltage on Vcc. It shouldn't be 5v @ 3a, or 500mA, or anything else -- it should be ~exactly 0v, and therefore also 0a.
A resistor or two tells the power source what we want. Without it (or some, you know, actual PD negotiations), we get nothing.
---
A careful reader will note the repeated quantity distinction. Let me explain that.
Every USB C socket has both CC1 and CC2 pins. They're on opposite sides of the connector and get used for sorting out PD, and for detecting the cable's connector orientation (if/when that matters).
But a cromulent USB C to USB C cable can have just 1 CC wire, and that's OK. It works; it isn't even wrong. To get such a cable to coax 5v from a 5v/3a source and get power for a prototype widget on Gilligan's Island, with the cable already cut in half to get at the wires inside: Wire up power and ground to your prototype. And put a 5.1k resistor between that single CC wire and ground. Voila: We've requested 5v at up to 3a.
Or: If we're being a bit more proper and snooty and want to do it The Right Way, and we actually have a USB C jack to prototype with, then that more-ideally takes two 5.1k resistors; one to pull CC1 to ground, and another to pull CC2 to ground. This does the same thing, but it does it on the connector side of things instead of the daunting no-mans-land of wires. Only one of these resistors will ever be used at one time.
Or: If we have a USB C jack and can only scrounge up one 5.1k resistor (maybe we only have a single #2 pencil to whittle down to 5.1k of resistance), or we're being particularly lazy, then that's OK too. Pick CC1 or CC2 and put 5.1k between there and ground. It will work with the cable plugged in one way, and it won't work with the cable flipped 180 degrees. That can be enough to get a thing done for the moment or whatever. (There's no solution that is as permanent as a temporary one.)
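To make the resistor "negotiation" above concrete, here's a sketch of the divider math. The Rp pull-up values are the ones the Type-C spec prescribes for a 5 V pull-up on the source side; the sink's 5.1k Rd is the resistor from the story, and the source reads the resulting CC voltage to learn a sink is attached:

```python
RD = 5_100  # sink-side pull-down: the famous 5.1k resistor

# The source advertises its current budget by its choice of pull-up (Rp) to 5 V.
RP = {"default USB power": 56_000, "1.5 A": 22_000, "3.0 A": 10_000}

def cc_voltage(rp, rd=RD, v_pullup=5.0):
    """What appears on the CC line: a plain resistor divider, nothing smarter."""
    return v_pullup * rd / (rp + rd)

for label, rp in RP.items():
    print(f"{label}: CC sits at about {cc_voltage(rp):.2f} V")
```

The sink can (optionally) read that CC voltage back to learn how much current it's allowed to draw; the three Rp choices land in three distinct, non-overlapping voltage windows.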
---
These are some of the things I learned when I was in the field and needed a 5v, >2.5a power supply to replace one that had died. I said to myself, "Self, just go over to Wal-Mart and get a 3a USB C power brick that comes with a cable, cut and splice that cable to fit the widget that needs power, and call it done. If it dies in the future, replacing it will be intuitive and fast."
So dumb ol' me went to Wal-Mart and bought exactly that, and I quite confidently set forth with the splicing.
This did not work. At all.
And that was a harsh rabbit hole to dive into, but it was ultimately fine. After I got back that evening I soldered a 5.1k resistor (of 1206 SMD form) mid-span between the CC wire and ground, and finished the adapter-cable quite neatly with some adhesive-lined shrink tubing.
Doing it this way got the customer's gear working faster than ordering the "right" parts and waiting for them show up would have, and it still works. That's all been a few years ago now; I consider it to be as permanent as anything ever really is.
It gets even worse.
I now have two cheap Chinese gadgets (a cheki printer and a tire inflator) that have USB-C ports for charging, but will only charge with the cable that came with the gadget. The other end of which is an old-style USB-A plug.
It seems that USB-C sockets are cheap enough parts to use them for everything, even if the manufacturer isn't going to put any actual USB circuitry behind them.
Edit: Three. I forgot about my wife's illuminated makeup mirror.
Very annoying though! The devices are just missing a couple resistors which is probably less than a cent on the BOM.
Bonus: You can enable USB 2.0 data transfer as well for firmware updates, computer interfaces etc.
So: Cheap/ubiquitous part, everyone has cables + AC adapters to their local plug: I think it's a great default power connector.
I wish these devices would just use barrel jacks, labeled with the voltage and polarity. But these manufacturers know that the USB-C port weighs into buying decisions (and they know that most people have zero clue about the difference between a physical port and the electrical/protocol specs).
Also, according to that table, "USB4 Gen 2×2" is a downgrade on "USB 3.2 Gen 2x2", since the cable length is 0.8m instead of 1m for the same speeds. Which is uhh unexpected.
You have to go out of your way to make Apple's Lightning connector look sensible, but somehow the USB consortium has managed to do it.
USB-C moved those to the much cheaper to replace cable. The little strip in the middle makes cleaning a bit harder but does provide for more longevity. It's a necessary evil in order to have the spring contacts on the plug side as well as not having them exposed to touch.
I think the plug side of USB is pretty well designed. The problem is more with the electrical and signalling side and the marketing of the different versions.
Rather than some absurd version number it’s now just “USB 20 Gbits”
Much easier and reliable than navigating the confusing sea of USB standards
Welcome to the brave new world we will enter in the far future.
- 10GbE Thunderbolt adapter is still the best. Full symmetrical 10GbE on laptops as far back as the 2018 MacBook Pro 13" (Intel) and every laptop since. Including the Airs starting with the M1 chip (Not sure about Neo).
- No Apple hardware supports the 3.2 v2x2 standard (20Gbps) and your connection will be downgraded to 10Gbps on these RTL8159 chips. Because of processing overhead, you will only get 5-7Gbps of total Ethernet throughput.
- Upgraded Mac Mini or Apple Studio base models have built-in 10GbE ports
For now, thunderbolt adapters are still the most reliable 10GbE for Apple laptops.
The neo doesn't have thunderbolt at all so no, that won't fly.
"Card supports 10Gbit/s and 10/100/1000/2500/5000/10000Mbit/s Ethernet"
Nice to see; some NICs are shedding 10/100 support. Apparently, it's not necessary to do this, even in a low cost device.
100BASE-TX uses just two pairs (lanes), one for sending and one for receiving. 1000BASE-T uses all four pairs, for both sending and receiving. Therefore, a 100BASE-TX interface that's only receiving needs to power up one pair. A 1000BASE-T interface needs to power all four pairs all the time.
I recall reading about some extensions that allow switching off some of the pairs some of the time ("Green Ethernet"), but I think that they require support on both sides of the link, and I'm not sure if they are widely deployed.
The cable was chewed through by cats, so perhaps it was three just in that moment.
The connection was overall unreliable, so I guess it must have been four, just not all of the time.
Ah, the old Cat-3 cable. Been there.
For regular Ethernet, the switch keeps a table of which MAC addresses are behind which port, and can send each frame at whatever link speed that port negotiated without degrading the service of the other ports.
100M is fine. 10M is fine too, but I can’t think of anything that negotiates 10M other than maybe WOL (I don’t use it enough to be sure from memory).
If I did have something esoteric it would be on a specialised vlan anyway.
It's not, cf. sibling posts. The GP probably learned networking in the '80s-'90s when it was true, but those times are long gone.
(unless you're talking wifi.)
If this is the same adapter in a different housing, will it also be limited to 7Gbps?
You don't need Cat7 for 10G.
Cat6 is spec compliant up to 55m. Cat6a to 100m, which is the same as Cat7.
If you're doing short runs like to a nearby switch, good Cat5e works fine in practice. I've run 10G over Cat5e through the walls for medium runs without errors because it's all I had. It works in many cases, but you're out of spec.
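As a rough reference for the distances mentioned (the Cat6a/Cat7 figures are from the spec; 10GBASE-T over Cat5e is never specified, so that entry is field folklore, not a guarantee):

```python
# Approximate max run length for 10GBASE-T by cabling category.
max_10g_run_m = {
    "Cat5e": None,  # unspecified; short runs often work in practice
    "Cat6": 55,     # spec compliant up to 55 m
    "Cat6a": 100,   # full 100 m, same as Cat7
    "Cat7": 100,
}

for cat, meters in max_10g_run_m.items():
    print(f"{cat}: {meters if meters else 'out of spec (may still work)'}")
```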
I use DAC where I can, but most people just want something they can plug into that RJ45 port in their wall that goes to the room down the hall where they put their switch.
There are several SFP+ to Thunderbolt/USB4 adapters on the market. Not cheap, though.
For laptops I assume you need USB/Thunderbolt adapters. (Still no SFP+ or SFP28 module for Framework?)
For desktops you'd use an SFP28 card (taking up a PCIe slot).
For devices like Raspberry Pi's, etc. you'd use... local RJ45 switches with optical uplink ports?
Most of my devices only need 1G or even 100Mbps. No reason to switch to fiber. 1G/2.5G copper ports don’t use that much power.
For 10G+ things, it’s fiber or DAC first if possible then RJ45 if it’s the only option.
Then my backhaul between rooms is just single mode fiber, good up to 800G. Plug in a small switch at the end and you go back to RJ45 and PoE.
I only have 10G though (to transfer large files/RAWs between my computer and my storage). Something faster would be nice because NVMe SSDs can go 50G+ but that equipment is pricey and power hungry.
The convenience and flexibility of PoE would always push me towards copper wiring.
Not even close to being true, unless you specifically mean 10Gbps over twisted pair (Cat6/7) cable. SFP+ is the default on a ton of network gear still.
Edit to add: If you want an example, these are the NVidia ConnectX NICs available from FS.com; the lowest-end one is 25G, then 100G, 200G, etc.
Ethernet is media independent. Yes, yes, it was first classified for thick net, but ethernet over twisted pair (rj45 typically) is still ethernet despite the lack of vampire taps. You can run ethernet on thick or thin coax, twisted pair, dac, fiber, or even over the ether so to speak.
That said, 10g over rj45 is pretty handy when you have existing wire in walls. In my experience, it runs fine on the cat5 (not even cat5e) that's already there. Maybe it won't work on all my runs, especially if I tried all at once, but so far, I'm two for two.
The spec is for ~ 100m in dense conduit; real world runs in homes are typically shorter and with less dense cabling... and cabling often exceeds the spec it's marked for, so there's wiggle room.
As for allowing a switch to fiber, that just seems orthogonal again to what these USB NICs are for, not to mention the SFP+ itself is probably more expensive than the NIC shown here...
The other side will then also need a low power NIC (of which fiber and DAC over SFP+ are less power hungry). What this article doesn't mention, is that there are also a lot of PCIe NICs on the market which aren't power hungry (RTL8127), as well as RTL8261C for switches/routers.
I've seen low power RTL NICs with SFP+ on them, too (example: [1]). With SFP+, you'll have a lot more versatility. DAC and SFP+ fiber are very cheap, btw. Especially second hand they go for virtually nothing. I have 10 SFP+ fiber modules lying around here doing nothing which I got for a few EUR each.
For me, as a European with high energy prices and with solar net metering getting the boot next year (in NL), this is all very interesting.
There are a couple of good reasons to opt for fiber in the home. It keeps the different electrical groups isolated from each other, which can help. I also find fiber very easy to get through walls, allowing me to have multiple fiber connections through walls (currently I use 1x fiber + 1x ethernet for PoE possibilities from the fusebox).
With all above being said, AQC100S is low power and does not get very hot. You can get these with SFP+ and PCIe/TB. They've been available for a while.
[1] https://nl.aliexpress.com/item/1005011733192115.html (no vouching for, just first hit on search)
I suppose an NVME riser is also an option, albeit janky.
edit: on looking closer, that still seems to be an x4 card.
In fact I had more trouble getting quality fiber working over that sort of distance than El Cheapo Cat5. They do heat up a bit, but they work wonders.
Sure some of it might have been fine at 2.5 or 5 but those are relatively new and less commonly available.
Verizon's been issuing a wireless router with 10G WAN and several 2.5G ports and MoCA support that includes a 2.5G adapter and they use that across all their current connection types. I was delighted to see that when I got the router a couple years ago.
There is also a glut of 40 Gbps stuff on the market because it's a dead end technology and most ISPs went straight to 100 for things like aggregation switch to router links. Not that I would encourage anyone to go whole hog on 40 Gbps just because, but if you can get a transceiver for $15, NICs for $30, and maybe you get a switch for free from electronics recycling or for 80 bucks, and can tolerate its noise and heat output...
I have seen plenty of people throw decommissioned 40 Gbps stuff straight into electronics recycling bins.
Mellanox ConnectX-3 40 Gbps QSFP NICs are literally 20 bucks on ebay.
SFP+'s and fiber are cheap, like maybe 50 bucks for the SFP+ set and fiber. 10Gb PCIe cards are maybe ~$50 new on Amazon with Intel chips and cheaper on eBay - I bought used 10 Gb Mellanox cards for $25 each - "they just work" under FreeBSD and Linux.
Copper 10 Gb used to consume waaaaay more power (like 5+W per port!) and cost more both in terms of the SFP and cable. In reality fiber is more environmentally friendly as there is no copper, less energy used, and less plastic per meter. So my setup mostly consists of SR and BR optics and DAC's. The "DAC" direct attach cables are handy for switch-switch or short switch<->NIC runs. And I will continue to run fiber for the foreseeable future and actively avoid copper.
San Francisco checking in.
That corresponds to $50 and $105-130 in today's money.
Now you can get it 10 times faster with an OK management layer for $150. This is after a -long- time of 10gbps prices stagnating.
10gbps is unexpectedly cheap.
as an aside: for pricing, 20 years ago unmanaged 1G-BaseT ethernet switches were $20/port. That's the region 10G-BaseT switches occupy right now if they use Realtek chips. And multiple sources confirm the Realtek switch can do full line rate on all ports simultaneously with a normal 1500 MTU
How much would they need to cost before you'd consider it cheap? If you want CHEAP then 10GbE is not for you in 2026.
The Mikrotik switches [1] work technically speaking but they are quite difficult to configure. You have to pull them from your network, connect physically to a specific port, force your machine onto a specific IP, connect to a specific IP. I could not get this to work in macOS nor Ubuntu despite hours of futzing with it. They both kept infuriatingly overriding my changes to the IP. I was only able to get this to work on an old Windows 10 laptop.
Once you do get their web UI up, you pray the password on the sticker on the bottom works. Neither of mine did and I had to firmware reset both and find the default password online. The web UI itself holds no hands. It's straight out of 1995, largely unstyled HTML. While using both of my devices the backend the UI talked to would crash and log me out about every five minutes. Not every five minutes after log in. Every 5 minutes wall time!
The Mikrotik switches are also fanless, and 10GbE SFP+ adapters throw off a lot of heat. If you use more than one they overheat. You can just about get away with two if you put them on opposite sides but I would not recommend it.
I've also had very mixed luck with SFP+ module compatibility with this thing. I had a number of modules that refused to run at higher than 1Gb, hence my fighting to get into the UI. Despite a ton of futzing between logouts I was not able to get them to work at 10Gb and returned them.
I'll be honest, my Mikrotik switches have been infuriating. I replaced one of them with a Ubiquiti Pro XG 8 8-Port 10G and holy crap the difference is night and day. It just works. Everything worked straight from the box day one, I can configure it from my phone or the web, I highly recommend this thing.
The Ubiquiti switches are multiple times more expensive but if you value your time they're well worth the price. I still have two of the Mikrotik switches on my network but am completely intent on replacing them. The Ubiquiti is worth it for online configuration alone. No need to pull the thing from your network, test your changes immediately!
> The Mikrotik switches are also fanless, and 10GbE SFP+ adapters throw off a lot of heat.
If you are talking about copper SFP's, then that's the problem: copper. It takes a lot of energy to drive a wire at GHz speeds, not so much with an optical link (though it's getting much better.) I have only ever felt luke warm optical and DAC SFP's. Copper 10 Gb SFP's are burning hot. I avoid using copper and run fiber.
~ 1 GB/sec seems about right for a long time. I can't imagine the basic files I work with every day getting much more storage-dense than they are in 2026.
For writes, yes, 10GbE is overkill, but for reads it's faster than 2.5GbE would be.
Sure there is 5GbE but most switches that support 5GbE support 10GbE.
Last time when I checked, dual-port 25 Gb/s NICs were not much more expensive than dual-port 10 Gb/s NICs.
If you have a few computers with no more than a few meters distance between them, you can put a dual-port 25 Gb/s card in each and connect them directly with direct attach copper cables, in a daisy chain or in a ring, without an expensive switch.
For 40+ GbE or fibre I agree they are expensive, but at least you get full performance out of your system. SSDs aren't cheap these days either...
If anyone's aware of something better, I'd be interested too :)
(Then again I wouldn't voluntarily use 5Gb-T or 10Gb-T anyway, and ≈50W is enough for most use cases.)
[ed.: https://www.aliexpress.us/item/3256807960919319.html ("2.5GPD2CBT-20V" variant) - actually 2.5G not 1G as I wrote initially]
A lot of laptops won't accept less than 60w
My work laptop won't accept less than 90w (A modern HP, i7 155h with a random low end GPU)
At first everyone at the office just assumed that the USB C wasn't able to charge the pc
Some devices expect USB-A on the charger side instead of C
USB-A pumps out 5V at 1A (5W) regardless of what's connected to it, then negotiates higher power if available.
USB C-C does not give any power if the receiving device is not able to negotiate it
A 20w charger will definitely charge the MacBook, just slowly.
I can’t recall which cable I used though. The cable might have been garbage but I’m pretty sure I threw out all the older USB cables so they wouldn’t get mixed with more modern supporting cables.
Laptop charges fine regular 5V as well.
When plugged into 100W chargers while powered on, it takes ten minutes to gain a single percentage point. Idle in power save may let me charge the thing in a few hours. If I start playing video, the battery slowly drains.
If your laptop is part space heater, like most laptops with Nvidia GPUs in them seem to be, using a low power adapter like that is pretty useless.
Also, 100W chargers are what, 25 euros these days? An OEM charger costs about 120 so the USB-C plan still works out.
Other manufacturers do similar things. Apple accepts lower wattage chargers (because that's what they sell themselves) but ignores two power negotiation standards and only supports the very latest, which isn't in many affordable chargers, limiting the fast-charge capability for third parties.
Mine very rarely exceeds 10W.
* ≤15W charger: must have 5V
* ≤27W charger: must have 5V & 9V
* ≤45W charger: must have 5V & 9V & 15V
* (OT but worth noting: >60W: requires "chipped" cable.)
* ≤100W charger: must have 5V & 9V & 15V & 20V
(levels above this starting to become relevant for the new 240W stuff)
(36W/12V doesn't exist anymore in PD 3.0. There seems to be a pattern with 140W @ 28V now, and then 240W at 48V, I haven't checked what's actually in the specs now for those, vs. what's just "herd agreement".)
Some devices are built to only charge from 20V, which means you need to buy a 45.000001W (scnr) charger to be sure it'll charge. If I remember correctly, requiring a minimum wattage to charge is permitted by the standard, so if the device requires a 46W charger it can assume it'll get 15V. Not sure about what exactly the spec says there, though.
(Of course the chargers may support higher voltages at lower power, but that'd cost money to build so they pretty much don't.)
NB: the lower voltages are all mandatory to support for higher powered chargers to be spec compliant. Some that don't do that exist — they're not spec compliant.
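The power-rule list above can be expressed as a tiny lookup. This is a sketch of the PD 3.0 fixed-voltage rules as summarized in this thread, not of the full spec:

```python
def mandatory_voltages(charger_watts):
    """Fixed rails a spec-compliant PD charger of this rating must offer."""
    # (threshold in W, rail in V): each rail becomes mandatory once the
    # charger's rating exceeds the threshold.
    rules = [(0, 5), (15, 9), (27, 15), (45, 20)]
    return [volts for threshold, volts in rules if charger_watts > threshold]

print(mandatory_voltages(20))  # [5, 9]
print(mandatory_voltages(45))  # [5, 9, 15]
print(mandatory_voltages(65))  # [5, 9, 15, 20]
```

This also shows the "45.000001W" joke above: a 45W charger need not offer 20V, but anything rated even slightly higher must.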
Varying voltage power supplies are usually capped by current, not power. That's because many of the components set maximum current and voltage limits that you must obey independently.
At higher voltages people start accepting higher losses in stuff like cables, because fire safety becomes a more important concern than efficiency. So the standard relaxes things a little bit.
$ upower -i $(upower -e | grep BAT)
[...]
voltage-min-design: 11.58 V
And I can charge it via USB-C using a 22.5W powerbank @ 12V (HP EliteBook 845 G10). I guess that would be out of spec then?
edit: nvm I didn't see the qualifier 'minimum'
voltage-min-design: 11.58 V
This has nothing to do with USB-C; this is the minimum design voltage of your lithium-ion battery pack. In this case, you have a 4-cell pack, and if the cells drop below 2.895V that means they're physically f*cked and HP would like to sell you a new battery. (Sometimes that can be fixed by trickle charging, depending on how badly f*cked the battery is.)

If your laptop's USB-C circuitry were built for it, you could charge it from 5V. (Slowly, of course.) It's not even that much of a stretch given laptops are built with "NVDC"¹ power systems, and any charger input goes into a buck-boost voltage regulator anyway.
¹ google "NVDC power", e.g. https://www.monolithicpower.com/en/learning/resources/batter... (scroll down to it)
https://hackaday.com/2023/08/14/adding-power-over-ethernet-s...
Makes sense, thanks!
https://www.procetpoe.com/poe-usb-converter/ (some of these are power-only)
Surely a matter of time until someone does this…
Might be a struggle I suspect!
The problem comes when you try to design a large network and need random PoE ports on end devices where you can't home-run a cable back.
I have a Unifi Pro XG 48 PoE and I love it, but I still don't use PoE for everything. The cost of a (non unifi) poe device + the cost of using one of those ports always exceeds a simple power adapter on the other side (if possible).
I think about this a lot.
https://shop.poetexas.com/products/gbt-usbc-pd-usbc?variant=...
65W 802.3bt and gigabit Ethernet out on the same PD cable.
Also a crude fixed hub for data and a keyboard and mouse for docking laptops:
https://shop.poetexas.com/products/bt-usbc-a-pd?variant=3938...
Personally I use an x86 PC (supermicro E300 with X11SDV motherboard with integrated Intel X540 10Gbe NICs) running opnsense.
The article should maybe have been focusing on that piece?
At a high level, I'm pretty sure Thunderbolt will be significantly better in all situations:
Thunderbolt is PCIe; depending on the way the network card driver works, the PCIe controller will usually end up doing DMA straight into the buffers the SKB points to, and with io_uring or AF_XDP, these buffers can even be sent down into user space without ever being copied. Also, usually these drivers can take advantage of multiple txqueues and rxqueues (for example, per core or per stream) since they can allocate whatever memory they want for the NIC to write into.
USB is USB; the controller can DMA USB packet data into URBs but they need to be set up for each transaction, and once the data arrives, it's encapsulated in NCM or some other USB format and the kernel usually has to copy or move the frames to get SKBs. The whole thing is sort of fundamentally pull based rather than push based.
But, this is just scratching the surface; I'm sure there are neat tricks that some USB 3.2 NIC drivers can do to reduce overhead and I'd love to read an article where I learned more about that, or even saw some benchmarks that analyzed especially memory controller utilization, kernel CPU time, and performance counters (like cache utilization). Especially at 10G and beyond, a lot of processing becomes memory bandwidth limited and the difference can be extremely significant.
> Thunderbolt is PCIe
Nit: Thunderbolt isn't PCIe, it tunnels PCIe. Depending on chips used, there's bandwidth limits; I vaguely remember 22.5G on older 40G TB Intel chips.
Thunderbolt allows PCIe tunneling, but it has some overhead over raw PCIe. That's why Thunderbolt eGPU setups don't perform as well as plugging the GPU directly into a PCIe slot.
> USB is USB
Until you get to USB4, when USB 4 supports Thunderbolt 4.
> USB 4 supports Thunderbolt 4
It's the opposite! I hate to get into it as I saw the USB naming argument pretty thoroughly enumerated in the comments here already, but the pedantic interpretation is "Thunderbolt 4 is a superset of USB4 which requires implementation of the USB4 PCIe tunneling protocol which is an evolution of the Thunderbolt 3 PCIe tunneling protocol."
From the standpoint of USB-IF a "USB4" host doesn't need to support PCIe tunneling, but Microsoft also (wisely, IMO) put a wrench into this classic USB confusion nightmare by requiring "USB4" ports to support PCIe tunneling for Windows Logo.
Maybe external solid state drive is just too long and it finally had to be shortened somehow.
The user originally wanted to do the transfer over WiFi. I helped them set up the transfer, and they eventually realized it would take multiple months to complete.
I set them up with a Thunderbolt 10GBASE-T Ethernet adapter. The wiring was Cat-6, but the distance was low enough such that 10G would’ve been achievable.
The switches in the network closet were only 1GbE, though the uplinks were 10GbE. Even so, switching the transfer from wireless to 1GbE wired brought our ETA down to just under one month.
I wish we could’ve gotten a 10GBASE-T port for the researcher; that would’ve brought the ETA down from ~1 month to ~1 week.
Does anyone know if the old bulky ones will hit 10G speeds on the same hardware?
I assume I can get a few old TB2 models and adapters on the cheap and they'll run cool enough and stable enough for constant 1G internet and occasional 10G intranet
I still have max 1 gbit here and I'd have to replace 3 switches at least, so it won't be coming soon. The 2.5 and 5 options are too meh for me to be interesting.
I hope the arrival of these new chips will increase the number of systems with 10G, and then hopefully the prices of switches will come down too.
Would argue for those purposes 40gig thunderbolt makes a lot more sense.
Of course, just give them some time and they'll come up with USB4 "gen classic" at 11 Mbps.
It is probably the speed of it being read into RAM.
Try entering sync right after copying to see how long it really takes
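A quick way to see the effect (DEST is a placeholder; point it at the USB drive's mount point to measure the real device — against a local disk or tmpfs both steps will be fast):

```shell
DEST="${DEST:-/tmp/sync-demo}"
mkdir -p "$DEST"

# Make a 64 MiB test file.
dd if=/dev/zero of=/tmp/big.bin bs=1M count=64 status=none

time cp /tmp/big.bin "$DEST/big.bin"  # may look instant: writes land in the page cache
time sync                             # blocks until the data is actually on the device
```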
It beats my previous desktop's RAM speed, what a time to live in.
https://www.aliexpress.com/item/1005008555989592.html
I have one of these, though I'm using with a USB 3.x port as that's what my desktop has. For me it's working fine, and for others with actual USB 4 ports it seems to be working properly for them.
For cables, I think everything converged to Cat 6a a while ago, which is both reasonably cheap and perfectly fine for 10G (up to 100m, from what I remember).
    /*
     * RealTek 8129/8139 PCI NIC driver
     *
     * Supports several extremely cheap PCI 10/100 adapters based on
     * the RealTek chipset. Datasheets can be obtained from
     * www.realtek.com.tw.
     *
     * Written by Bill Paul <wpaul@ctr.columbia.edu>
     * Electrical Engineering Department
     * Columbia University, New York City
     */

    /*
     * The RealTek 8139 PCI NIC redefines the meaning of 'low end.' This is
     * probably the worst PCI ethernet controller ever made, with the possible
     * exception of the FEAST chip made by SMC. The 8139 supports bus-master
     * DMA, but it has a terrible interface that nullifies any performance
     * gains that bus-master DMA usually offers.
     *
     * For transmission, the chip offers a series of four TX descriptor
     * registers. Each transmit frame must be in a contiguous buffer, aligned
     * on a longword (32-bit) boundary. This means we almost always have to
     * do mbuf copies in order to transmit a frame, except in the unlikely
     * case where a) the packet fits into a single mbuf, and b) the packet
     * is 32-bit aligned within the mbuf's data area. The presence of only
     * four descriptor registers means that we can never have more than four
     * packets queued for transmission at any one time.
     *
     * Reception is not much better. The driver has to allocate a single large
     * buffer area (up to 64K in size) into which the chip will DMA received
     * frames. Because we don't know where within this region received packets
     * will begin or end, we have no choice but to copy data from the buffer
     * area into mbufs in order to pass the packets up to the higher protocol
     * levels.
     *
     * It's impossible given this rotten design to really achieve decent
     * performance at 100Mbps, unless you happen to have a 400Mhz PII or
     * some equally overmuscled CPU to drive it.
     *
     * On the bright side, the 8139 does have a built-in PHY, although
     * rather than using an MDIO serial interface like most other NICs, the
     * PHY registers are directly accessible through the 8139's register
     * space. The 8139 supports autonegotiation, as well as a 64-bit multicast
     * filter.
     *
     * The 8129 chip is an older version of the 8139 that uses an external PHY
     * chip. The 8129 has a serial MDIO interface for accessing the MII where
     * the 8139 lets you directly access the on-board PHY registers. We need
     * to select which interface to use depending on the chip type.
     */
Oh no!
> /* * RealTek 8129/8139 PCI NIC driver * * Supports several extremely cheap PCI 10/100 adapters based on […]
Also, please, for the love of whatever entity, at least remove the *s on that paste. This is just atrocious and disrespectful of any reader.
Is this just my hardware? It's hard to imagine these issues would be so prevalent with how many people use these on linux...
I never ever saw that, and I've literally been using USB-to-Ethernet adapters on Linux since forever. It's about the chipset you're using and how the kernel supports it, no? For example, for 2.5 Gbit/s Ethernet, if you go with anything with a Realtek RTL8156B (and not the older non-'B') or anything more recent, it should work flawlessly.
Before buying, I look online for user feedback and kernel support, to see which chipset the cool kids on the block are using.
As I've been perfectly happy with the Realtek 8156B for 2.5 Gbit/s, if I wanted to buy a 10 Gbit/s one I'd look at the cool kids, like that Jeff Geerling dude from TFA/YouTube, see he's using a Realtek 8159, and think: "Oh, that's close to mine, I trust that to work very well."
I literally still have an old USB 2.0-to-100 Mbit/s adapter that I use daily, and it has never failed me either (it's for an old laptop that I use as a kind of terminal over SSH). I don't recommend 100 Mbit/s; my point is that all this has had flawless support under Linux for many moons.
> Is this just my hardware?
To me it's due to a poor chipset / poor chipset support in the USB-to-ethernet adapter you're using.
These things, when they're a well supported chipset, are flawless.
Interestingly it seems to get burning hot on the MacBook M1 Pro while it remains cool on the M5 Pro model.
Maybe the workload is different, but I would not rule out some sort of hardware or driver difference. I only use a 1G port on my router at the moment.
I am definitely not the person to shed any light on what is going on, but you've added to my feeling that these adapters are all incomprehensible, so I'll try and do the same for you.
I have a USB-C Ethernet adapter (a Belkin USB-C to Ethernet + Charge Adapter, which I recommend if you need one). I ran out of USB-C ports one day and plugged it in through a USB-C to USB-A adapter instead. I must have done a fast.com speed test to make sure it wasn't going to slow things down drastically, and found that the latency was lower! Not by a huge amount, and I think the max speed was quicker without the adapter. But still, lower latency through a $1.50 Essager USB-C to USB-A adapter, bought from Shein or Shopee or somewhere silly!
I tried tons of times, back and forth, with the adapter a few times, then without the adapter a few times. Even on multiple laptops. As much as I don't want to, I keep seeing lower latency through this cheap adapter.
Next step, I'll try USB C to USB A, then back through a USB A to USB C adapter. Who knows how fast my internet could be!
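For what it's worth, single speed-test runs are noisy; comparing medians over many repeated runs is more trustworthy than eyeballing one number. A tiny sketch of that comparison (the RTT samples below are made-up placeholders, not real measurements; substitute your own ping output):

```python
import statistics

# Hypothetical RTT samples in ms from repeated runs; replace with real data.
direct_rtts = [4.8, 5.1, 4.9, 5.3, 5.0, 4.7, 5.2]
via_adapter_rtts = [4.2, 4.5, 4.3, 4.6, 4.4, 4.1, 4.5]

def summarize(name, samples):
    print(f"{name}: median={statistics.median(samples):.1f} ms, "
          f"stdev={statistics.stdev(samples):.2f} ms")

summarize("direct USB-C", direct_rtts)
summarize("via USB-A adapter", via_adapter_rtts)
```

If the medians differ by more than the spread, the effect is probably real and not just jitter.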
It's unfortunate thinking that this is the end, that this is as good as it's gonna be for a while. Especially with USB4 going faster and faster still.
Edit: ah! 25GBASE-T exists and uses four pairs. It was defined at the same time as 40GBASE-T, in 802.3bq-2016, with PAM-16 encoding. Yes, 100GbE was originally defined as 4x25G for optical, but there are BASE-T variants.
Also! The 10Gb adapter here is $80. Worth noting for folks that 2.5Gbe adapters are ~$13 and 5Gbe adapters a hair over $20! Very affordable very nice boost. Make use of those USB ports!
I think USB4 is based on the Thunderbolt spec (or the other way around?), but it doesn't require any Thunderbolt capabilities and therefore isn't very telling.
I think Apple's approach of supporting Thunderbolt 4/5 on every USB port of the MacBook Pro is the only sustainable way forward.
The reason it's smaller to go with USB is that, AFAIK, Thunderbolt only bridges to other interfaces like USB or PCIe. So any Thunderbolt NIC is actually Thunderbolt -> PCIe, then PCIe -> Ethernet, while USB is more often interfaced with directly. 2 big power-hungry chips vs 1. 1 < 2, so it is smaller.
Thunderbolt also carries overhead vs OCuLink. Thunderbolt tunnels PCIe, and the PCIe tunnels the Ethernet traffic. OCuLink is just PCIe, which is why it's not as hot-pluggable but gets significant performance increases for PCIe devices. USB in this case tunnels the Ethernet traffic directly. So Thunderbolt NICs have 2 layers, USB has 1. 1 < 2. Less overhead means lower power and less heat, so smaller heatsinks; fewer chips means a smaller board, so a smaller device. If more devices had OCuLink connectors, it's highly conceivable that an OCuLink adapter would also be smaller than a Thunderbolt NIC, because again there's no such thing as a Thunderbolt NIC, just Thunderbolt -> PCIe -> Ethernet.
If it's p2p, it's easier to just use USB-C in between.
Apparently someone doesn't understand my post, so let me edit it for the downvote?!... 10G is old tech; it's 2026 and the best thing we still have today is an $80 adapter, while USB-C can already do 5, 10, 20 and 40 Gbps.
I've been waiting for a 10G home network for ages now, but the infra is more expensive, consumes more energy and runs hotter.
What the fuck
It just took them a really long and winding time to get there.
(Fibre is nowhere near as "sensitive" as some people believe.)
What probably would is something like having PCIe and USB to 1Gbps fiber adapters that cost $5.
I suspect the combination of the absence of cheap-o all-in-one AP/router combo boxes with any SFP+ cages, plus fiber cabling's reputation for being extremely fragile, has much more to do with its scarcity at the extremely low end of networking gear than anything else.
[0] This is a two-port SFP+ PCI Express card
https://www.amazon.com/1000Mbps-Network-Performance-Gigabit-...
https://www.amazon.com/SALAN-Ethernet-Portable-Internet-Conv...
But it's not competing with those, it's competing with the copper port which is already built into most devices.
Another thing that would work is something like this (also $5.99), but with one of the ports as fibre:
https://www.amazon.com/Gigabit-Ethernet-Splitter-1000Mbps-In...
The point being you need some cheap way to plug in existing copper devices if you run fibre to the endpoints.
This plus $5 for a transceiver is pretty close at $15:
https://www.amazon.com/Gigabit-Ethernet-Converter-Auto-Negot...
But +$15 and an extra wall outlet per endpoint is still an inconvenience, and if a two-port device with its own power supply can be made for $15 then where is the PCIe/USB to fibre adapter for <$10?
Anyone who talks about 25GBASE-T like it actually exists, doesn't know anything about what they're talking about.
40Gbase-T will never exist, sure. 25Gbase-T very likely will.
To be fair, the power consumption is also my biggest gripe with my WiFi 6 AP, they run extremely hot.
Heck, I don't even know what I should buy for 10G SFP+ ports and a distance of, say, 30 meters. Guess I'm back to Cat 6 :-)
FS does custom multi-fiber cable assemblies too (beyond the duplex patches which is basically the standard), and they can also include pull eyes on them if that’d be helpful.
Single mode is a good choice, common wisdom used to be multimode for short runs but the single mode stuff is not much more expensive and the standard 10km optics will likely brute force the signal over any mistakes like cable kinks or dirt on the connectors.
If you learned what you need for 10GbT you can learn what you need for 10GbLR. Which is:
LC connector, PC or UPC, duplex, OS1 or OS2, and SFP+ modules saying "LR".
Any of the following is wrong: SC, FC, LSH, E2000, ST, APC, simplex, OM[1-5], "SR" or "ER" SFPs.
And that's short enough.
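That checklist is mechanical enough to encode as a quick sanity check. A sketch (the field names here are invented for illustration, not any vendor's schema):

```python
# Allowed values per field for a 10GBASE-LR shopping list, per the rules above.
GOOD = {
    "connector": {"LC"},
    "polish": {"PC", "UPC"},       # not APC
    "mode": {"OS1", "OS2"},        # single mode, not OM1-OM5
    "layout": {"duplex"},          # not simplex
    "optic": {"LR"},               # not SR or ER
}

def check(part):
    """Return the list of fields that violate the checklist (empty = OK)."""
    return [k for k, ok in GOOD.items() if part.get(k) not in ok]

print(check({"connector": "LC", "polish": "UPC", "mode": "OS2",
             "layout": "duplex", "optic": "LR"}))   # -> []
print(check({"connector": "SC", "polish": "APC", "mode": "OM3",
             "layout": "duplex", "optic": "SR"}))   # flags 4 fields
```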
So IDGAF about how much "better" fiber is. It's unfathomably worse when you factor in the cost and work I'd need to do to convert everything and every new adapter I'd have to buy or build (can I get an $80 USB SFP adapter? Do I have a cable?). The extra marginal cost in electricity will take longer than the lifetime of my equipment to exceed the cost of redoing everything.