That cable has one power input (that is only an input), and two outputs (that are only outputs), and a brainbox in the middle to direct the circus.
If we label the connectors as A, B, and C, then it works like this: A charges B and/or C, and other charging directions are no-op.
The less-complex way is to use a USB A to C cable, if that's appropriate. With these, the A side is always the source and the C side is always the sink.
---
And yeah, it's annoying. I got a cheap lithium car jump starter several years ago with some neat power bank features (like 60W USB PD in/out, on one port). So I plugged it into my phone with USB C at my desk, and discovered that they'd charge each other seemingly randomly. While changing nothing, I'd look over and sometimes the jump starter would charge the phone, and sometimes the phone would be charging the jump starter. The conglomeration formed a heater, with more steps.
The headphones have equivalent performance whether a USB 2 cable is connected, or a USB 3 cable is connected. The headphones themselves are not USB 3 devices; the addition of USB 3 cabling instead of USB 2 cabling would change absolutely nothing about how they work.
So, no: I wouldn't expect the cable for a pair of headphones (of any price) to support USB 3. That represents extra complexity (literally more wires inside) that is totally irrelevant for the product the cable was sold with. (The cables included with >$1k iPhones don't support USB 3, either.)
Meanwhile: Fast charging. All correctly-made USB C cables support at least 3 amps worth of 20 volts, or 60 Watts. This isn't an added-cost feature; it's just what the bare minimum no-emarker-inside specification requires. A 25-cent USB C-to-C cable from Temu either supports 60W of USB PD, or it is broken and defiant of USB-IF's specifications.
---
Now, of course: The cable could be thinner and more flexible and do these same things. That'd probably be preferred, even: Traditional analog headphones often used very deliberately thin cables with interesting construction (like using Litz wire to reduce the amount of internal plastic insulation) to improve the user's freedom of movement, and help prevent mechanical noise from the cables dragging across clothes and such from being telegraphed to the user's ears.
Using practical cabling was something that headphone makers strove to be good at. I'm a little bit annoyed to learn that a once-prestigious company like B&W is shipping cables with headphones that are the antithesis of what practical headphone cables should be.
---
But yeah, both USB C cables and the ports on devices could be better marked so we know WTF they do, to limit the amount of presumption required in the real world. So that a person can tell -- at a glance! -- what charging modes a device accepts or provides, or whether it supports video, or whether it is USB 2 or USB 3, or [...].
Prior to USB C, someone familiar with the tech could look at a device or a cable and generally succeed at visually discerning its function, but that's broadly gone with USB C. What we have instead is just an oblong hole that looks like all of the other oblong holes do.
After complaining about this occasionally since the appearance of USB C a decade or so ago, I've come to realize that most people just don't care about this -- at all. Not even a little bit. Even though these things get used by common people every day, the details are completely out of the scope of their thought processes.
It doesn't have to be this way, but it's not going to change: Unmarked ports are connected together with unmarked cables and thus unknown common capabilities are just how we roll.
The Litz wire point is pretty spot on, traditional headphone manufacturers understood that cable ergonomics mattered. Somewhere in the transition to USB-C, that institutional knowledge just evaporated.
Your last paragraph is depressingly accurate though. I think that's exactly why devices like the Treedix exist: the standards bodies and manufacturers clearly aren't going to fix the marking problem, so now we need test equipment to figure out what our own cables do.
The batteries, the grid/generator-supplied power supplies, and the telephone switch equipment are all connected in parallel -- as if the entire DC power infrastructure consists of only two wires, and everything involved with it connects only to those two wires.
1. In normal operation, the batteries are kept at a constant state of charge. The switches are powered from the same DC bus that keeps the batteries charged.
2. When the power grid goes down, the batteries slowly discharge and keep things running like nothing ever happened (for hours/days/weeks). There is no switchover for this; it's just the normal state, minus the ability to juice-up the batteries. (Remember: It's just one DC bus.)
3. When the grid comes back up (or the generators kick in), the batteries get recharged. There is no switchover for this either; nothing important even notices. (Still just one DC bus.)
4. If the grid stays up long enough, go to 1. Repeat as the external environment dictates. (And as you might guess, it's still one DC bus and there's also no switchover here. Things just continue to work.)
--
You can play with this at home with a capacitor (which loosely acts like a battery does), an LED+resistor combo (which acts as a load), and a small power supply that is appropriate for the LED+resistor you've chosen (which acts as the AC-DC converting grid input).
Wire all 3 parts up in parallel and the light comes on.
Disconnect the power supply, and the light stays on for a bit -- it successfully runs from power stored in the capacitor.
Reconnect the power supply, and the light comes on and the capacitor ("battery") recharges -- concurrently.
Improve staying power by adding more parallel capacitance. Reduce or eliminate it by reducing or eliminating capacitance. Goof around with it; it's fun. (Just don't wire the capacitor backwards. That's less fun.)
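The staying power of that experiment is easy to estimate on paper, too. Here's a rough sketch that treats the LED+resistor as a constant-current load draining the capacitor (the component values are illustrative, not from the experiment above):

```python
# Rough model of the "capacitor as battery" experiment: after the supply is
# disconnected, a roughly constant-current LED load drains the capacitor
# (dV/dt = -I/C) until the voltage sags below the LED's forward-voltage floor.
# All component values here are illustrative assumptions.

def runtime_seconds(capacitance_f, v_start, v_cutoff, load_amps):
    """Seconds the load stays lit while the cap sags from v_start to v_cutoff."""
    charge_used = capacitance_f * (v_start - v_cutoff)  # coulombs
    return charge_used / load_amps

# 4700 uF cap charged to 12 V, LED+resistor drawing ~20 mA, LED quits below ~2 V:
t = runtime_seconds(4700e-6, 12.0, 2.0, 0.020)
print(f"light stays on for roughly {t:.1f} s")  # a couple of seconds

# More parallel capacitance -> longer staying power, exactly as described:
t2 = runtime_seconds(4 * 4700e-6, 12.0, 2.0, 0.020)
print(f"with 4x the capacitance: roughly {t2:.1f} s")
```

Swap in your own capacitor and load values; the linear scaling with capacitance is the whole point of the "add more parallel capacitance" suggestion.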
That'd be neat. But there's no standard voltage for home solar: The batteries might be 12, 24, 48, or 60 V, or even much more. Meanwhile, the panel arrays commonly output anything from as low as 0V up to ~600V. There's not much for rules and norms here.
Even if we were to standardize a low (<50V) voltage for DC distribution within homes, we'd still need ~120/240VAC to power big stuff, or we'd instead need even-larger conductors (more copper) than we use today to do the same work with low voltage.
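The copper problem is just Ohm's law. A quick sketch, using an illustrative 1500W space-heater-sized load and a made-up wiring resistance, shows why the same power at 48V needs fatter conductors:

```python
# Same power delivered at 48 VDC vs 120 VAC: current scales inversely with
# voltage, and resistive loss in the wiring scales with current squared.
# The 1500 W load and 0.1 ohm round-trip wire resistance are illustrative.

def current_amps(power_w, volts):
    return power_w / volts

def loss_watts(power_w, volts, wire_ohms):
    i = current_amps(power_w, volts)
    return i * i * wire_ohms  # I^2 * R heating in the walls

P = 1500.0  # watts
R = 0.1     # ohms of round-trip branch wiring (illustrative)
print(current_amps(P, 120))   # 12.5 A
print(current_amps(P, 48))    # 31.25 A -- needs much fatter copper
print(loss_watts(P, 120, R))  # 15.625 W lost in the wire
print(loss_watts(P, 48, R))   # 97.65625 W lost in the same wire
```

Two and a half times the current, and over six times the heat dissipated in the same wire; that's why dropping the distribution voltage means upsizing every conductor.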
But, sure -- we can play it out. So let's say we have an in-home 48VDC distribution standard and decide that this is the path forward and we enshrine it in law.
We need to convert whatever the solar system has available to 48VDC. Then, we need to distribute that 48VDC using a completely separate network of cabling. Finally, we still need to convert 48VDC to whatever it is that devices can actually use.
That's not representative of a reduction in steps, or an increase in efficiency.
That is instead just an increase in installed infrastructure expense, and a decrease in device compatibility. It takes what we have, which is simply universal (at least within any given geographical area) and adds complexity.
Is the juice worth the squeeze, though? Two sets of home wiring voltages? Substantially bigger copper wire inside the walls instead of existing copper, in order to do the same work? Two sets of appliances (of all sizes) on shelves at the store? More adapters?
Billy now needs to bring 2 wall warts to make sure he can charge his portable gear at a friend's house instead of just 1, because he's never sure until he gets there if they've got a 120 or 240v house like they all used to be, a combination house, or if it's one of those solar-only places that only has the weird plugs.
What we have now is 1 cable plant connecting the rooms of a home, and an increasing number of hybrid solar inverters that -- on a sunny day -- cheerfully convert solar power directly from whatever the panels are outputting to the 120/240 VAC wiring that both existing and future appliances know how to use. At night, these hybrid systems do the same thing from whatever voltage the battery uses and convert that to AC. There's only 1 voltage, and only 1 plug; Billy brings 1 wall wart and knows he can charge his stuff.
To be sure: What we have is not strictly ideal, but then neither is changing things without a clear positive benefit.
Again: What's the qualitative advantage of changing this, other than change for the sake of change?
DC might feel nice and neat, but in reality it doesn't seem to be shaped that way at all to me.
It's fun to think about. There's advantages both ways, but I think it leans most-heavily towards keeping AC.
1. One of these is simplicity. With AC, one single home run of cabling (eg, Romex) can feed a whole room full of stuff, like a bedroom or a living room. At one end of the run is a circuit breaker (a fairly simple electromechanical device) and at the other end is a series of outlets (which are physically daisy-chained, but are functionally just wired in parallel with each other).
Since one single run of cable can feed many devices, it is easy to accomplish.
2. Another advantage is that it is universal. Anything can plug into these outlets. Whatever a person brings into the home to use, they can plug it into an outlet and it works. It works this same way in every home.
3. And there's quite a lot of power available: A common 20A 120v branch circuit cabled up with 12AWG Romex is stated to supply up to 16A continuously, or 1920W. For intermittent loads, it can supply 20A -- or 2400W. That's tiny by European standards, but it's still quite a lot of power. It's plenty to run a space heater when Grandma visits and she complains about the guest room being cold (even as you start to sweat when you cross the threshold to investigate) and a big TV and a whole world of table lamps, all at once. And you can plug this stuff into any outlets in a room, and it Just Works.
4. But, sure: Lots of devices want DC, not AC. So there's a necessary conversion step that is either integral to the device being plugged in, or in the form of the external wall warts we all know very well.
So let's compare to power-over-ethernet.
1. It's also simple, but only tangentially so. One home-run cable per outlet, whether that outlet is used or not, is something that can be rationalized as being a simple topology. A PoE switch at the head-end instead of a central box with circuit breakers is a simple-enough thing to transition to. And a lot more individual cables are required, but they're relatively small and are generally easier to install.
2. It's standardized, but it's not universal at all. I've got a few PoE widgets around the house, but I'm pretty friggin' weird when it comes to what I do with electricity. I can't go to Wal-Mart and buy more PoE widgets to use at home, and when people visit they aren't bringing PoE adapters to charge their phones and other electronics. My computer monitor doesn't have a PoE input. I can easily imagine a table lamp or a fan that connects to PoE, and also uses it as a network connection for automation, and that sounds pretty sweet in ways that tickle my automation bones in the most filthy of fashions... but that's getting even further into the weeds compared to how regular people expect to do regular things.
3. There isn't a lot of power available. 802.3bt Type 4 is the highest spec. And within that spec: While switch ports can output up to 100W, a device being powered is limited to drawing no more than 71.3W. Now, sure, that's 71.3W per port, but in a room with 10 ports that's still only ~700W -- at most -- in that room. And Grandma's space heater won't run on 71.3W, nor her electric blanket. My laptop wants more than this. The list of useful, portable things that we casually plug into a wall that draw less than 71.3W is pretty short, and most don't benefit from the main advantage of PoE, which is a combination of [some] power alongside high-speed Ethernet data.
4. We still need wall warts since PoE is nominally ~48VDC. For example: Phones use less than 71.3W while charging, but they don't run on 48V. That means 120V AC comes in from the grid, gets shifted to 48VDC for distribution within the dwelling, and then gets shifted yet again to produce the power (5, 9, 15, and 20V are common-enough in USB PD world) that devices actually want. That's more lossy conversion steps, not fewer -- and we still get to keep the extra conversion (wall warts) as punishment for our great ideas. This is not the path towards increased energy efficiency.
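The efficiency point in that last item can be made concrete. A sketch with assumed per-stage converter efficiencies (real converters vary; these percentages are illustrative, not measured):

```python
# Sketch of the conversion chains described above, with assumed per-stage
# efficiencies. Real AC-DC and DC-DC converters vary widely; 92% and 90%
# here are illustrative placeholders.

def chain_efficiency(*stage_effs):
    """Multiply per-stage efficiencies to get end-to-end efficiency."""
    eff = 1.0
    for e in stage_effs:
        eff *= e
    return eff

# Wall AC -> 48 VDC PoE distribution -> 5..20 VDC at the device:
poe_path = chain_efficiency(0.92, 0.90)  # two conversions
# Wall AC -> wall wart directly at the device:
wart_path = chain_efficiency(0.92)       # one conversion

print(f"PoE path delivers {poe_path:.0%} of input power")   # 83%
print(f"wall-wart path delivers {wart_path:.0%}")           # 92%

# And the room-level budget: ten 802.3bt Type 4 ports at 71.3 W each,
# versus 1920 W continuous on a single 20 A / 120 V branch circuit:
print(f"PoE room budget: {10 * 71.3:.0f} W")
```

Every extra stage multiplies in another loss, which is the whole "more conversion steps, not fewer" complaint in one line of arithmetic.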
---
PoE is great for the things we use it for today. A camera, a wireless access point -- you know, fixed-location stuff that uses networked data as its primary function and also requires power.
Installed PoE light fixtures (like, say, task lights in a kitchen) also sounds neat -- unless they die prematurely and no PoE replacements are to be found. (Now, you have not just one or two problems, but many: The lights aren't working in that space and they can't be replaced with a trip to Lowes because the Romex that would normally have been installed was deliberately deleted from the plan. It could have been a 20-minute DIY fix that costs less than $100, but now it involves drywall and paint and retrofitting new cabling. Or maybe PoE replacements do exist, but it's now 2035 and the new ones don't talk the same network protocols as the old ones did.)
But there are other upsides: I've got an 8-port PoE-powered network switch that works a treat. It's a dandy little thing. And it sure would be neat to plug my streaming box in with PoE and kill two birds with one cable; I would like that very much.
But most people? Most people don't give a damn about ethernet (PoE, or not!) these days, or streaming boxes, and that trend is increasing. They just plug their lamp into the regular outlet on the wall like they always have, and deal with whatever terrible UI is built into their smart TV, and use wifi for anything that needs data.
And when they buy a home that is filled with someone else's smart infrastructure, their first task (more often than not) is to figure out who to call to erase those parts completely and put it back to being normal and boring.
At home, I put all of my network infrastructure software in one basket because that seems like the right path towards maximizing availability[1]: It provides one point of potential hardware failure instead of many.
For me, that means doing routing, DNS, VPN, and associated stuff with one box running OpenWRT. It works. It's ridiculously stable. And rather than having a number of things that could break the network when they die, I only have 1 thing that can do so.
That box currently happens to be a Raspberry Pi 4 that uses VLANs as Ethernet port expanders, but it is also stable AF with a [shock! horror!] USB NIC. I picked that direction years ago mostly because I have a strong affinity towards avoiding critical moving parts (like cooling fans) in infrastructure.
But those details don't matter. Any single box running OpenWRT, OPNsense, pfSense, Debian, FreeBSD, or whatever, can behave more-or-less similarly.
[1]: Yeah, so about that. If the real-world MTBF for a system that relies upon 1 box is 10 years, then the MTBF for a system relying on 2 boxes to both keep working is only 5 years. Less is more.
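That footnote is just series-reliability arithmetic: when the system needs every box working, failure rates add, so the combined MTBF is the harmonic combination of the individual ones (assuming independent, exponentially-distributed failures):

```python
# Series-reliability arithmetic behind the footnote: a system that requires
# all of its boxes to be working fails when any one of them fails, so the
# failure rates (1/MTBF) add. Assumes independent exponential failures.

def combined_mtbf(*mtbfs):
    return 1.0 / sum(1.0 / m for m in mtbfs)

print(combined_mtbf(10, 10))      # 5.0 years: two required 10-year boxes
print(combined_mtbf(10, 10, 10))  # ~3.3 years with three of them
```

Hence "less is more": each additional box that must stay up drags the whole system's expected uptime down.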
Perhaps it seems obvious to some, but it's not obvious to me so I need to ask: What's the advantage of a selectively-available DNS for kids playing Minecraft with Nintendo Switch instead of regular DNS [whether self-hosted or not]?
All I can think of is that it adds obscurity, in that it makes the address of the Minecraft server more difficult to discover or guess (and thus keeps everything a bit more private/griefing-resistant while still letting kids play the game together).
And AXFR zone transfers are one way that DNS addresses leak. (AXFR is a feature, not a bug.)
As a potential solution:
You can set up DNS that resolves the magic hardcoded Minecraft server name (whatever that is) to the address of your choosing, and that has AXFR disabled. In this way, nobody will be able to discover the game server's address unless they ask that particular DNS server for the address of that particular name.
It's not airtight (obscurity never is), but it's probably fine. It increases the size of the haystack.
(Or... Lacking VPN, you can whitelist only the networks that the kids use to play from. But in my experience with whitelisting, the juice isn't worth the squeeze in a world of uncontrollably-dynamic IP addresses. All someone wants to do is play the game/access the server/whatever Right Now, but the WAN address has changed so that doesn't work until they get someone's attention and wait for them to make time to update the whitelist. By the time this happens, Right Now is in the past. Whitelisting generally seems antithetical towards getting things done in a casual fashion.)
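As a sketch of that "answers the one magic name, transfers nothing" resolver: a dnsmasq configuration could look like the following. The hostname and addresses are placeholders (the actual Minecraft server name isn't specified here), and note that dnsmasq doesn't implement AXFR at all, which covers that constraint for free.

```
# /etc/dnsmasq.conf -- hostname and IPs are placeholders, not the real names
address=/mc.example.net/192.0.2.10   # resolve the one magic name locally
server=9.9.9.9                       # forward everything else upstream
# dnsmasq has no AXFR/zone-transfer support, so transfers are refused
# by construction; no extra configuration needed for that.
```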
Ok, why would I want to do that? Because when Microsoft bought Minecraft they decided to split the ecosystem into the Java Edition (everyone playing on a computer) and Bedrock Edition (Consoles, Tablets, ...) and cross-play is not possible on the official realms. That leaves out the option to just pay and rent a realm for the group.
So we're hosting our own Minecraft server and a suitable connector for cross-play - and it's easy to join on tablets, computers and so on because there's a button that allows you to enter an address. But on the Switch, Microsoft in its wisdom decided that there'd be no "join random server" button. But there are some official realm servers, and they just happen to host a lobby, and the client understands some interface commands sent by the server (1). Some folks in the community devised a great hack - you just host a lobby yourself that presents a list of servers of your choice. But to do that, you need to bend the DNS entries of a few select hostnames that host the "official" lobbies so that they now point to your lobby. Which means you need to run a resolver that is capable of resolving all hostnames, because you need to set it in the Switch's networking settings as the primary DNS server.
Now, there are people that run resolvers in the community and that might be one option, but I'm honestly a bit picky about who gets to see what hostnames my kids' Switch wants to resolve.
Whitelisting networks is impossible - it's residential internet.
The reason I'd be interested in running this behind a VPN is that I don't want to run an open resolver and become part of an amplification attack. (And sadly, the Switch 1 does not have a sufficiently modern DNS stack so that I can just enable DNS cookies and be done with it. The Switch 2 supports it).
Sorry if this sounds complicated. It's just hacks on hacks on hacks. But it works.
(1) judging from the looks and feel, this is actually implemented as a minecraft game interface and the client just treats that as a game server. It even reports the number of players hanging out in the lobby.
Thanks. I suspected that this is where things were heading. I don't see a problem with using hacks-on-hacks to get a thing done with closed systems; one does what one must.
On the DNS end, it seems the constraints are shaped like this:
1. Provides custom responses for arbitrary DNS requests, and resolves regular [global] DNS
2. Works with residential internet
3. Uses no open resolvers (because of amplification attacks)
4. Works with standalone [Internet-connected] Nintendo Switch devices
5. Avoids VPN (because #4 -- Switch doesn't grok VPN)
With that set of rules, I think the idea is constrained completely out of existence. One or more of them need to be relaxed in order for it to get off the ground.
The most obvious one to relax seems to be #3, open resolvers. If an open resolver is allowed then the rest of the constraints fit just fine.
DNS amplification can be mitigated well-enough for limited-use things like this Minecraft server in various ways, like implementing per-address rate limiting and denying AXFR completely. These kinds of mitigations can be problematic with popular services, but a handful of Switch devices won't trip over them at all.
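For instance, in BIND 9 those two mitigations are a few lines of named.conf (the rate value here is an illustrative guess; tune it to your traffic):

```
// named.conf fragment (BIND 9): response-rate-limiting plus no transfers.
options {
    rate-limit {
        responses-per-second 5;   // illustrative; tune for your clients
    };
    allow-transfer { none; };     // deny AXFR entirely
    recursion yes;
    allow-recursion { any; };     // open resolver, mitigated by RRL above
};
```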
Or: VPN could be used. But that will require non-zero hardware for remote players (which can be cheap-ish, but not free), and that hardware will need power, and the software running on that hardware will need to be configured for each WLAN it is needed to work with. That path is something I wouldn't wish upon a network engineer, much less a kid with a portable game console. It's possible, but it feels like a complete non-starter.
Yep, I agree. It's essentially impossible given the constraints. I'm mostly responding to a post that says "just run it on a VPN" with an example that just can't run on a VPN.
(3) would be easy to handle if DNS Cookies were sufficiently well supported because they solve reflection attacks and that's the most prominent. Rate limiting also helps.
At the moment I'm at selectively running the DNS server when the kids want to play because we're still at the supervised pre-planned play-session. And I hope that by the time they plan their own sessions, they've all moved on to a Switch 2.
Thank you for the explanation, it was most interesting, I had no idea Bedrock could be coerced into talking to java servers.
Here are a few ideas:
1. Geoblocking. Not ideal, but it can make your resolver public for fewer people.
2. What if your DNS only answers queries for a single domain? Depending on the system, the fallback DNS server may handle other requests?
3. You could always hand out a device that connects to the WLAN. Think a cheap esp32. Only needs to be powered on when doing the resolution. Then you have a bit more freedom: ipv6 RADV + VPN, or try hijacking DNS queries (will not work with client isolation), or set it as resolver (may need manual config on each LAN, impractical).
4. IP whitelist, but ask them to visit a HTTP server from their LAN if it does not work (the switch has a browser, I think), this will give you the IP to allow, you can even password-protect it.
I'd say 2 is worth a try. 4 is easy enough to implement, but not entirely frictionless.
You could run a DNS server and configure the server with a whitelist of allowed IPs on the network level, so connections are dropped before even reaching your DNS service.
For example, any Red Hat-based Linux distro comes with firewalld. You could set rules that by default block all external connections and only allow your kids' and their friends' IP addresses to connect to your server (and only specifically on port 53). So your DNS server will only receive connections from the whitelisted IPs. Of course, the downside is that if their IP changes, you'll have to troubleshoot and whitelist the new IP, and there is the tiny possibility that they might be behind CGNAT, where their IPv4 is shared with another random person who is looking to exploit DNS servers.
But I'd say that is a pretty good solution, no one will know you are even running a DNS service except for the whitelisted IPs.
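To sketch what that looks like with firewalld's rich-rule syntax (the source addresses below are documentation placeholders, not real client IPs):

```
# firewalld: allow DNS only from known client addresses; everything else is
# dropped by the zone's default policy. 203.0.113.x is a placeholder range.
firewall-cmd --permanent --zone=public --add-rich-rule='rule family="ipv4" source address="203.0.113.5" port port="53" protocol="udp" accept'
firewall-cmd --permanent --zone=public --add-rich-rule='rule family="ipv4" source address="203.0.113.5" port port="53" protocol="tcp" accept'
firewall-cmd --reload
```

One rich rule per allowed client address (UDP and TCP both, since DNS uses either), and when a kid's home IP changes, it's one more `--add-rich-rule` plus a reload.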
Correct me if I misunderstand what you're trying to do:
What you want to do is (on each LAN that has a Switch that you want to play on your specific Minecraft server) make the hostname of the server the Switch would ordinarily connect to resolve to the server that you're hosting?
If you're using OpenWRT, it looks like you can add the relevant entries to '/etc/hosts' on the system and dnsmasq will serve up that name data. [0] I'd be a little shocked (but only a little) if something similar were impossible on all non-OpenWRT consumer-grade routers.
My Switch 1 is more than happy to use the DNS server that DHCP tells it to. I assume the Switch 2 is the same way.
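For illustration, the OpenWRT side of that would be something like the following (the hostname and addresses are placeholders; the real lobby hostnames aren't given here):

```
# /etc/hosts on the OpenWRT router -- placeholder name and address
192.168.1.50   lobby.example.net

# then have dnsmasq reread the file:
#   /etc/init.d/dnsmasq restart
```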
I can do that for my network - but the group is multiple kids that play from their home. I'm not going to teach all of those parents how to mess with their network. There's just way too many things that can go wrong. Also, won't work if the kid is traveling.
From all this what I got is that Microsoft is connecting to some random servers not using TLS and then somehow outputting that data straight into the Nintendo Switch
OS/2 may have been a better Windows than Windows during the Warp days 30-ish years ago. It was also a very competent operating system in its own right.
We all know the story:
It never had a broad base of native applications. It could have happened, but it did not happen. Like, back then when Usenet was the primary way of conducting written online discourse, the best newsreader I had on OS/2 was a Windows program; the ones that ran natively on OS/2 weren't even close.
And OS/2 never had support from a popular company. There were times at OS/2's peak (such as it was) when it was essentially impossible to buy a new computer with OS/2 pre-installed and working correctly even from IBM.
Linux, though? Over those same 30-ish years, a huge number of native applications have been written. Tons of day-to-day stuff can be done very well in Linux without even a hint of Wine, and that's been reality for quite a long time now.
The missing piece, if there is one, is gaming. It'd be great to have more native games and fewer abstraction layers. But systems like Valve's popular Steam Deck and upcoming Steam Machine are positive aspects that OS/2 never had an equivalent to. And since Steam is very nearly ubiquitous, companies that sell computer game software do pay attention to what Valve is doing in this space.
(And frankly, when a game runs great in some Steam/Wine/Proton/Vulkan shapeshifting slime mold abstraction stack, I really do not care that it isn't running natively. I push the button and receive candy.)
(Back and forth with the same poop, forever.)