Connectivity

Peering with AT&T and long trips through Charter's network

Posts: 10 Spectator
edited April 14 in Connectivity

I have Spectrum 1G/40M DOCSIS 3.1 at one premises (MI) and AT&T 2G fiber at the other (CA).

What I'm finding is that the "last mile" speeds on each end are fine, but the actual throughput between the two is pretty miserable. I'm doing cross-premises backups over VPN, so I don't particularly care about latency, but I'd expect better than 150 Mb/sec, at least in the direction that isn't capped by the 40 Mb/s DOCSIS upload.

From Spectrum to AT&T:

 2  syn-072-031-150-017.inf.spectrum.com (72.31.150.17)  17.387 ms  9.379 ms  9.984 ms
 3  lag-60.hcr02fmhlmiof.netops.charter.com (72.31.205.62)  10.190 ms  10.984 ms  12.018 ms
 4  lag-22.detr01-cbr2.netops.charter.com (71.46.180.76)  17.016 ms  18.227 ms  17.990 ms
 5  lag-100.detr01-cbr1.netops.charter.com (72.31.205.107)  19.965 ms  18.414 ms  14.879 ms
 6  * lag-110-10.chcgildt87w-bcr00.netops.charter.com (24.27.236.0)  47.398 ms *
 7  lag-0.pr2.chi10.netops.charter.com (66.109.5.225)  19.778 ms  24.023 ms
    lag-401.pr2.chi10.netops.charter.com (66.109.0.109)  28.006 ms
 8  syn-024-030-201-070.inf.spectrum.com (24.30.201.70)  24.284 ms  21.314 ms  23.838 ms
 9  * * *

There the traceroute just dies for no reason I can discern.

From AT&T to Spectrum:

 2  162-231-240-1.lightspeed.sntcca.sbcglobal.net (162.231.240.1)  3.459 ms  5.060 ms  2.209 ms
 3  71.148.149.126 (71.148.149.126)  2.070 ms  4.046 ms  2.897 ms
 4  * * *
 5  * 32.130.91.81 (32.130.91.81)  3.744 ms  3.993 ms
 6  lag-1107.pr1.sjc10.netops.charter.com (24.30.200.141)  6.928 ms  5.242 ms  7.429 ms
 7  * lag-13.snjucacl67w-bcr00.netops.charter.com (66.109.5.132)  52.075 ms  54.173 ms
 8  * * *
 9  lag-14.chcgildt87w-bcr00.netops.charter.com (66.109.6.15)  51.971 ms  51.338 ms  53.850 ms
10  lag-10-10.detr01-cbr1.netops.charter.com (24.27.236.1)  56.142 ms  56.210 ms  57.497 ms
11  lag-100.detr01-cbr2.netops.charter.com (72.31.205.106)  58.898 ms  61.558 ms  59.039 ms
12  lag-1.hcr02fmhlmiof.netops.charter.com (71.46.180.77)  60.936 ms  59.545 ms  59.214 ms
13  syn-072-031-205-063.inf.spectrum.com (72.31.205.63)  59.249 ms  59.743 ms  60.198 ms

And the traceroute dies there.
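
A hedged aside: traceroutes often end in "* * *" because the destination network filters the UDP probes or the ICMP replies, not because traffic stops flowing. Two ways to probe further (Linux, run as root; <remote-host> is a placeholder for the far endpoint):

    # TCP SYN probes to an open port often get past filters that
    # drop classic UDP-probe traceroute:
    traceroute -T -p 443 <remote-host>

    # mtr reports per-hop loss over many cycles, which separates
    # "router ignores probes" from real loss along the path:
    mtr -rw -c 100 <remote-host>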

What the second traceroute shows is that getting from AT&T to Charter happens pretty darned quickly, but the journey through Charter's network to the other end seems pretty tortuous.

Comments

  • Posts: 1,010 Contributor
    edited April 14

    Just curious if you could clarify what you mean by "actual throughput" and by what means & measure you're seeing "150 Mb/sec."

  • Posts: 5,261 ✅ Verified Employee Moderator
    Answer ✓

    @nsayer

    Please check out the Ping and Traceroute FAQ.

    If you still have questions, let us know.

  • Posts: 234 Contributor
    edited April 14

    Guessing your backup is running high compression? It's likely not a constant stream of data, but coming periodically in small chunks, with those RTS/CTS-style handshakes on each cycle.

    If so, you may be better off backing up to a scratch disk and pushing the final backup afterwards, or otherwise using a drive that's set to sync automatically.

  • Posts: 10 Spectator

    @HT_Greenfield 150 Mb/sec or so is what I get from iperf in the CA → MI direction (because the other way is going to be limited by the 40M upload speed).
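
    For anyone reproducing this kind of test, a minimal sketch with iperf3 (plain iperf syntax is similar); the 10.0.0.1 tunnel address is a hypothetical stand-in for the Michigan end:

        # On the Michigan host, inside the tunnel:
        iperf3 -s

        # From the California host: push data CA -> MI for 30 seconds.
        iperf3 -c 10.0.0.1 -t 30

        # -R reverses the direction (MI -> CA) without swapping
        # client and server; that path is capped by the DOCSIS upload.
        iperf3 -c 10.0.0.1 -t 30 -R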

  • Posts: 10 Spectator

    @raist5150 The actual backup is Apple Time Machine, so I do not believe it's compressed, and it's just SMB. That said, that's not how I'm measuring the throughput. I'm using iperf for that.

  • Posts: 1,010 Contributor
    edited April 15

    Heard that. No idea how iPerf imputes "bandwidth" in Mbits/sec from transfer rate in MBytes/sec, but there's no reason to doubt that it does it well. What kind of "bandwidth" does iPerf3 report from the iperf.he.net Hurricane Electric public iPerf3 server in Fremont to your host in Michigan versus to your host in California, both with and without the VPN?

  • Posts: 10 Spectator
    edited April 19

    Well, testing against iperf.he.net would by necessity not use the VPN, since the VPN is just a tunnel between the two premises.

    I had been using just plain iperf. It showed poor results from CA to HE, so I installed iperf3 and got 2.15 Gb/sec - so almost the full pipe.

    Going from MI to HE, I get about 30 Mb/s, but that's because it's throttled by the cable modem upload there. Using -R, I get about 400 Mb/sec.

    All that said, iperf3 to HE seems to be preferring IPv6. The VPN is IPv4 both inside and out. Even so, the bandwidth from HE to MI is less than half the cable modem wire speed in that direction (albeit over 2,000 miles instead of 50).

    I repeated the test from CA to MI with iperf3 and got the same results - ~180 Mb/sec.
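
    As an aside, the v4/v6 preference can be pinned explicitly in iperf3, which makes the comparison cleaner; iperf.he.net is the public server already mentioned above:

        # Force IPv4 vs IPv6 to rule out a protocol-dependent path:
        iperf3 -4 -c iperf.he.net
        iperf3 -6 -c iperf.he.net

        # -R pulls data from the server instead of pushing to it.
        iperf3 -4 -R -c iperf.he.net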

  • Posts: 10 Spectator
    edited April 19

    I wondered if this was a v4 vs v6 issue, so I temporarily made a firewall rule to expose iperf3 from the Michigan side with both v4 and v6 and tested outside the VPN. I got the same result - around 450 Mb/sec with both protocols, but still about 185 Mb/sec through the VPN.

    So clearly the VPN is having a pretty major impact. It's WireGuard running on an Asus BQ16 on both ends. Now, the BQ16 is kind of a heavyweight, so I'd be a bit surprised if WireGuard on it would have that much of an effect, but I don't have a whole lot else to hang it on, I suppose, except for wondering where the other 600 Mb/sec is going on the trip from California to Michigan.
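
    One hedged suspect for a single TCP flow over a ~60 ms path is the bandwidth-delay product: a flow can never exceed window ÷ RTT, and 185 Mb/s × 60 ms is only about 1.4 MB of in-flight data, close to common default socket-buffer ceilings. A quick way to test that theory (the 10.0.0.1 tunnel address is hypothetical):

        # If one flow is window-limited, several in parallel should
        # fill the pipe even though a single flow cannot:
        iperf3 -c 10.0.0.1 -P 8

        # Or widen the socket buffer for a single flow:
        iperf3 -c 10.0.0.1 -w 4M

        # Check the Linux autotuning ceilings on both ends:
        sysctl net.ipv4.tcp_rmem net.ipv4.tcp_wmem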

  • Posts: 10 Spectator
    edited April 21

    I tried an experiment where I moved the VPN functionality off the router and onto a pair of Raspberry Pi 5s. That didn't change anything, so there's something about the traffic being carried over WireGuard itself that is reducing the throughput. Unless Spectrum is treating TCP differently than UDP, I can't imagine what's going on.

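
    The TCP-vs-UDP theory is directly testable with iperf3's UDP mode outside the tunnel; <mi-public-host> is a placeholder for the Michigan end's public address:

        # Push raw UDP toward Michigan at 500 Mb/s and watch the
        # reported loss/jitter; heavy loss that TCP at the same rate
        # doesn't see would point at UDP treatment on the path.
        iperf3 -u -b 500M -t 30 -c <mi-public-host>

        # TCP over the same path, for comparison:
        iperf3 -t 30 -c <mi-public-host>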

  • Posts: 234 Contributor
    edited April 21

    Just out of curiosity, how much is the VPN reducing your usable portion of the MTU? Could be at least a contributing factor if you are monitoring the user data flow and not the actual packet flow.

    i.e., 1380 bytes of user data per packet in a 1518-byte frame (1500 MTU + 18 for Ethernet), vs. 1460 bytes of user data per packet without the tunnel.

    Still moving 1518 bytes per frame, but moving less of your user data per packet because of the extra overhead for encapsulation. Something that gets overlooked often.

  • Posts: 10 Spectator

    WireGuard does reduce the MTU, but you'd expect TCP path MTU discovery to work that out.
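
    PMTUD only works if ICMP "fragmentation needed" messages make it back, so it's worth verifying rather than assuming. A sketch (Linux; assumes a WireGuard interface wg0 at its default 1420 MTU and a hypothetical 10.0.0.1 tunnel peer):

        # 1392 bytes of ICMP payload + 28 bytes of headers = 1420.
        # -M do sets Don't Fragment; if this size fails but smaller
        # sizes succeed, the tunnel MTU is behaving as expected.
        ping -M do -s 1392 -c 4 10.0.0.1

        # If PMTUD is black-holed somewhere, clamping MSS on the
        # tunnel sidesteps the problem for TCP:
        iptables -t mangle -A FORWARD -o wg0 -p tcp \
          --tcp-flags SYN,RST SYN -j TCPMSS --clamp-mss-to-pmtu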

  • Posts: 234 Contributor
    edited April 23

    Not sure the thought was conveyed well...

    What I meant is you may in fact be moving more data than realized because the method of tracking may not be accounting for the overhead, making it appear worse than it actually is.

    For example... you could still be moving 80,000-plus packets per second (over 970 Mb/s on the wire), but it may only be carrying 883 Mb/s of user data (versus 934 Mb/s without the VPN overhead).

    That stacks on top of the potential slowdowns because of the difference in the VPN traffic flow (increased latency... different routes... server delays vs routers... etc.).
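
    The framing arithmetic above can be checked directly; a one-liner reproducing those numbers:

        # Goodput at 80,000 frames/sec carrying 1460 vs 1380 bytes
        # of user data per packet (figures from the example above):
        awk 'BEGIN {
          pps = 80000
          printf "no VPN: %.0f Mb/s\n", pps * 1460 * 8 / 1e6  # ~934
          printf "VPN:    %.0f Mb/s\n", pps * 1380 * 8 / 1e6  # ~883
          printf "loss:   %.1f%%\n", (1 - 1380/1460) * 100    # ~5.5
        }'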

  • Posts: 10 Spectator

    Ok, I can see what you're saying. The difference between using the VPN and not is more like a 60% reduction in throughput, and I can't quite see how VPN overhead would account for that. Plus, the way I'm measuring with iperf, I'm just blasting data over TCP, and TCP ought to be able to discover the path MTU and accommodate it optimally.

  • Posts: 10 Spectator

    If anything, I could see this being a result of Charter prioritizing TCP over UDP (which is how WireGuard sends packets).

  • Posts: 234 Contributor

    Well, the problem is twofold.

    The extra overhead can cause anywhere from around 5 to 10% less user data per packet. This isn't a slowdown of your connection per se, but it FEELS like it because it requires more 1518-byte frames to send the same amount of user data. You can simulate it by simply dropping the MTU of a normal connection (as sketched after this comment)... smaller slices of user data consume more packets/time on the wire to send the same-sized user file.

    Then you have the proxy to someone else's gateway—effectively you are jumping over to someone else's network connection and are now subject to their routing and shaping policies. This also typically involves going through a server where it might normally be a router, which may inject considerable latency at that one junction. Couple this with the extra processing overhead for encrypting/decrypting the data and the exchange to another ISP before reaching the destination (which may have more peering constraints or other issues).

    It all adds up to potentially longer routes, more latency, more queuing... multiple speedbumps that can cause it to take longer to move the same amount of data otherwise.
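
    The MTU-drop simulation mentioned above, as a sketch (Linux; eth0 and <mi-public-host> are stand-ins for the real interface and far end):

        # Shrink a normal interface's MTU to roughly what the tunnel
        # leaves you, rerun the no-VPN iperf3 test, and compare:
        ip link set dev eth0 mtu 1420
        iperf3 -c <mi-public-host> -t 30
        ip link set dev eth0 mtu 1500   # restore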

  • Posts: 10 Spectator

    Ok, but 5-10% isn't 60%.

    What this feels like is that Spectrum's network just sort of sucks for the paths I'm trying to use. Spectrum has almost certainly optimized its network for people streaming video or playing games or whatnot, rather than for bulk data transfer to and from other consumer ISPs. I can see a hint of this when using iperf3 against other public iperf3 servers: the AT&T connection in CA generally gets results 3-5 times better than Spectrum (and, again, I'm avoiding the uplink path from Michigan because I already know it's pinched to 30 Mb/s).

    Where this becomes interesting is that Farmington Hills, MI is running a project to string up XG-PON fiber throughout the city. At the same time, Spectrum trucks are roaming around deploying mid-split amplifiers for DOCSIS 4. Ordinarily, I'd be happy to go with whoever wins the race, but these last-mile upgrades don't fill me with a lot of confidence that my most important use case is going to get any better if I stick with Spectrum.

  • Posts: 234 Contributor
    edited April 30

    Kinda the curse of using a VPN, though... you may be circumventing your ISP's BGP policies from your endpoint because of how the proxying through the VPN kicks in.

    For example, I usually go up through NC/VA before heading westward. If I switch to hit Atlanta instead, I may get a better or worse pathway mapped out, because my next hops will extend from Atlanta and Austin instead of Raleigh and Herndon (and it may be a different ISP's BGP lookup at that, so there are multiple reasons different metrics may be used).
