belter 3 days ago

Let's not forget the core shakiness and trust-based nature of BGP will be our ultimate safeguard against an AGI takeover.

Lack of authentication, inherent vulnerabilities to route hijacks and leaks, and reliance on mutual trust rather than verification... If AGI tries to assert global dominance, BGP will be the fail-safe 'red button' to prevent an AI apocalypse.

  • Vecr 3 days ago

    Is that actually better than just cutting power to all Internet infrastructure?

  • delfinom 3 days ago

    RPKI/ROA uptake is now at 50%, which is a pretty significant increase in just 2 years.

    https://radar.cloudflare.com/routing

    Unfortunately BGP will probably be mostly solid in a few more years. At least solid enough among countries with caring ISPs. Prime for the AGI pickings.

    • iscoelho 3 days ago

      RPKI/ROA does not solve BGP hijacking [1]: "Even with RPKI validation enforced, a BGP actor could still impersonate your origin AS and advertise your BGP route through a malicious router configuration."

      It also would not have prevented the bulk of this issue (the next AS path was "1031 262504 267613 13335", and it would have been accepted by RPKI), so I am confused why Cloudflare claims it to be the solution. Perhaps Cloudflare's own engineers forgot that RPKI is still insecure?
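
      To make that concrete, here is a minimal sketch of what origin validation actually checks (the ROA table and helper below are purely illustrative, not Cloudflare's real data): only the prefix and the last AS in the path are compared against the ROA, so a forged path that merely ends in the legitimate origin AS still validates.

            import ipaddress

            # Illustrative ROA table: prefix -> (authorized origin ASN, max prefix length)
            ROAS = {
                "1.1.1.0/24": (13335, 24),
            }

            def rpki_origin_valid(prefix, as_path):
                """Return True if an announcement passes RPKI origin validation."""
                origin = as_path[-1]          # only the last (origin) AS is checked
                net = ipaddress.ip_network(prefix)
                for roa_prefix, (asn, maxlen) in ROAS.items():
                    roa_net = ipaddress.ip_network(roa_prefix)
                    if net.subnet_of(roa_net) and net.prefixlen <= maxlen:
                        return origin == asn  # the rest of the path is never verified
                return False                  # no covering ROA ("not found", simplified)

            # A forged path that ends in the legitimate origin AS still passes:
            print(rpki_origin_valid("1.1.1.0/24", [1031, 262504, 267613, 13335]))  # True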

      [1] https://blog.cloudflare.com/rpki-details

magicalhippo 3 days ago

Was hit by this. Took me some time to figure out what was going on, thanks to PiHole caching entries.

Swapped to Quad9[1], been using them since.

Was debating setting up my own recursive resolver, but the privacy aspect seemed less than ideal. Damned if you do, damned if you don't, kinda.

[1]: https://www.quad9.net/

  • papascrubs 3 days ago

    Did 1.0.0.1 not work? Do people not use the backup resolvers that providers offer? I didn't even know there was an outage. Cloudflare is much faster than Quad9, at least for my connection. Talking like sub-10 ms responses vs 30+ ms for other providers.

    I use Technitium and legacy DNS on my internal network. I have Technitium set up to use DoH to Cloudflare for all the requests it makes. Best of both worlds IMO.

    • magicalhippo 3 days ago

      I admit I hadn't set up 1.0.0.1, just the secondary IPv6 address, as backup. Neither worked.

      I run Pi-hole, haven't noticed any significant difference in speed. But good point, will see if I can find that DNS benchmark tool again.

      • magicalhippo 2 days ago

        Just as a follow-up, using this[1] tool it seems that from my location Quad9 is about as fast as Cloudflare. Both faster than Google. Now, I'm in Norway and Quad9 is in Switzerland, so that might be related.

              1.  1.  1.  1 |  Min  |  Avg  |  Max  |Std.Dev|Reliab%|
            ----------------+-------+-------+-------+-------+-------+
            - Cached Name   | 0.010 | 0.012 | 0.014 | 0.001 | 100.0 |
            - Uncached Name | 0.010 | 0.046 | 0.249 | 0.056 | 100.0 |
            - DotCom Lookup | 0.009 | 0.014 | 0.021 | 0.002 | 100.0 |
            ---<-------->---+-------+-------+-------+-------+-------+
                               one.one.one.one
                              CLOUDFLARENET, US
        
        
              9.  9.  9.  9 |  Min  |  Avg  |  Max  |Std.Dev|Reliab%|
            ----------------+-------+-------+-------+-------+-------+
            - Cached Name   | 0.009 | 0.012 | 0.015 | 0.001 | 100.0 |
            - Uncached Name | 0.013 | 0.066 | 0.286 | 0.074 | 100.0 |
            - DotCom Lookup | 0.029 | 0.042 | 0.060 | 0.012 | 100.0 |
            ---<-------->---+-------+-------+-------+-------+-------+
                               dns9.quad9.net
                               QUAD9-AS-1, CH
        
        [1]: https://www.grc.com/dns/benchmark.htm
  • kayson 3 days ago

    What's your privacy concern with a recursive resolver? If you're not using DoH or DoT, your ISP can see everything anyways.

    • magicalhippo 3 days ago

      I live in a country where I'm not overly concerned about my ISP.

      But yeah, it was more that the privacy implications are unclear to me.

  • lsllc 3 days ago

    Same here, except with an EdgeRouter (Ubiquiti) doing DNS caching.

lopkeny12ko 3 days ago

This article does not explain why AS267613/Eletronet started advertising 1.1.1.1/32.

  • iscoelho 3 days ago

    Considering their 1.1.1.1/32 advertisement was an RTBH (remotely triggered blackhole), it is likely they advertised the route automatically in response to a DDoS.

    It is common for ISPs to blackhole (null-route) an IP address that is experiencing a DDoS to protect the rest of the network. In this case, they were learning the route from a peer (not a customer) and should not have honored an RTBH under any circumstance, but that boils down to a misconfiguration.
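
    Roughly, the decision an edge router should be making looks like the sketch below (Python pseudo-logic rather than real router config; 65535:666 is the well-known BLACKHOLE community from RFC 7999, and the session classification is presumably what was misconfigured here):

          BLACKHOLE = (65535, 666)   # RFC 7999 well-known BLACKHOLE community

          def accept_blackhole(communities, session_type):
              """Decide whether to install a blackhole (null-route) for an announcement.

              session_type: 'customer', 'peer' or 'transit' -- who we learned it from.
              """
              if BLACKHOLE not in communities:
                  return False
              # Only a customer may ask us to blackhole traffic to its own addresses.
              # Honoring a blackhole learned from a peer lets a third party
              # null-route someone else's IP space through your network.
              return session_type == "customer"

          print(accept_blackhole({(65535, 666)}, "customer"))  # True: legitimate RTBH
          print(accept_blackhole({(65535, 666)}, "peer"))      # False: reject it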

maybeben 3 days ago

maybe everyone in the world using the same handful of resolver addresses isn't so great for fault tolerance

  • cryptonector 3 days ago

    DJB was right. We should have used Curve25519 for encryption in DNS itself. Then we wouldn't have needed DoT or DoH, and we'd still have had a pretty good measure of privacy relative to ISPs and eavesdroppers.

cangeroo 3 days ago

Could DNS responses have been hijacked as well?

Edit: Could this have been used to hijack/create TLS certificates?

  • georgyo 3 days ago

    Yes, unless you have some sort of protection.

    Protection could be validating DNSSEC (which you most likely aren't doing),

    Or using DoH (DNS over HTTPS) or DoT (DNS over TLS)

    • terom 3 days ago

      I don't think DNSSEC would help in the common case of non-validating stub resolvers querying a public resolver. My understanding is that the DNS query response from a DNSSEC-validating public recursive resolver doesn't contain the information required for the stub client to validate it, only a single AD bit.
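
      You can see what the stub actually receives with dnspython, for example (a quick sketch; 1.1.1.1 and cloudflare.com are just convenient choices for a validating resolver and a signed zone): unless the client asks for and checks the RRSIG chain itself, all it gets is a single flag set by the recursor, which it has to take on trust.

            import dns.flags
            import dns.resolver   # pip install dnspython

            r = dns.resolver.Resolver(configure=False)
            r.nameservers = ["1.1.1.1"]
            r.use_edns(0, dns.flags.DO, 1232)   # signal DNSSEC-awareness to the recursor

            answer = r.resolve("cloudflare.com", "A")   # a signed zone

            # All a non-validating stub sees is this one bit, set by the recursor.
            # It proves nothing unless you trust 1.1.1.1 and the path to it.
            print("AD (authenticated data):", bool(answer.response.flags & dns.flags.AD))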

  • kevindamm 3 days ago

    Depends, do you have DNSSEC enabled?

    • tptacek 3 days ago

      DNSSEC doesn't help here. It doesn't run between stub resolvers and recursers like 1.1.1.1.

    • mort96 3 days ago

      Probably not. I can't remember the last time I looked at 'resolvectl' output and saw anything other than "DNSSEC: no" on any system, so I assume it mostly just doesn't exist in practice.

    • nightpool 3 days ago

      More practically: do you have DoH enabled? If you're using Chrome, the answer is probably yes.

aaron695 3 days ago

Does anyone have a link to a blog that explains why I can't have every DNS entry in a database at home?

I don't want a straight-up 'you could never get this 1% because of X'. What percentage could I get, and why is this not easy?

  • DanAtC 3 days ago

    As others have said, DNS is distributed/hierarchical/delegated. But if you wanted to try...

    First, grab the root zone file which points to all the TLD nameservers: https://www.internic.net/domain/root.zone

    Next, get zone files for the TLDs themselves. Not every TLD makes that freely or easily available, but https://github.com/jschauma/tld-zoneinfo is an excellent resource.

    The author of the above repo runs https://netmeister.org/tldstats/ which will give you an idea of scale; they're able to get 98.89% of all TLDs.

    (Correction: zone files for ccTLDs are mostly unavailable. That 98.89% is only domain count coverage, not zone file coverage.)
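
    If you want a feel for the top of the hierarchy, something like the sketch below works (the root zone is only a couple of megabytes and lists the nameservers for every TLD; the parsing is simplified):

          import urllib.request
          from collections import defaultdict

          URL = "https://www.internic.net/domain/root.zone"

          tld_ns = defaultdict(set)
          with urllib.request.urlopen(URL) as resp:
              for line in resp.read().decode().splitlines():
                  fields = line.split()
                  # delegation lines look like: "com.  172800  IN  NS  a.gtld-servers.net."
                  if len(fields) >= 5 and fields[3] == "NS" and fields[0] != ".":
                      tld_ns[fields[0].rstrip(".")].add(fields[4])

          print(len(tld_ns), "TLDs delegated in the root zone")
          print("com is served by:", sorted(tld_ns["com"]))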

    • Avamander 3 days ago

      I guess this alone might speed things up a bit, if cached, especially with DNSSEC. I wonder if anyone has attempted it.

  • djbusby 3 days ago

    Well, the database might be large. Millions of domain names, each with like a dozen (or more) records (maybe 1 KB each × 30 million domains ≈ 30 GB).

    But mostly it's that DNS changes, a lot: sites move, geo records get updated, etc. Getting those changes to every household would be a lot of traffic.

    Before DNS, folks shared a common /etc/hosts file, and quickly determined there had to be a better way.

    So: DNS, with a dozen root servers that stay fresh and caching downstream.

    • ranger_danger 3 days ago

      Also due to Anycast, DNS servers can/do respond with different addresses depending on who requests it and from where.

    • fragmede 3 days ago

      A 30 GiB file is nothing these days.

    • tlb 3 days ago

      Large for the old days, but a billion names is no longer a big file compared to a AAA game.

  • thedougd 3 days ago

    1) It's distributed. There's not a single database with all the DNS records in existence. Each zone is responsible for hosting and serving their own records.

    2) The TTL for a DNS record can be 0.

    Edit:

    3) AXFR is blocked on 'most' zones. You would need this to know all the records in a zone.
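
    For reference, an AXFR attempt with dnspython looks something like the sketch below (ns.example.net and example.com are placeholders; against nearly any real zone the transfer is simply refused):

          import dns.query
          import dns.zone   # pip install dnspython

          try:
              # Placeholders -- substitute a nameserver and zone you control.
              zone = dns.zone.from_xfr(dns.query.xfr("ns.example.net", "example.com"))
              for name, node in zone.nodes.items():
                  print(name, node.to_text(name))
          except Exception as exc:
              # The usual outcome: the server answers REFUSED or drops the connection.
              print("AXFR refused or failed:", exc)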

    • Avamander 3 days ago

      Zero TTL is insane though and most caching resolvers allow enforcing a minimum.

      • ectospheno 3 days ago

        Last year I started bumping up min ttl on dns caches a bit each week. Stopped at one hour. No one ever complained.

  • myrmidon 3 days ago

    Simply because those entries are not static.

    When Jane Deer creates a new blog at jane-deer.com, your local database would not know which server to contact to resolve it, and nobody is gonna notify the database in your home.

    So you would have to contact some other kind of database outside your home to find out how to resolve that domain--which you might have already realized sounds a lot like a (non-local) DNS server...

  • georgyo 3 days ago

    There are too many reasons to list. But the primary reason is that DNS is a hierarchy of DNS services hosted by many distinct groups. And DNS records are individually queryable in public, but the full contents of a zone are considered private. Almost all servers in the world disable AXFR for a variety of reasons.

    Beyond that, DNS servers are not just a "database" of records; they are services that can return different results depending on who queried. And this is quite common with CDNs.

    There are many more reasons, but let's talk about what you could do.

    You could download a list of all registered domains, and then query all of them for the most common records. It would take hours, not include 99.9 percent of all records, and be out of date the second it completes. With this database, you could visit some websites, but many host services on subdomains and you have no way to dynamically get that list when you're populating the database.
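
    A toy version of that snapshot approach, just to show the shape of it (the domain list here is a three-entry stand-in for the hundreds of millions you'd actually need, using dnspython):

          import dns.resolver   # pip install dnspython

          # Stand-in for "a list of all registered domains".
          domains = ["example.com", "wikipedia.org", "cloudflare.com"]

          r = dns.resolver.Resolver()
          snapshot = {}
          for domain in domains:
              try:
                  snapshot[domain] = [rr.address for rr in r.resolve(domain, "A")]
              except Exception:
                  snapshot[domain] = []   # NXDOMAIN, timeout, etc.

          # Stale as soon as it finishes, and it knows nothing about subdomains.
          print(snapshot)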

  • chabad360 3 days ago

    The gist is because DNS gets updated. Your computer already has a local cache of the DNS responses it receives and relies on them until each entry expires (as set by the TTL field). If you want to run a local resolver, you can, but it wouldn't solve this problem, because you'll still need to update your cache regularly, and guess how you do that? With DNS.

    The next question from there might be: why do DNS responses change so frequently? I don't have an exact statistic, but most of them don't (I would guess 90%), so you could probably cache those indefinitely. For the ones that do, iirc, it's usually to allow quicker changes (esp. if the site is under active development), load balancing, global distribution, etc.

    I hope that's helpful.

  • Bjartr 3 days ago

    How would you update that database? Who would you ask for the updates?

    IPs change out from under domains constantly. How would knowledge of those changes make its way from the owner of the domain to your local database?

    If you wanted a single source of truth to receive and distribute that info, that would be a single point of failure and come with lots of technical challenges. And it's less "you could never get this 1%" and more "you will never need 99.9% of this".

    ------------------------------------------------

    However, that's all entirely unrelated to this Cloudflare incident, which involved no domain names at all, only raw IP addresses. This was an incident involving BGP, which is how the various internet backbone providers know which of the other networks they connect to can handle traffic for a given IP address, and how changes to that information propagate.

  • patmorgan23 3 days ago

    Because you don't own the data and it changes. Also, lots of people do DNS-based load balancing.