Cosmologically Unique IDs (jasonfantl.com)
lisper 1 hour ago [-]
This analysis is not quite fair. It takes into account locality (i.e. the speed of light) when designing UUID schemes but not when computing the odds of a collision. Collisions only matter if the colliding UUIDs actually come into causal contact with each other after being generated. So just as you have to take locality into account when designing UUID trees, you also have to take it into account when computing the odds of an actual local collision. A naive application of the birthday paradox doesn't work here, because it ignores locality. So a fair calculation of the required size of a random UUID is going to come out a lot smaller than the ~800 bits the article comes up with. I haven't done the math, but I'd be surprised if the actual answer is more than 256 bits.

(Gotta say here that I love HN. It's one of the very few places where a comment that geeky and pedantic can nonetheless be on point. :-)
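
For reference, the naive birthday-bound estimate under discussion can be computed directly; a minimal sketch in Python, where the ID counts are illustrative assumptions rather than figures from the article:

  import math

  def collision_probability(num_ids: int, id_bits: int) -> float:
      # Birthday-bound approximation: p ~= 1 - exp(-n^2 / 2^(bits+1))
      return -math.expm1(-(num_ids * num_ids) / 2.0 ** (id_bits + 1))

  # Illustrative: suppose 10^30 IDs are ever compared against each other.
  for bits in (128, 256, 800):
      p = collision_probability(10**30, bits)
      print(f"{bits:4d}-bit IDs: collision probability ~ {p:.3e}")

lisper's point is that the n that matters is only the number of IDs that can ever come into causal contact, which shrinks the required width considerably.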

u1hcw9nx 26 minutes ago [-]
You must consider both time and locality.

From now until protons have decayed and matter no longer exists is only about 10^56 nanoseconds.

rbanffy 20 minutes ago [-]
If we think of the many worlds interpretation, how many universes will we be making every time we assign a CCUID to something?
antonvs 5 minutes ago [-]
We don't "make" universes in the MWI. The universal wavefunction evolves to include all reachable quantum states. It's deterministic, because it encompasses all allowed possibilities.
Etheryte 16 minutes ago [-]
That's such an odd way to use units. Why would you do 10^56 * 10^-9 seconds?
Sharlin 14 minutes ago [-]
If protons decay. There isn't really any reason to believe they're not stable.
scotty79 16 minutes ago [-]
Proton decay is hypothetical.
rubyn00bie 18 minutes ago [-]
I got a big laugh at the “only” part of that. I do have a sincere question about that number though: isn't time relative? How would we know that number to be true or consistent? My incredibly naive assumption would be that with less matter, time moves faster, sort of accelerating; so as matter “evaporates” the process accelerates and converges on that number (or close to it)?
rini17 13 minutes ago [-]
From real life we know that people prefer to have multiple anonymous IDs, or self-selected handles; either makes fully deterministic generation schemes moot.

Also, network routing requires objects to have multiple addresses.

The physics side of the whole thing is funny too: afaik quantum particles require fungibility, i.e. by doxxing atoms you unavoidably change the behavior of the system.

ekipan 19 minutes ago [-]
I forget the context but the other day I also learned about Snowflake IDs [1] that are apparently used by Twitter, Discord, Instagram, and Mastodon.

Timestamp + random seems like it could be a good tradeoff to reduce the ID sizes and still get reasonable characteristics; I'm surprised the article didn't explore that (but then again “timestamps” are a lot more nebulous at universal scale, I suppose). Just spitballing here, but I wonder if it would be worthwhile to reclaim ten bits of the Snowflake timestamp and use the low 32 bits for a random number: four billion IDs for each second (a rough sketch of that layout follows after the links below).

There's a Tom Scott video [2] that describes YouTube video IDs as 11-digit base-64 random numbers, but I don't see any official documentation about that. At the end he says how many IDs are available, but I don't think he considers collisions via the birthday paradox.

[1]: https://en.wikipedia.org/wiki/Snowflake_ID

[2]: https://youtu.be/gocwRvLhDf8
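
A rough sketch of the timestamp + random layout being spitballed above: a 64-bit ID with roughly 32 bits of whole-second timestamp in the high bits and 32 random bits in the low bits. The field widths and the custom epoch here are assumptions for illustration, not Twitter's actual Snowflake format.

  import secrets
  import time

  TIMESTAMP_BITS = 32   # whole seconds since a custom epoch (assumed width)
  RANDOM_BITS = 32      # low bits of entropy, ~4 billion values per second
  CUSTOM_EPOCH = 1_288_834_974  # seconds; roughly Twitter's 2010 Snowflake epoch

  def make_id(now: float | None = None) -> int:
      # Pack a seconds-resolution timestamp and a random value into 64 bits.
      seconds = int((time.time() if now is None else now) - CUSTOM_EPOCH)
      assert 0 <= seconds < (1 << TIMESTAMP_BITS), "timestamp field overflow"
      return (seconds << RANDOM_BITS) | secrets.randbits(RANDOM_BITS)

  def split_id(ident: int) -> tuple[int, int]:
      # Recover (seconds since epoch, random part) from an ID.
      return ident >> RANDOM_BITS, ident & ((1 << RANDOM_BITS) - 1)

  print(hex(make_id()))

Birthday math still applies within each second, though: with 32 random bits you'd expect a collision once roughly 2^16 (about 65k) IDs are minted in the same second.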

bluecoconut 1 hour ago [-]
Fun read.

One upside of the deterministic schemes is they include provenance/lineage. You can literally “trace up” the history back to the original ID giver.

Kinda has me curious: how much information is required to represent an arbitrary provenance tree/graph on a network of N nodes/objects (entirely via the self-described ID)?

(Thinking out loud in the comment: I guess the worst case is a linear chain, and if you assume the information of the full provenance should be accessible via the ID, that scales as O(N x id_size), so it's quite bad. But assuming the "best case" (any node is expected to be log(N) steps from the root, i.e. a depth of log(N)), it feels like global_id_size = log(N) x local_id_size is roughly the optimal limit, so effectively the size of the global_id grows as log(N)^2? Would that mean that, starting from the 399-bit number, a lower limit for the global_id_size with lineage would be something like (400 bits)^2 ~= 20 kB, because it carries the ordered-local-id provenance information rather than relying on local shared knowledge?)
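
A quick back-of-the-envelope check of that estimate, assuming ~400-bit local IDs and a balanced tree of depth ~400 (i.e. depth ~ log2(N) for N ~ 2^400 objects):

  local_id_bits = 400   # roughly the article's per-object ID size
  tree_depth = 400      # assumed balanced tree: depth ~ log2(N)

  global_id_bits = local_id_bits * tree_depth
  print(global_id_bits, "bits =", global_id_bits // 8 // 1000, "kB")
  # prints: 160000 bits = 20 kB, matching the (400 bits)^2 ~= 20 kB figure above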

montyanne 21 minutes ago [-]
ATProto, which underlies the BlueSky social network, is similar. It uses a content-addressed DAG.

Each “post” has a CID, which is a cryptographic hash of the data. To “prove” ownership of the post, a witness hash is sent along that can be verified all the way up the tree to the repo root hash, which is signed with the root key.

Neat way of having data say “here’s the data, and if you care to verify it, here’s an MST”.
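
A minimal illustration of plain content addressing in that spirit, assuming SHA-256 over a canonical JSON serialization; this is not ATProto's actual CID encoding, just the general idea of deriving an ID from the content itself.

  import hashlib
  import json

  def content_id(record: dict) -> str:
      # Hash a canonical serialization of the record to get a stable ID.
      canonical = json.dumps(record, sort_keys=True, separators=(",", ":"))
      return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

  post = {"text": "hello", "createdAt": "2024-01-01T00:00:00Z"}
  print(content_id(post))  # identical content always yields the identical ID

The Merkle-tree part is what lets a single signed root hash vouch for every record underneath it.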

adityaathalye 1 hour ago [-]
Just past page 281 of Becky Chambers's delightful “The Galaxy, and the Ground Within”.

  Received Message
  Encryption: 0
  From: GC Transit Authority --- Gora System (path: 487-45411-479-4)
  To: Ooli Oht Ouloo (path: 5787-598-66)
  Subject: URGENT UPDATE
Man I love the series.

Looks like this multispecies universe has a centrally-agreed-upon path addressing system.

pavel_lishin 29 minutes ago [-]
You should check out Vernor Vinge's A Fire Upon The Deep for more fun examples of how intra-galactic communication would be labeled, with routes & such.
Octoth0rpe 1 hour ago [-]
From this book in particular, I love the scene with everyone sitting around talking about how horrifying the concept of cheese is. The rest of the quartet is wonderful, with the second book (A Closed and Common Orbit) being the MVP IMO.
j-pb 1 hour ago [-]
Great insights and visualisations!

I built a whole database around the idea of using the smallest plausible random identifiers, because that seems to be the only "golden disk" we have for universal communication, except for maybe some convergence property of latent spaces with large enough embodied foundation models.

It's weird that they are really underappreciated in the scientific data management and library science communities; many issues that currently require large organisations could have been solved with just better identifiers.

To me the ship of Theseus question is about extrinsic (random / named) identifiers vs. intrinsic (hash / embedding) identifiers.

https://triblespace.github.io/triblespace-rs/deep-dive/ident...

https://triblespace.github.io/triblespace-rs/deep-dive/tribl...

manofmanysmiles 38 minutes ago [-]
I'd propose using our current view of physical reality to own a subset of the UUID, plus a version field in case new physics is discovered.

10-20 bits: version/epoch

10-20 bits: cosmic region

40 bits: galaxy ID

40 bits: stellar/planetary address

64 bits: local timestamp

This avoids the potentially pathological long chain of provenance, and also encodes coordinates into it.

Every billion years or so it probably makes sense to re-partition.
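
A packing sketch of that layout, with concrete widths picked arbitrarily within the ranges proposed above (16 + 16 + 40 + 40 + 64 = 176 bits total); the field names and example values are placeholders:

  import time

  # (field name, width in bits); widths chosen within the ranges above
  FIELDS = [
      ("version", 16),
      ("cosmic_region", 16),
      ("galaxy_id", 40),
      ("stellar_address", 40),
      ("local_timestamp", 64),
  ]

  def pack(values: dict) -> int:
      # Concatenate the fields, most significant first, into one integer ID.
      ident = 0
      for name, width in FIELDS:
          value = values[name]
          assert 0 <= value < (1 << width), f"{name} does not fit in {width} bits"
          ident = (ident << width) | value
      return ident

  example = {
      "version": 1,
      "cosmic_region": 7,           # placeholder region number
      "galaxy_id": 42,              # placeholder galaxy number
      "stellar_address": 3,         # placeholder star/planet number
      "local_timestamp": time.time_ns(),
  }
  print(hex(pack(example)))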

rbanffy 18 minutes ago [-]
As for coordinates, don’t forget galaxies are clouds of stars flowing around and interacting with each other.
dylan604 55 seconds ago [-]
That's the problem with address-type systems: they expect the object at that location to always be at that location. How do you encode the orbital speed and orbital radius, not just for the object but also for the object it is orbiting (which needs the same info, since it is also in motion), and then that object's parent galaxy's motion? Ugh, now I need a nap to calm down a bit.
alex_tech92 36 minutes ago [-]
It is interesting how much of our infrastructure relies on the assumption that 'close enough' is actually 'good enough' for uniqueness. When we move from UUIDs to things like ULIDs or Snowflake IDs, we are really just trading off coordination cost for a slightly higher collision risk that we will likely never hit in several lifetimes. Thinking about it on a 'cosmological' scale makes you realize what a luxury local generation without a central authority really is. It is that tiny bit of entropy that keeps the whole distributed system from grinding to a halt.
ktpsns 1 hour ago [-]
Quite offtopic, but: I've found UUIDs to be overused in many cases. People then abuse them to store data, making them effectively “speaking IDs” or “multi-column indices”.
jmole 40 minutes ago [-]
Unless it's a key that needs to be sortable (e.g. insertion order) or a metric/descriptor of some kind, I'm not sure why a UUID would be overused or inappropriate.
factotvm 34 minutes ago [-]
> In order to fix this, we might start sending out satellites in every direction

Minor correction: Satellites don't go in every direction; they orbit. Probes or spaceships are more appropriate terms.

fluoridation 26 minutes ago [-]
Maybe they meant at every inclination. ;)