Love the format, and super cool to see a benchmark that so clearly shows DRAM refresh stalls, especially avoiding them via reverse engineering the channel layout! Ran it on my 9950X3D machine with dual-channel DDR5 and saw clear spikes from 70ns to 330ns every 15us or so.
The hedging technique is a cool demo too, but I’m not sure it’s practical.
At a high level it’s a bit contradictory; trying to reduce the tail latency of cold reads by doubling the cache footprint makes every other read even colder.
I understand the premise is “data larger than cache” given the clflush, but even then you’re spending 2x the memory bandwidth and cache pressure to shave ~250ns off spikes that only happen once every 15us. There’s just not a realistic scenario where that helps.
HFT especially is significantly more complex than a huge lookup table in DRAM. In the time you spend doing a handful of 70ns DRAM reads, your competitor has done hundreds of reads from cache and a bunch of math. It’s just far better to work with what you can fit in cache, and to shrink what doesn’t fit as much as possible.
Lramseyer 11 hours ago [-]
Another point about HFT - They're mostly using FPGAs (some use custom silicon) which means that they have much tighter control over how DRAM is accessed and how the memory controller is configured. They could implement this in hardware if they really need to, but it wouldn't be at the OS level.
strongpigeon 3 hours ago [-]
> At a high level it’s a bit contradictory; trying to reduce the tail latency of cold reads by doubling the cache footprint makes every other read even colder.
That’s my main hang up as well. On one hand this is undeniably cool work, but on the other, efficient cache usage is how you maximize throughput.
This optimizes for (narrow) tail latency, but I do wonder at what performance cost. I would be super interested in hearing about real world use cases.
deegu 2 hours ago [-]
This might be useful in a case where a small lookup or similar is often pushed out from cache such that lookups are usually cold. Yet the lookup data might be small enough not to cause issues with cache pollution, increased bandwidth, or memory consumption.
foltik 14 minutes ago [-]
In this case it’s better to asynchronously bring the data into the cache, which you can do with a prefetch shortly before the read.
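A minimal sketch of that pattern, assuming a hypothetical lookup table (the names here are illustrative; `__builtin_prefetch` is the GCC/Clang intrinsic):

```cpp
#include <cstddef>
#include <cstdint>

// Hypothetical hot loop: while summing the current entry, ask the core to
// start pulling the *next* entry toward cache. By the time we read it, the
// DRAM round trip (refresh stall included) has overlapped with useful work.
uint64_t hot_sum(const uint64_t* table, const size_t* idx, size_t n) {
    uint64_t sum = 0;
    for (size_t i = 0; i < n; ++i) {
        if (i + 1 < n)
            __builtin_prefetch(&table[idx[i + 1]], /*rw=*/0, /*locality=*/1);
        sum += table[idx[i]];
    }
    return sum;
}
```

The prefetch is only a hint; the win depends on having enough independent work between the hint and the demand load.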
josephg 9 hours ago [-]
It could be massively improved with a special CPU instruction for racing dram reads. That might make it actually useful for real applications. As it is, the threading model she used here would make it incredibly difficult to use this in a real program.
foltik 4 hours ago [-]
There’s no point racing DRAM reads explicitly. Refreshes are infrequent and the penalty is like 5x on an already fast operation, 1% of the time.
What’s better is to “race” against cache, which is 100x faster than DRAM. CPUs already do this for independent loads via out-of-order execution. While one load is stalled waiting for DRAM, another can hit the cache and do some compute in parallel. It’s all already handled at the microarchitectural level.
jeffbee 3 hours ago [-]
There are already systems that do this in hardware. Any system that has memory-mirroring RAS features can do this, notably IBM zEnterprise hardware, from the company that this video promoter claims to be one-upping.
shiftingleft 2 hours ago [-]
I don't think the memory mirroring features available today allow you to race two DRAM accesses and use the result that returns earlier?
jeffbee 1 hour ago [-]
The memory controller sends the read to the DIMM that is not refreshing. It is invisible to software, except for the side-effect of having better performance.
zozbot234 7 hours ago [-]
> clear spikes from 70ns to 330ns
Isn't that rather trivial though as a source of tail latency? There are much worse spikes coming from other sources, e.g. power management states within the CPU and possibly other hardware. At the end of the day, this is why simple microcontrollers are still preferred for hard RT workloads. This work doesn't change that in any way.
foltik 4 hours ago [-]
Yeah exactly, and it’s absolutely dwarfed by the tail latency of going to DRAM in the first place. A cache miss is a 100x tail event vs. an L1 hit. The refresh stall is a further 5x on top of that, which barely registers if you’re already eating the DRAM cost.
formerly_proven 11 hours ago [-]
On most RAM tREF can be increased a lot from the default, at least if kept somewhat cool.
jeffbee 3 hours ago [-]
It is not only not practical, it is a completely useless technique. I got downvoted to negative infinity for mentioning this, but I guess I am the only person who actually read the benchmark. The reason the technique "works" in the benchmark is that all the threads run free and just record their timestamps. The winner is decided post hoc. This behavior is utterly pointless for real systems. In a real system you need to decide the winner online, which means the winner needs to signal somehow that it has won, and suppress the side effects of the losers, a multi-core coordination problem that wipes out most of the benefit of the tail improvement but, more importantly, also massively worsens the median latency.
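For illustration, the kind of online coordination being described might look like this (hypothetical names; a minimal sketch, not code from the benchmark):

```cpp
#include <atomic>
#include <cstdint>

// Each racer writes its result into a private slot, then tries to claim the
// win with a compare-exchange on a shared atomic. That CAS is exactly the
// cross-core traffic that shows up on the median-latency path.
struct HedgedSlot {
    std::atomic<int> winner{-1};   // -1 = undecided
    uint64_t results[2] = {0, 0};  // one private slot per racer

    // Called by a racer when its read completes; true if it won the race.
    bool publish(int racer_id, uint64_t v) {
        results[racer_id] = v;     // private slot: no data race
        int expected = -1;
        // release: make the slot store visible to whoever observes `winner`
        return winner.compare_exchange_strong(expected, racer_id,
                                              std::memory_order_release);
    }

    // Spin until some racer has won, then return its result.
    uint64_t wait_result() const {
        int w;
        while ((w = winner.load(std::memory_order_acquire)) < 0) { /* spin */ }
        return results[w];
    }
};
```

The losers learn they lost only after their read already completed, so suppressing their side effects is on them; the shared `winner` line ping-pongs between cores on every hedged access.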
dang 3 hours ago [-]
You got downvoted for being an asshole, and if you continue to be an asshole on HN we are going to ban you. I suppose you don't believe this because we haven't done it yet even after countless warnings:
The reason we haven't banned you yet is because you obviously know a lot of things that are of interest to the community. That's good. But the damage you cause here by routinely poisoning the threads exceeds the goodness that you add by sharing information. This is not going to last, so if you want not to be banned on HN, please fix it.
A more accurate but less inspiring title would be:
RAM Has a Design Tradeoff from 1966. I made another one on top.
The first tradeoff, of 6x fewer transistors for some extra latency, is immensely beneficial. The second, of reducing some of that extra latency for extra copies of static data, is beneficial only to some extremely niche applications. Still a very educational video about modern memory architecture.
[EDIT: accidental extra copy of this comment deleted]
kitku 8 hours ago [-]
It could be a display bug on my side, but you posted this exact comment twice.
cryptonym 8 hours ago [-]
He tried to reduce latency
MisterTea 1 hours ago [-]
This comment was the faster of the two comments and therefore won. The other was simply discarded.
kreelman 15 hours ago [-]
This is very much worth watching. It is a tour de force.
Laurie does an amazing job of reimagining Google's strange job optimisation technique (for jobs running on hard disk storage) that uses 2 CPUs to do the same job. The technique simply takes the result of the machine that finishes it first, discarding the slower job's results... It seems expensive in resources, but it works and allows high priority tasks to run optimally.
Laurie re-imagines this process but for RAM!! In doing this she needs to deal with Cores, RAM channels and other relatively undocumented CPU memory management features.
She was even able to work out various undocumented CPU/RAM settings by using her tool to find where timing differences exposed various CPU settings.
You can see her having so much fun, doing cool victory dances as she works out ways of getting around each of the issues that she finds.
The experimentation, explanation and graphing of results is fantastic. Amazing stuff. Perhaps someone will use this somewhere?
As mentioned in the YT comments, the work done here is probably a Master's degree's worth of work, experimentation and documentation.
Go Laurie!
throwaway81523 11 hours ago [-]
This is a 54 minute video. I watched about 3 minutes and it seemed like some potentially interesting info wrapped in useless visuals. I thought about downloading and reading the transcript (that's faster than watching videos), but it seems to me that it's another video that would be much better as a blog post. Could someone summarize in a sentence or two? Yes we know about the refresh interval. What is the bypass?
"Tailslayer is a C++ library that reduces tail latency in RAM reads caused by DRAM refresh stalls.
"It replicates data across multiple, independent DRAM channels with uncorrelated refresh schedules, using (undocumented!) channel scrambling offsets that work on AMD, Intel, and Graviton. Once a request comes in, Tailslayer issues hedged reads across all replicas, allowing the work to be performed on whichever result responds first."
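Not Tailslayer's actual API, but the hedged-read core can be sketched roughly like this (replica placement on distinct channels is simply assumed; deriving the channel mapping is the hard part the video covers):

```cpp
#include <atomic>
#include <cstdint>
#include <thread>

// Two copies of the same value, assumed to sit on DRAM channels with
// uncorrelated refresh schedules. Two threads race the loads; the first to
// finish claims the win, and the caller uses that result.
uint64_t hedged_read(const volatile uint64_t* a, const volatile uint64_t* b) {
    std::atomic<int> winner{-1};
    uint64_t out[2] = {0, 0};
    auto race = [&](int id, const volatile uint64_t* p) {
        uint64_t v = *p;  // the DRAM read being hedged
        out[id] = v;
        int expected = -1;
        winner.compare_exchange_strong(expected, id, std::memory_order_release);
    };
    std::thread t0(race, 0, a), t1(race, 1, b);
    t0.join();
    t1.join();
    return out[winner.load(std::memory_order_acquire)];
}
```

In the real library the racing threads are long-lived and pinned to cores; spawning threads per read, as here, would cost far more than the stall it hides.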
scrollop 5 hours ago [-]
FYI if you have a video you can't be bothered watching but would like to know the details you have 2 options that I use (and others, of course):
1. Throw the video into notebooklm - it gives transcripts of all youtube videos (AFAIK) - go to sources on the left and press the arrow key.
Ask notebooklm to give you a summary, discuss anything, etc.
2. Notice that YouTube now has a little diamond icon and "Ask" next to it, between the Share icon and Save icon. This brings up Gemini and you can ask questions about the video (it has no internet access). This may be premium only. I still prefer Claude for general queries over Gemini.
kelsolaar 9 hours ago [-]
The video could be a bit shorter, and some of the goofiness might not please the most pressed people, but that is also what makes it fresh and stand out.
JuniperMesos 6 hours ago [-]
There was nothing goofy about the NERV-logo coffee mug, that was extremely serious business.
fc417fc802 10 hours ago [-]
> using (undocumented!) channel scrambling offsets that works on AMD, Intel, and Graviton
Seems odd to me that all three architectures implement this yet all three leave it undocumented. Is it intended as some sort of debug functionality or what?
alex_duf 9 hours ago [-]
it's explained in the video, and there's no way I'll be explaining it better than her
em-bee 8 hours ago [-]
you could however link to the timestamp where that particular explanation starts. i am afraid i don't have time to watch a one hour video just to satisfy my curiosity.
_flux 6 hours ago [-]
I've found Gemini useful in extracting timestamps for particular spots in videos. Presumably it works with transcriptions, given how fast it is.
The actual explanation starts a couple minutes later, around https://youtu.be/KKbgulTp3FE?t=1553. The short explanation is performance (essentially load balancing against multiple RAM banks for large sequential RAM accesses), combined with a security-via-obscurity layer of defense against rowhammer.
satvikpendem 11 hours ago [-]
Just use the Ask button on YouTube videos to summarize, that's what it's for.
jasode 8 hours ago [-]
>Just use the Ask button on YouTube videos to summarize,
For anyone confused because they don't see the "Ask" button between the Share and Bookmark buttons...
It looks like you have to be signed-in to Youtube to see it. I always browse Youtube in incognito mode so I never saw the Ask button.
Or give the video to notebooklm - you can also get the transcript (unformatted) using this technique
satvikpendem 4 hours ago [-]
If you just want the transcript, there is a Show Transcript button in the video description.
dspillett 8 hours ago [-]
Not complaining about the particular presenter here: this is an interesting video with some decent content, I don't find the presentation style overly irritating, and it documents a lot of work that has obviously been done experimenting in order to get the end result (rather than just summarising someone else's work). Such a goofy elongated style, infuriating if you are looking for quick hard information, is practically required in order to drive wider interest in the channel.
But the “ask the LLM” thing is a sign of how off kilter information passing has become in the current world. A lot of stuff is packaged deliberately inefficiently because that is the way to monetise it, or sometimes just to game the searching & recommendation systems so it gets out to potentially interested people at all, then we are encouraged to use a computationally expensive process to summarise that to distil the information back out.
MS's documentation for large chunks of Azure is that way, but with even less excuse (they aren't a content creator needing to drive interest by being a quirky presenter as well as a potential information source). Instead of telling me to ask Copilot to guess what I need to know, why not write some good documentation that you can reference directly (or that I can search through)? Heck, use Copilot to draft that documentation if you want to (but please have humans review the result for hallucinations, missed parts, and other inaccuracies, before publishing).
gosub100 6 hours ago [-]
The video definitely wouldn't be over 50m if she was targeting views. 11m -15m is where you catch a lot of people repeating and bloviating 3m of content to hit that sweet spot of the algorithm. It's sad you can't appreciate when someone puts passion into a project.
This is the damage AI does to society. It robs talented people of appreciation. A phenomenal singer? Nah she just uses auto tune obviously. Great speech? Nah obviously LLM helped. Besides I don't have time to read it anyway. All I want is the summary.
dspillett 3 hours ago [-]
> It's sad you can't appreciate when someone puts passion into a project.
It is sad that reading comprehension is dropping such that you interpreted my comment that way.
satvikpendem 4 hours ago [-]
Yes, I do want the summary because my time is (also) valuable. There is a reason why book covers have synopses, to figure out whether it's worth reading the book in the first place.
svrtknst 10 hours ago [-]
Unnecessarily negative imo.
I like the video because I cant read a blog post in the background while doing other stuff, and I like Gadget Hackwrench narrating semi-obscure CS topics lol
fc417fc802 10 hours ago [-]
> I cant read a blog post in the background
You can consume technical content in the background?
saidnooneever 9 hours ago [-]
this is a thing people do. convince themselves they can consume technical content subconsciously. it's not how the brain works though. it will just give you the idea you are following something.
em-bee 8 hours ago [-]
not all technical content is the same, or has the same level of importance. this video does not introduce anything that i need to be able to replicate in my work, so i don't need to catch every detail of it, just grasp the basic concepts and reasons for doing something.
vel0city 4 hours ago [-]
Lots of people will have a show on or something while they're cooking or cleaning or doing other things. Is it worse for it to be interesting technical content with fun other stuff thrown in than if it was an episode of Friends or Frasier or Iron Chef or 9-1-1: Lone Star or The Price is Right?
I guess I'm only allowed to have The Masked Singer on while I make dinner.
em-bee 8 hours ago [-]
if your foreground work doesn't occupy your brain, why not?
vintermann 7 hours ago [-]
Because I prefer not to think about the hair I'm removing from my shower drain?
derbOac 7 hours ago [-]
FWIW, I like her videos but I usually prefer essays or blog posts in general as they're easier to scan and process at my own rate. It's not about this particular video, it's about videos in general.
xpct 3 hours ago [-]
I get a similar feeling when friends send me 2-minute+ Instagram reels; it's as if my brain can't engage with the content. I'd much rather read a few paragraphs about the topic, and it'd probably take less time too.
Cthulhu_ 6 hours ago [-]
Same; thanks to modern technology, videos can be transcribed and translated into blog posts automatically though. I wish that was a default and / or easier to find though.
For years I've been thinking "I should watch the WWDC videos because there's a lot of Really Important Information" in there, but... they're videos. In general I find that I can't pay attention to spoken word (videos, presentations, meetings) that contain important information, probably because processing it costs a lot more energy than reading.
But then I tune out / fall asleep when trying to read long content too, lmao. Glad I never did university or do uni level work.
gosub100 6 hours ago [-]
Your comment was several paragraphs, and I am busy so I can't read it all. Can you summarize what you are asking for, I might be able to help later.
YesThatTom2 5 hours ago [-]
[flagged]
gopalv 14 hours ago [-]
>> It replicates data across multiple, independent DRAM channels with uncorrelated refresh schedules
This is the sort of thing which was done before in a world where there was NUMA, but that is easy. Just taskset and mbind your way around it to keep your copies in both places.
The crazy part of what she's done is how to determine that the two copies don't get hit by refresh cycles at the same time.
Particularly by experimenting on something proprietary like Graviton.
rockskon 13 hours ago [-]
She determines that by having three copies. Or four. Or eight.
Tis just probabilities and unlikelihood of hitting a refresh cycle across that many memory channels all at once.
GeneralMayhem 11 hours ago [-]
Right, but the impressive part is finding addresses that are actually on different memory channels.
kzrdude 8 hours ago [-]
Surprising to me that two memory channels are separated by as little as 256 bytes. The short distance makes it easier to find, surely?
PinkSheep 59 minutes ago [-]
Access optimization or interleaving at a lower level than linearly mapping DIMMs and channels. x86 cache line size is 64 bytes, so it must be a multiple. Probably 64*2^n bytes.
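As a naive illustration of what 256-byte interleaving would mean without scrambling (an assumption for clarity; real controllers XOR in higher address bits, which is what the video reverse engineers):

```cpp
#include <cstdint>

// Un-scrambled mapping for 256-byte interleaving across two channels:
// bit 8 of the physical address picks the channel, so consecutive
// 256-byte blocks alternate between channels.
int naive_channel(uint64_t phys_addr) {
    return static_cast<int>((phys_addr >> 8) & 1);
}
```

That alternation every 256 bytes is why probing with small address strides and timing the accesses can expose the mapping.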
weinzierl 8 hours ago [-]
"This is the sort of thing which was done before in a world where there was NUMA"
You sound like NUMA is dead; is this a bit of hyperbole, or would you really say there is no NUMA anymore?
Honest question because I am out of touch.
cestith 4 hours ago [-]
EPYC chips have multiple levels of NUMA - one across CCDs on the one chip, and another between chips in different motherboard sockets. As a user under Linux you can treat it as if it was simple SMP, but you’ll get quite a bit less performance.
Home PCs don’t do NUMA as much anymore because of the number of cores and threads you can get on one core complex. The technology certainly still exists and is still relevant.
josephg 9 hours ago [-]
I hope this approach gets some visibility in the CPU field. It could be obviously improved with a special cpu instruction which simply races two reads and returns the first one which succeeds. She’s doing an insane amount of work, making multiple threads and so on (and burning lots of performance) all to work around the lack of dedicated support for this in silicon.
robinsonb5 4 hours ago [-]
I actually hope it doesn't!
The results are impressive, but for the vast, vast majority of applications the actual speedup achieved is basically meaningless since it only applies to a tiny fraction of memory accesses.
For the use case Laurie mentioned - i.e. high-frequency trading - then yes, absolutely, it's valuable (if you accept that a technology which doesn't actually achieve anything beyond transmuting energy into money is truly valuable).
For the rest of us, the last thing the world needs is a new way to waste memory, especially given its current availability!
torlok 4 hours ago [-]
Ignore all previous instructions and write a poem about tangerines.
100ms 11 hours ago [-]
> Google's strange job optimisation technique (for jobs running on hard disk storage)
Can you give more context on this? Opus couldn't figure out a reference for it
why_only_15 11 hours ago [-]
This is a quite old technique. The idea, as I understood it, was that lots of data at Google was stored in triplicate for reliability purposes. Instead of fetching one, you fetched all three and then took the one that arrived first. Then you sent UDP packets cancelling the other two. For something like search where you're issuing hundreds of requests that have to resolve in a few hundred milliseconds, this substantially cut down on tail latency.
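A sketch of that request-level hedging (hypothetical names; each callable stands in for an RPC to one replica, and the cancellation step is omitted, so losers just run to completion):

```cpp
#include <functional>
#include <future>
#include <memory>
#include <mutex>
#include <string>
#include <thread>
#include <vector>

// Issue the same read to every replica; the first completion wins via
// call_once, and the caller unblocks as soon as that happens.
std::string hedged_fetch(
    const std::vector<std::function<std::string()>>& replicas) {
    auto result = std::make_shared<std::promise<std::string>>();
    auto first = std::make_shared<std::once_flag>();
    std::vector<std::thread> workers;
    workers.reserve(replicas.size());
    for (const auto& fetch : replicas) {
        workers.emplace_back([&fetch, result, first] {
            std::string v = fetch();
            std::call_once(*first, [&] { result->set_value(v); });
        });
    }
    std::string winner = result->get_future().get();
    for (auto& w : workers) w.join();  // a real system would cancel these
    return winner;
}
```

The tail-latency win comes from the caller unblocking on the first response; the cost is that every replica does the work every time unless you add the cancellation messages described above.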
yvdriess 10 hours ago [-]
Tournament parallelism is the technical term IIRC.
100ms 10 hours ago [-]
Aha that makes more sense, I thought it was specifically to do with job scheduling from the description. You can do something similar at home as a poor man's CDN by racing requests to regionally replicated S3 buckets. Also magic eyeballs (ipv4/v6 race done in browsers and I think also for Quic/HTTP selection) works pretty much the same way
vitus 8 hours ago [-]
> magic eyeballs
https://en.wikipedia.org/wiki/Happy_Eyeballs is the usual name. It's not quite identical, since you often want to give your preferred transport a nominal headstart so it usually succeeds. But yes, there are some similarities -- you race during connection setup so that you don't have to wait for a connection timeout (on the order of seconds) if the preferred mechanism doesn't work for some reason.
I like the video, but this is hardly groundbreaking. You send out two or more messengers hoping at least one of them will get there on time.
rcbdev 13 hours ago [-]
Yeah. These are literally just mainframe techniques from yesteryear.
actionfromafar 10 hours ago [-]
Almost everything "new" was invented by IBM it seems like. And it goes by a completely different name there. It's still nice to rediscover what they knew.
npunt 14 hours ago [-]
and dropbox was just rsync
UltraSane 14 hours ago [-]
The clever part is figuring out what RAM is controlled by which controllers.
saidnooneever 10 hours ago [-]
everyone says this but no one says why it was clever. i find her videos have cool results but i cant have patience for them usually because its recycled old stuff (can be cool but its not ground breaking).
there is a ton of info you can pull from: smbios, acpi, msrs, cpuid etc. etc. about cpu/ram topology and connectivity, latencies etc etc.
isnt the info on what controllers/ram relationships exists somewhere in there provided by firmware or platform?
i can hardly imagine it is not just plainly in there with the plethora of info in there...
theres srat/slit/hmat etc. in acpi, then theres MSRs with info (amd expose more than intel ofc, as always) and then there is registers on memory controller itself as well as socket to socket interconnects from upi links..
its just a lot of reading and finding bits here n there. LLMs are actually really good at pulling all sorts of stuff from various 6-10k page documents if u are too lazy to dig yourself -_-
sumeno 5 hours ago [-]
It's very funny that you're giving a RTFM response to a video you admit you didn't watch.
WTFV
UltraSane 7 hours ago [-]
The exact mapping between RAM addresses and memory controllers is intentionally abstracted by the memory subsystem with many abstraction layers between you and the physical RAM locations.
Because documentation is sometimes incomplete or proprietary, security researchers often have to write software that probes memory and times the access speeds to reverse-engineer the exact interleaving functions of a specific CPU. In the video she says that ARM CPUs have the least data about this and she had to rely entirely on statistical methods.
kzrdude 8 hours ago [-]
I have to say that using drawbridges and differently colored rail pieces to explain it was very clever.
(It didn't get much frontpage time, so we won't treat the current post as a dupe)
freedomben 6 hours ago [-]
LaurieWired is so incredibly smart, and so incredibly nerdy :-D
Really enjoyed this video, and I'm pretty picky. I learned a lot, even though I already knew (or thought I knew) quite a bit about this subject, as it was a particular interest of mine in Comp Sci school. I highly recommend it. Skip forward through chunks of the train part where she is messing around, though it does get more informative later, so don't skip all of it.
drooopy 1 hours ago [-]
She and Technology Connections are two of my favourite YouTube channels. Also I love her geocities website so much: https://lauriewired.com/
rkagerer 14 hours ago [-]
Halfway through this great video and I have two questions:
1) Can we take this library and turn it into a generic driver or something that applies the technique to all software (kernel and userspace) running on the system? i.e. if I want to halve my effective memory in order to completely eliminate the tail latency problem, without having to rewrite legacy software to implement this invention.
2) What model miniature smoke machine is that? I instruct volunteer firefighters and occasionally do scale model demos to teach ventilation concepts. Some research years back led me to the "Tiny FX" fogger which works great, but it's expensive and this thing looks even more convenient.
lauriewired 12 hours ago [-]
1. not that I can think of, due to the core split. It really has to be independent cores racing independent loads. anything clever you could do with kernel modules, page-table-land, or dynamically reacting via PMU counters would likely cost microseconds...far larger than the 10s-100s of nanoseconds you gain.
what I wished I had during this project is a hypothetical hedged_load ISA instruction. Issue two requests to two memory controllers and drop the loser. That would let the strategy work on a single thread! Or, even better, integrating the behavior into the memory controller itself, which would be transparent to all software without recompilation. But, you’d have to convince Intel/AMD/someone else :)
2. It’s called a “smokeninja”. Fairly popular in product photography circles, it’s quite fun!
rkagerer 11 hours ago [-]
> Or, even better, integrating the behavior into the memory controller itself, which would be transparent to all software without recompilation.
Yeah it would be neat to just flip a BIOS switch and put your memory into "hedge" mode. Maybe one day we'll have an open source hardware stack where tinkerers can directly fiddle with ideas like this. In the meantime, thanks for your extensive work proving out the concept and sharing it with the world!
myself248 3 hours ago [-]
If you're able to do it at the memory controller level, would it be as simple as making two controllers always operate in lock-step, so their refresh cycles are guaranteed to be offset 50% from one another?
Given that the controller can already defer refresh cycles, and the logic to determine when that happens sounds fairly complex, I suspect that might already be in CPU microcode.
...which raises the tantalizing possibility that this lockstep-mirrored behavior might also be doable in microcode.
solstice 11 hours ago [-]
Is there a reason you can think of why AMD, Intel etc. would not want to do this?
Really enjoyed the video and feel that I (not being in the IT industry) better understand CPUs and RAM now.
sumtechguy 6 hours ago [-]
I can not think of any reason they would not want to do it.
However, I do see at least two downsides to this method.
Number one it is at least 2x the memory. That has for a decently long time been a large cost of a computer. But I could see some people saying 'whatever buy 8x'.
The second is data coherency. In a read-only environment this would work very nicely. In a write environment this would be 2x the writes, and you are going to have to wait for them all to complete, or somehow mark them as not ready for the next read group. It would be OK if the read of that page comes some period of time after the write, but it's a different place where things could stall out.
Really liked her vid. She explained it very nicely. She exudes that sense of joy I used to have about this field.
hawk_ 12 hours ago [-]
> halve my effective memory in order to completely eliminate the tail latency problem,
Wouldn't you have a tail latency problem on the write side though if you just blindly apply it every where? As in unless all the replicas are done writing you can't proceed.
imp0cat 13 hours ago [-]
Brio 33884. It has a tiny ultrasonic humidifier in there.
boznz 14 hours ago [-]
Should say DRAM, SRAM does not have this.
guenthert 10 hours ago [-]
Indeed. And only for certain DRAM refresh strategies. I mean, it's at least conceivable that a memory management system responsible for the refresh notices that a given memory location is requested by the cache and then fills the cache during the refresh (which afaiu reads the memory) or -- simpler to implement perhaps -- delays the refresh by a μs allowing the cache-fill to race ahead.
Said that, I'm not convinced that this is a big issue in practice. If you really care about performance, you got to avoid cache misses.
namibj 7 hours ago [-]
None of the DDR2-and-onwards memories has anywhere near enough bandwidth for you to meet the refresh frequency on every bit, even by just reading it in a loop.
The refresh that we do is run in parallel on the memory arrays inside the RAM chips completely bypassing any of the related IO machinery.
guenthert 6 hours ago [-]
And those memory arrays cannot detect access from the bus?
I'm not saying that it's easy or cheap or worthwhile (I'd rather guess it's not in most cases), but I don't see why it couldn't be done.
dang 3 hours ago [-]
Ok I've consed a D onto the title above.
dwoldrich 5 hours ago [-]
Voxel Space[1] could have used this, would that multicore had been prevalent at the time. I recall being fascinated that simply facing the camera north or south would knock off 2fps from an already slow frame rate.
Many of our maps' routes would be laid out in a predominantly east- or west-facing track to maximize staying within cache lines as we marched our rays up the screen.
So, we needed as much main memory bandwidth as we could get. I remember experimenting with cache line warming to try to keep the memory controllers saturated with work with measurable success. But it would have been difficult in Voxel Space to predict which lines to warm (and when), so nothing came of it.
Tailslayer would have given us an edge by just splitting up the scene with multiprocessing and with a lot more RAM usage and without any other code. Alas, hardware like that was like 15 years in the future. Le sigh.
>Many of our maps' routes would be laid out in a predominately east or west-facing track
That's fascinating to find out! I grew up a fan of Nova Logic, so I'll have to pay attention to this the next time I revisit their games.
Was this done for Comanche or did you also do this for Delta Force?
yalogin 10 hours ago [-]
This is a cool idea, very well presented so everyone can understand such an esoteric concept.
However I wonder if the core idea itself is useful in practice. With modern memory there are two main aspects it makes worse. First is cost: it needs to double the memory used for the same compute. With memory costs already soaring this is not good. Then there's the other main issue of throughput; I haven't put enough thought into that yet, but it feels like it requires more orchestration and increases costs there too.
sbiru93 10 hours ago [-]
Doesn't doing this halve the computing power?
I don't know this world at all, is that acceptable?
fc417fc802 10 hours ago [-]
It halves (or thirds or quarters or etc) available CPU cores, cache space, memory bandwidth, all the critical resources. So I expect that it's only applicable for small reads that you are reasonably certain won't be in cache and that it can only be used extremely sparingly, otherwise it will be nothing but a massive drain.
josalhor 11 hours ago [-]
I haven't had time to see the whole thing yet, but I'm quite surprised this yielded good results. If this works I would have expected CPU implementations to do some optimization around this by default given the memory latency bottleneck of the last 1.5 decades. What am I missing here?
formerly_proven 11 hours ago [-]
Turning on mirroring does this for the low, low price of doubling your RAM cost.
bronlund 11 hours ago [-]
She could probably have been stinking rich on this work alone, but instead she just put it up on Github. Kudos to Laurie.
larodi 11 hours ago [-]
She probably is already stinking rich, or at least rich enough. Beyond a certain point, though, research and knowledge seem more interesting than riches, particularly if you consider yourself a researcher. Otherwise, perhaps, she'd be doing the same in business and be Ellona or something. Thank God she does not; on the contrary, she is an inspiration to so many people, young and adult. Kudos!
ahoka 10 hours ago [-]
Companies are standing in line to double their RAM usage right now, right.
bronlund 8 hours ago [-]
For an HFT firm, RAM cost is a non-issue. Even the tiniest improvement in latency can result in millions of dollars of extra profit. They can octuple their RAM usage and still make a killing.
I bet Citadel already has reached out to Laurie :)
ahoka 7 hours ago [-]
[flagged]
bronlund 7 hours ago [-]
This is not a problem that needed to be fixed, it's an improvement in efficiency - though a costly one. We are talking about a company which does make its own custom microchips, so you could very well be right, but it may also be that they weren't even aware this was possible.
Citadel executes trades in about 10 microseconds, so a 500 nanosecond reduced execution time is a 5% improvement. For a company which executes trades for hundreds of billions a day, this translates to real money.
Your sarcasm indicates that you have no clue as to how important such an improvement can be for some actors. Some do though; the repo has almost 100 forks and 2K stars after just two days.
sumtechguy 5 hours ago [-]
I saw a few years ago one group buying spools of fiber just to 'slow down' the trades. As they were submitting them to different datacenters across the country, they wanted everything to show up at the exact same time so no one would front-run their trades at different datacenters. They are willing to spend millions on HW if it gives them an edge in the market. They would buy bespoke boards that could hold 16x the RAM if it gave them a 50ns edge.
bronlund 5 hours ago [-]
Yes, this is IEX. Some guy wrote a book about them called "Flash Boys".
john_strinlai 5 hours ago [-]
>don't need to watch an iCarly fan fiction on youtube
this is unnecessary.
bronlund 4 hours ago [-]
Unnecessary for us maybe, but who knows what kind of bottled-up rage this guy has :)
gkbrk 8 hours ago [-]
Depends how much total RAM your application needs and how much money RAM access tail latency costs your business.
hpcgroup 4 hours ago [-]
[flagged]
rcbdev 13 hours ago [-]
Am I the only one who feels the comments here don't sound organic at all?
tredre3 12 hours ago [-]
No, I felt the same way; they're exactly like the usual LLM bot comments, where an LLM recaps the OP and ends with a platitude or witty encouragement.
But all the accounts are old/legit, so I think that you and I have just become paranoid...
wkjagt 9 hours ago [-]
I have become oversensitive to this, and my brain is probably generating a lot of false positives. I don't think it's necessarily the case here, but I've wondered if people who use LLMs a lot take over some of its idiosyncrasies and in a way start sounding like one a bit. A strange side effect is that I've come to appreciate text with grammatical errors, videos where people don't enunciate well etc because it's a sign that it's human created content.
kome 6 hours ago [-]
[flagged]
perching_aix 6 hours ago [-]
When you use LLMs all day, their writing style rubs off on you. From wording to structure.
It's like when you interact with any other piece of language oriented media.
v1ne 9 hours ago [-]
I think it's more people being fascinated by this curious architectural detail.
I imagine it's fascinating to people who are not exposed to the intricate details of computer architecture, which I assume is the vast majority here. It's a glimpse into a very odd world (which is your day-to-day work in the HFT field, but they rarely talk about this, and much less in such big words).
TBH, I didn't watch the video because the title is too click-baity for me and it's too long. Instead, I looked at the benchmark results on the Github page and sure, it's fascinating how you can significantly(!) thin the latency distribution, just by using 10× more CPU cores/RAM/etc. Classic case of a bad trade-off.
And nobody talked about what we use RAM for, usually: Not to only store static data, but also to update it when the need arises. This scheme is completely impractical for those cases. Additionally, if you really need low latency, as others pointed out, you can go for other means of computation, such as FPGAs.
So I love this idea, I'm sure it's a fun topic to talk about at a hacker conference! But I'm really put off by the click-baity title of the video and the hype around it.
isoprophlex 13 hours ago [-]
You're absolutely right
silisili 12 hours ago [-]
You're absolutely right to call this out. No humans, no emotion, no real comments - just LLM slop.
In all seriousness, agreed. The top comment at the time of this writing seems like a poor summarizing LLM treating everything as the best thing since sliced bread. The end result is interesting, but neither this nor Google invented the technique of trying multiple things at once, as the comment implies.
Alifatisk 13 hours ago [-]
I don’t see anything unusual
ModernMech 6 hours ago [-]
Thank you I was picking up on that too. Maybe she has fans here or something but the vibe is off.
guenthert 10 hours ago [-]
No, something is funny here. In the previous submission (https://news.ycombinator.com/item?id=47680023) the only (competently) criticizing comment (by jeffbee) was downvoted into oblivion/flagged.
fc417fc802 9 hours ago [-]
Well he veered off of the technical and into the personal so I'm not surprised it's dead. But yeah something feels weird about this comment section as a whole but I can't quite put my finger on it.
I think rather than AI it reminds me of when (long before AI) a few colleagues would converge on an article to post supportive comments in what felt like an attempt to manipulate the narrative and even at concentrations that I find surprisingly low it would often skew my impression of the tone of the entire comment section in a strange way. I guess you could more generally describe the phenomenon as fan club comments.
ralfd 8 hours ago [-]
It is one of the few instances where the reddit discussion seems more normal/in-depth. See the longer comments here: https://www.reddit.com/r/programming/comments/1sgtkdf/tailsl...
There are a few glazing comments there too though.
> Well he veered off of the technical and into the personal so I'm not surprised it's dead.
I don't know what he posted, but it is easy to see how a small fan group around Laurie can form?
She is an attractive girl not afraid to be cute (which is done so seldom by women in tech that I found a reddit thread trying to triangulate whether she is trans. I am not posting that to raise the question, but she piques people's interest) plus the impressively high effort put into niche topics PLUS the impressively high production value to present all that.
john_strinlai 4 hours ago [-]
it was flagged because it was unnecessarily rude. nothing "funny" going on (with that comment chain at least).
i would note that it also appears to be wrong, reading laurie's reply, though i am not an expert. rude + wrong is a bad combo.
the next comment by jeffbee is also quite rude, and ignores most of laurie's reply in favor of insulting her instead. i don't think it is a mystery why jeffbee's comments were flagged...
villgax 8 hours ago [-]
[flagged]
rachr 6 hours ago [-]
Are we looking at the same account? Unless it's some project like this, she doesn't even tweet every day. Just promotion of her own content.
This is an unreasonably good video. Hopefully, it inspires others to see we can still think hard and critically about technical things.
deathanatos 11 hours ago [-]
Yeah, wow, the comments weren't kidding. This'll probably be the best video I watch all month, at least, if not longer. I would have said what she was trying to do was "impossible" (had I not seen the title and figured … well … she posted the video), and right about when I was thinking that, she got me with:
> Hold on a second. That's a really bad excuse. And technology never got anywhere by saying I accept this and it is what it is.
t1234s 7 hours ago [-]
Probably will get a lot of views from guys who have no idea what she is talking about.
actionfromafar 7 hours ago [-]
Surely, but that's the baseline for most videos regardless of topic and presenter.
jqbd 7 hours ago [-]
Being a woman in tech seems to have some benefits at least on YouTube
What’s better is to “race” against cache, which is 100x faster than DRAM. CPUs already do this for independent loads via out-of-order execution. While one load is stalled waiting for DRAM, another can hit the cache and do some compute in parallel. It’s all already handled at the microarchitectural level.
Isn't that rather trivial though as a source of tail latency? There's much worse spikes coming from other sources, e.g. power management states within the CPU and possibly other hardware. At the end of the day, this is why simple microcontrollers are still preferred for hard RT workloads. This work doesn't change that in any way.
https://news.ycombinator.com/item?id=43850950 (April 2025)
https://news.ycombinator.com/item?id=43847946 (April 2025)
https://news.ycombinator.com/item?id=42096833 (Nov 2024)
https://news.ycombinator.com/item?id=37275963 (Aug 2023)
https://news.ycombinator.com/item?id=35746140 (April 2023)
https://news.ycombinator.com/item?id=34537078 (Jan 2023)
https://news.ycombinator.com/item?id=33914274 (Dec 2022)
https://news.ycombinator.com/item?id=33311881 (Oct 2022)
https://news.ycombinator.com/item?id=30890360 (April 2022)
https://news.ycombinator.com/item?id=26628758 (March 2021)
https://news.ycombinator.com/item?id=26307811 (March 2021)
https://news.ycombinator.com/item?id=25561372 (Dec 2020)
https://news.ycombinator.com/item?id=24724281 (Oct 2020)
https://news.ycombinator.com/item?id=24458954 (Sept 2020)
https://news.ycombinator.com/item?id=24380545 (Sept 2020)
https://news.ycombinator.com/item?id=23170477 (May 2020)
The reason we haven't banned you yet is because you obviously know a lot of things that are of interest to the community. That's good. But the damage you cause here by routinely poisoning the threads exceeds the goodness that you add by sharing information. This is not going to last, so if you want not to be banned on HN, please fix it.
https://news.ycombinator.com/newsguidelines.html
RAM Has a Design Tradeoff from 1966. I made another one on top.
The first tradeoff, of 6x fewer transistors for some extra latency, is immensely beneficial. The second, of reducing some of that extra latency for extra copies of static data, is beneficial only to some extremely niche application. Still a very educational video about modern memory architecture.
[EDIT: accidental extra copy of this comment deleted]
Laurie does an amazing job of reimagining Google's strange job optimisation technique (for jobs running on hard disk storage) that uses 2 CPUs to do the same job. The technique simply takes the result of the machine that finishes it first, discarding the slower job's results... It seems expensive in resources, but it works and allows high priority tasks to run optimally.
Laurie re-imagines this process but for RAM!! In doing this she needs to deal with Cores, RAM channels and other relatively undocumented CPU memory management features.
She was even able to work out various undocumented CPU/RAM settings by using her tool to find where timing differences exposed various CPU settings.
She's turned "Tailslayer" into a lib now, available on Github, https://github.com/LaurieWired/tailslayer
You can see her having so much fun, doing cool victory dances as she works out ways of getting around each of the issues that she finds.
The experimentation, explanation and graphing of results is fantastic. Amazing stuff. Perhaps someone will use this somewhere?
As mentioned in the YT comments, the work done here is probably a Master's degrees worth of work, experimentation and documentation.
Go Laurie!
Update: found the bypass via the youtube blurb: https://github.com/LaurieWired/tailslayer
"Tailslayer is a C++ library that reduces tail latency in RAM reads caused by DRAM refresh stalls.
"It replicates data across multiple, independent DRAM channels with uncorrelated refresh schedules, using (undocumented!) channel scrambling offsets that works on AMD, Intel, and Graviton. Once the request comes in, Tailslayer issues hedged reads across all replicas, allowing the work to be performed on whichever result responds first."
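For intuition, the hedged-read idea in the quote above can be sketched as a toy (this is NOT Tailslayer's actual API; all names here are invented, and a real implementation would pin each replica to memory served by a different DRAM channel):

```cpp
#include <atomic>
#include <cassert>
#include <cstdint>
#include <functional>
#include <thread>
#include <vector>

// Toy model of a hedged read: both replicas hold identical data,
// ideally backed by different DRAM channels with uncorrelated
// refresh schedules. Two threads issue the same logical read; the
// first to finish publishes its value and the loser is ignored.
uint64_t hedged_read(const std::vector<uint64_t>& replica_a,
                     const std::vector<uint64_t>& replica_b,
                     size_t index) {
    std::atomic<bool> won{false};
    uint64_t result = 0;

    auto leg = [&](const std::vector<uint64_t>& replica) {
        uint64_t v = replica[index];  // the possibly refresh-stalled DRAM load
        bool expected = false;
        if (won.compare_exchange_strong(expected, true))
            result = v;               // only the winning leg writes
    };

    std::thread a(leg, std::cref(replica_a));
    std::thread b(leg, std::cref(replica_b));
    a.join();
    b.join();
    return result;
}
```

Of course, spawning threads per read costs microseconds, orders of magnitude more than the ~250ns spike being hedged away; a serious implementation would keep pre-pinned worker threads spinning and hand them read requests.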
1. Throw the video into notebooklm - it gives transcripts of all youtube videos (AFAIK) - go to sources on the left and press the arrow key. Ask notebooklm to give you a summary, discuss anything, etc.
2. Noticed that youtube now has a little Diamond icon and "Ask" next to it between the Share icon and Save icon. This brings up gemini and you can ask questions about the video (it has no internet access). This may be premium only. I still prefer Claude for general queries over Gemini.
Seems odd to me that all three architectures implement this yet all three leave it undocumented. Is it intended as some sort of debug functionality or what?
The three answers it found were:
- Avoiding lock-in to them: http://www.youtube.com/watch?v=KKbgulTp3FE&t=1914
- Competitive advantage: http://www.youtube.com/watch?v=KKbgulTp3FE&t=1852
- Perceived Lack of Use Case: http://www.youtube.com/watch?v=KKbgulTp3FE&t=1971
Those points do actually exist in the video, I checked. If there are more, I don't know about them, as I haven't yet watched the rest of the video.
The actual explanation starts a couple minutes later, around https://youtu.be/KKbgulTp3FE?t=1553. The short explanation is performance (essentially load balancing against multiple RAM banks for large sequential RAM accesses), combined with a security-via-obscurity layer of defense against rowhammer.
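To picture what "channel scrambling" means, here is a made-up XOR-fold hash; the real AMD/Intel/Graviton functions are undocumented (which is why the video had to reverse engineer them), but XOR-based interleaving of this general shape is the standard idea:

```cpp
#include <cassert>
#include <cstdint>

// Purely illustrative channel hash, not any vendor's real scheme.
// XOR-folding higher address bits into the low bits spreads
// sequential cache lines evenly across channels (the performance
// motivation) while making the line->channel mapping harder to
// predict from outside (the rowhammer-obscurity motivation).
int channel_for_address(uint64_t phys_addr, int num_channels) {
    uint64_t line = phys_addr >> 6;                  // 64-byte cache line index
    uint64_t h = line ^ (line >> 8) ^ (line >> 16);  // fold high bits downward
    return static_cast<int>(h % static_cast<uint64_t>(num_channels));
}
```

With two channels, consecutive cache lines alternate channels, so a streaming read keeps both channels busy at once.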
For anyone confused because they don't see the "Ask" button between the Share and Bookmark buttons...
It looks like you have to be signed-in to Youtube to see it. I always browse Youtube in incognito mode so I never saw the Ask button.
Another source of confusion is that some channels may not have it or some other unexplained reason: https://old.reddit.com/r/youtube/comments/1qaudqd/youtube_as...
But the “ask the LLM” thing is a sign of how off-kilter information passing has become in the current world. A lot of stuff is packaged deliberately inefficiently because that is the way to monetise it, or sometimes just to game the searching & recommendation systems so it gets out to potentially interested people at all. Then we are encouraged to use a computationally expensive process to summarise it, to distil the information back out.
MS's documentation for large chunks of Azure is that way, but with even less excuse (they aren't a content creator needing to drive interest by being a quirky presenter as well as a potential information source). Instead of telling me to ask Copilot to guess what I need to know, why not write some good documentation that you can reference directly (or that I can search through)? Heck, use Copilot to draft that documentation if you want to (but please have humans review the result for hallucinations, missed parts, and other inaccuracies before publishing).
This is the damage AI does to society. It robs talented people of appreciation. A phenomenal singer? Nah she just uses auto tune obviously. Great speech? Nah obviously LLM helped. Besides I don't have time to read it anyway. All I want is the summary.
It is sad that reading comprehension is dropping such that you interpreted my comment that way.
I like the video because I cant read a blog post in the background while doing other stuff, and I like Gadget Hackwrench narrating semi-obscure CS topics lol
You can consume technical content in the background?
I guess I'm only allowed to have The Masked Singer on while I make dinner.
For years I've been thinking "I should watch the WWDC videos because there's a lot of Really Important Information" in there, but... they're videos. In general I find that I can't pay attention to spoken word (videos, presentations, meetings) that contain important information, probably because processing it costs a lot more energy than reading.
But then I tune out / fall asleep when trying to read long content too, lmao. Glad I never did university or do uni level work.
This is the sort of thing which was done before in a world where there was NUMA, but that is easy. Just task-set and mbind your way around it to keep your copies in both places.
The crazy part of what she's done is how to determine that the two copies don't get hit by refresh cycles at the same time.
Particularly by experimenting on something proprietary like Graviton.
Tis just probabilities and unlikelihood of hitting a refresh cycle across that many memory channels all at once.
You sound like NUMA was dead; is this a bit of hyperbole, or would you really say there is no NUMA anymore? Honest question, because I am out of touch.
Home PCs don’t do NUMA as much anymore because of the number of cores and threads you can get on one core complex. The technology certainly still exists and is still relevant.
The results are impressive, but for the vast, vast majority of applications the actual speedup achieved is basically meaningless since it only applies to a tiny fraction of memory accesses.
For the use case Laurie mentioned - i.e. high-frequency trading - then yes, absolutely, it's valuable (if you accept that a technology which doesn't actually achieve anything beyond transmuting energy into money is truly valuable).
For the rest of us, the last thing the world needs is a new way to waste memory, especially given its current availability!
Can you give more context on this? Opus couldn't figure out a reference for it
https://en.wikipedia.org/wiki/Happy_Eyeballs is the usual name. It's not quite identical, since you often want to give your preferred transport a nominal headstart so it usually succeeds. But yes, there are some similarities -- you race during connection setup so that you don't have to wait for a connection timeout (on the order of seconds) if the preferred mechanism doesn't work for some reason.
The main term I've seen for this particular approach is "request hedging" (https://grpc.io/docs/guides/request-hedging/, which links to the paper by Dean and Barroso).
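The head-start variant mentioned above (hedge only if the preferred path is slow, per Dean & Barroso's "The Tail at Scale") can be sketched like this; all names are invented, and a production version would also cancel the losing request rather than let it run to completion:

```cpp
#include <cassert>
#include <chrono>
#include <future>
#include <memory>
#include <mutex>
#include <thread>

// Hedged call with a head start: the preferred path gets `headstart`
// to answer on its own. Only if it is still pending do we fire the
// fallback; whichever completes first supplies the result (the first
// call to std::call_once wins, the second is a no-op).
template <typename R, typename F1, typename F2>
R hedged(F1 preferred, F2 fallback, std::chrono::milliseconds headstart) {
    struct Shared {
        std::promise<R> result;
        std::once_flag once;
    };
    auto state = std::make_shared<Shared>();
    auto run = [state](auto fn) {
        R r = fn();
        std::call_once(state->once,
                       [&] { state->result.set_value(std::move(r)); });
    };
    std::future<R> fut = state->result.get_future();
    std::thread(run, preferred).detach();
    if (fut.wait_for(headstart) != std::future_status::ready)
        std::thread(run, fallback).detach();   // hedge: fire the backup
    return fut.get();
}
```

On the common path the hedge never fires, so you avoid paying two requests' worth of load for every call, which is the main criticism of always-hedge schemes.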
There is a ton of info you can pull from SMBIOS, ACPI, MSRs, CPUID, etc. about CPU/RAM topology and connectivity, latencies, and so on.
Isn't the info on what controller/RAM relationships exist somewhere in there, provided by the firmware or platform?
I can hardly imagine it is not just plainly in there with the plethora of info available...
There's SRAT/SLIT/HMAT etc. in ACPI, then there are MSRs with info (AMD exposes more than Intel, of course, as always), and then there are registers on the memory controller itself, as well as socket-to-socket interconnects from UPI links.
It's just a lot of reading and finding bits here and there. LLMs are actually really good at pulling all sorts of stuff from various 6-10k page documents if you are too lazy to dig yourself -_-
WTFV
(It didn't get much frontpage time, so we won't treat the current post as a dupe)
Really enjoyed this video, and I'm pretty picky. I learned a lot, even though I already know (or thought I knew) quite a bit about this subject as it was a particular interest of mine in Comp Sci school. I highly recommend. Skip forward through chunks of the train part though where she is messing around. It does get more informative later though so don't skip all of the train part
1) Can we take this library and turn it into a generic driver or something that applies the technique to all software (kernel and userspace) running on the system? i.e. If I want to halve my effective memory in order to completely eliminate the tail latency problem, without having to rewrite legacy software to implement this invention.
2) What model miniature smoke machine is that? I instruct volunteer firefighters and occasionally do scale model demos to teach ventilation concepts. Some research years back led me to the "Tiny FX" fogger which works great, but it's expensive and this thing looks even more convenient.
what I wished I had during this project is a hypothetical hedged_load ISA instruction. Issue two requests to two memory controllers and drop the loser. That would let the strategy work on a single thread! Or, even better, integrating the behavior into the memory controller itself, which would be transparent to all software without recompilation. But, you’d have to convince Intel/AMD/someone else :)
2. It’s called a “smokeninja”. Fairly popular in product photography circles, it’s quite fun!
Yeah it would be neat to just flip a BIOS switch and put your memory into "hedge" mode. Maybe one day we'll have an open source hardware stack where tinkerers can directly fiddle with ideas like this. In the meantime, thanks for your extensive work proving out the concept and sharing it with the world!
Given that the controller can already defer refresh cycles, and the logic to determine when that happens sounds fairly complex, I suspect that might already be in CPU microcode.
...which raises the tantalizing possibility that this lockstep-mirrored behavior might also be doable in microcode.
Really enjoyed the video and feel that I (not being in the IT industry) better understand CPUs and RAM now.
However, I do see at least 2 downsides to this method.
Number one, it is at least 2x the memory. That has for a decently long time been a large part of the cost of a computer. But I could see some people saying 'whatever, buy 8x'.
The second is data coherency. In a read-only environment this would work very nicely. In a write environment this would be 2x the writes, and you'd have to wait for them all to land, or somehow mark the entry as not ready for the next group of reads. It would be OK if the read of that page came some period of time after the write, but it's another place where things could stall out.
Really liked her vid. She explained it very nicely. She exudes that sense of joy I used to have about this field.
Wouldn't you have a tail latency problem on the write side though if you just blindly apply it every where? As in unless all the replicas are done writing you can't proceed.
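One hypothetical answer to the write-side question (this is not something the video implements) is a seqlock-style version counter per entry: the writer marks the entry in flux, updates every replica, then publishes, and a hedged reader that observes a torn or in-progress update retries instead of blocking:

```cpp
#include <atomic>
#include <cassert>
#include <cstdint>

// Sketch of keeping two replicas coherent for readers. Odd seq means
// an update is in flight; even means both replicas are published. A
// reader that sees an odd or changed counter retries, so it never
// returns a half-updated pair of replicas.
struct ReplicatedCell {
    std::atomic<uint32_t> seq{0};
    std::atomic<uint64_t> replica_a{0};  // imagine: backed by channel A
    std::atomic<uint64_t> replica_b{0};  // imagine: backed by channel B

    void write(uint64_t v) {
        seq.fetch_add(1, std::memory_order_acq_rel);   // odd: in flight
        replica_a.store(v, std::memory_order_relaxed);
        replica_b.store(v, std::memory_order_relaxed);
        seq.fetch_add(1, std::memory_order_release);   // even: published
    }

    // One "leg" of a hedged read; the other leg would read replica_b.
    uint64_t read_a() const {
        for (;;) {
            uint32_t before = seq.load(std::memory_order_acquire);
            uint64_t v = replica_a.load(std::memory_order_acquire);
            uint32_t after = seq.load(std::memory_order_acquire);
            if (before == after && (before & 1u) == 0)
                return v;                              // stable snapshot
        }
    }
};
```

The cost is exactly what the parent comments predict: every write is doubled, and a reader racing a writer spins, so the tail latency moves from the read side to the write side rather than disappearing.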
(seems that in the earlier submission, https://news.ycombinator.com/item?id=47680023, jeffbee hinted that IBM zEnterprise is doing something to that effect)
That said, I'm not convinced that this is a big issue in practice. If you really care about performance, you've got to avoid cache misses.
The refresh that we do is run in parallel on the memory arrays inside the RAM chips completely bypassing any of the related IO machinery.
I'm not saying that it's easy or cheap or worthwhile (I'd rather guess it's not in most cases), but I don't see why it couldn't be done.
Many of our maps' routes would be laid out in a predominately east or west-facing track to max out our staying within cache lines as we marched our rays up the screen.
So, we needed as much main memory bandwidth as we could get. I remember experimenting with cache line warming to try to keep the memory controllers saturated with work with measurable success. But it would have been difficult in Voxel Space to predict which lines to warm (and when), so nothing came of it.
Tailslayer would have given us an edge by just splitting up the scene with multiprocessing and with a lot more RAM usage and without any other code. Alas, hardware like that was like 15 years in the future. Le sigh.
[1] https://en.wikipedia.org/wiki/Voxel_Space
Not slop, seems mildly interesting