TPDE-LLVM: Faster LLVM -O0 Back-End (discourse.llvm.org)
derefr 2 minutes ago [-]
For a moment I misread this, and thought LLVM had introduced an optimization flag specifically for when your code will be used in backend software.

(Which... that seems like an interesting idea, now that I think of it. What, if anything, could such a flag hint the compiler to do?)

testdelacc1 6 hours ago [-]
LLVM is the code generation backend used by several language implementations, like Rust's rustc and clang, one of the many compilers for C and C++. Code generated by these compilers is considered “fast/performant” thanks to LLVM.

The problem with LLVM has always been that it takes a long time to produce code. The post in the link promises a new backend that produces a slower artifact, but does so 10-20x quicker. This is great for debug builds.

This doesn’t mean compilation as a whole gets 10-20x quicker. There are three steps in compilation:

- Front end: transforms source code into LLVM intermediate representation (IR)

- Backend: this is where LLVM comes in. It accepts LLVM IR and transforms it into machine code

- Linking: a separate program links the artifacts produced by LLVM.

How long does each step take? It really depends on the program being compiled. This blog post contains timings for one example program (https://blog.rust-lang.org/2023/11/09/parallel-rustc/) to give you an idea. It also depends on whether LLVM is asked to produce a debug build (not performant, but quicker to produce) or a release build (fully optimised, takes longer).

The 10-20x improvement described here doesn’t work yet for clang or rustc, and when it does it will only speed up the backend portion. Nevertheless, this is still an incredible win for compile times because the other two steps can be optimised independently. Great work by everyone involved.

aengelke 5 hours ago [-]
In terms of runtime performance, the TPDE-generated code is comparable with and sometimes a bit faster than LLVM -O0.

I agree that front-ends are a big performance problem and both rustc and Clang (especially in C++ mode) are quite slow. For Clang with LLVM -O0, 50-80% is front-end time, with TPDE it's >98%. More work on front-end performance is definitely needed; maybe some things can be learned from Carbon. With mold or lld, I don't think linking is that much of a problem.

We now support most LLVM-IR constructs that are frequently generated by rustc (most notably, vectors). I just didn't get around to actually integrating it into rustc and getting performance data.

> The 10-20x improvement described here doesn’t work yet for clang

Not sure what you mean here, TPDE can compile C/C++ programs with Clang-generated LLVM-IR (95% of llvm-test-suite SingleSource/MultiSource, large parts of the LLVM monorepo).

tialaramex 6 hours ago [-]
IMO the worst problem with LLVM isn't that it's slow. The worst problem is that its IR has poorly defined semantics, or that its team doesn't actually deliver those semantics and a bug ticket saying "Hey, what gives?" goes in the pile of never-never tickets, making it less useful as a compiler backend even if it were instant.

This is the old "correctness versus performance" problem, and we already know that "faster but wrong" isn't meaningfully faster, it's just wrong; anybody can give a wrong answer immediately, so that's not at all useful.

randomNumber7 5 hours ago [-]
What is the alternative for a new language, though? Transpiling to C, or hacking something together using the GCC backend?
tialaramex 5 hours ago [-]
The easy alternative? There isn't one.

The really difficult thing would be to write a new compiler backend with a coherent IR that everybody understands and that you'll stick to. Unfortunately, you can be quite certain that after you've done the incredibly hard work to build such a thing, a lot of people's assessment of your backend will be:

1. The code produced was 10% slower than LLVM, never use this, speed is all that matters anyway and correctness is irrelevant.

2. This doesn't support the Fongulab Splox ZV406 processor made for six years in the 1980s, whereas LLVM does, therefore this is a waste of time.

mamcx 1 hour ago [-]
Ok, but then if this were done, you could also emit LLVM afterwards. It would probably get worse timings, but it would make the transition palatable.
IshKebab 5 hours ago [-]
Or native code generation. Depends on what your performance goals are. It would be cool if there was a standard IR that languages could target - something more suitable than C.
pjmlp 5 hours ago [-]
Being pursued since UNCOL in 1958, each attempt eventually only works out for a specific set of languages, due to politics or market forces.
ahartmetz 5 hours ago [-]
Hm. SPIR-V is a standard IR, but AFAIU not really the kind of IR that you need for communication inside a compiler. It wasn't designed for that.
rafaelmn 2 hours ago [-]
WASM ?
nerpderp82 54 minutes ago [-]
WASM !
pjmlp 5 hours ago [-]
Produce machine code of dumb quality, enough to bootstrap it, and go from there.

Move away from classical UNIX compiler pipelines.

However, in current times I would rather invest in improving LLMs at generating executables directly; the time to mix AI into compiler development has come, and classical programming languages are, in terms of value, just like doing yet another UNIX clone.

taminka 3 hours ago [-]
mm, a non deterministic compiler with no way to verify correctness, what could go wrong lol
pjmlp 34 minutes ago [-]
Ask C and C++ developers, they are used to it, and still plenty of critical software keeps being written with them.
ObscureScience 6 hours ago [-]
[flagged]
testdelacc1 5 hours ago [-]
No it wasn’t. There’s no way I can prove that it wasn’t.

But I can prove that this comment wasn’t LLM generated -> fuck you.

(LLMs don’t swear)

tialaramex 5 hours ago [-]
Maybe HN should add "Don't accuse comments of being LLM generated" to the guidelines, because this sure seems like it'll be in the same category as people moaning that they were downvoted, or, more closely, people saying "Have you read the link?"
tomhow 4 hours ago [-]
We've talked about this but we're not adding it to the guidelines. It's already covered indirectly by the established guidelines, and "case law" (in the form of moderator replies) makes it explicit.
testdelacc1 5 hours ago [-]
I feel like a fuck you to the accuser is sufficient. It proves that you’re not an LLM and is a reasonable response to an unfounded accusation.

LLMs decline when asked to say fuck you. Gemini: “I am unable to respond to that request.” Claude: “I’d rather not use profanity unprompted.”

But allowing a fuck you would need a modification to the rules anyway, I suppose.

tomhow 4 hours ago [-]
Please don't do this here. If a comment seems unfit for HN, please flag it and email us at hn@ycombinator.com so we can have a look.
procrast33 2 hours ago [-]
I am curious why the TPDE paper does not mention the Copy-And-Patch paper. That is a technique that uses LLVM to generate a library of patchable machine code snippets, and during actual compilation those snippets are simply pasted together. In fairness, it is just a proof of concept: they could compile WASM to x64 but not C or C++.

I have no relation to the authors.

https://fredrikbk.com/publications/copy-and-patch.pdf
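For context, the core trick can be sketched as a toy model: pre-compiled machine-code templates ("stencils") with holes for operands are copied, and the holes are patched at compile time. This is a hedged illustration only; the stencil bytes and hole layout here are hand-written for x86-64, not taken from the paper's generated stencil library:

```python
import struct

# Toy stencil: "mov eax, imm32; ret" with a 4-byte hole for the immediate.
# 0xb8 is the x86-64 opcode for "mov eax, imm32"; 0xc3 is "ret".
STENCIL = b"\xb8" + b"\x00\x00\x00\x00" + b"\xc3"
HOLE_OFFSET = 1  # the immediate starts right after the opcode byte

def copy_and_patch(stencil: bytes, offset: int, value: int) -> bytes:
    """Copy the stencil and patch its 32-bit hole with a concrete operand."""
    buf = bytearray(stencil)
    buf[offset:offset + 4] = struct.pack("<i", value)  # little-endian imm32
    return bytes(buf)

# "Compile" a function that returns 42 by patching the constant in.
code = copy_and_patch(STENCIL, HOLE_OFFSET, 42)
```

A real implementation concatenates many such stencils and fixes up the control flow between them; values passed across stencil boundaries are pinned to fixed registers, which is why cross-snippet register allocation is so constrained.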

aengelke 2 hours ago [-]
There's a longer paragraph on that topic in Section 8. We also previously built an LLVM back-end using that approach [1]. While that approach leads to even faster compilation, run-time performance is much worse (2.5x slower than LLVM -O0) due to more-or-less impossible register allocation for the snippets.

[1]: https://home.cit.tum.de/~engelke/pubs/2403-cc.pdf

debugnik 1 hours ago [-]
> run-time performance is much worse (2.5x slower than LLVM -O0)

How come? The Copy-and-Patch Compilation paper reports:

> The generated code runs [...] 14% faster than LLVM -O0.

I don't have time right now to compare your approach and benchmark to theirs, but I would have expected comparable performance from what I had read back then.

procrast33 46 minutes ago [-]
Apologies! I did do a text search, but in pdfs... I should have known better.

Your work is greatly appreciated. With unit tests everywhere, faster compiling is more important than ever.

PoignardAzur 2 days ago [-]
It feels sometimes that compiler backend devs haven't quite registered the implications of the TPDE paper.

As far as I can tell it gets Pareto improvements far above LLVM, Cranelift, and any WebAssembly backend out there. You'd expect there to be a rush to either adopt their techniques or at least find arguments for why they wouldn't work for generalist use cases, but instead it feels like the maintainers of the above projects have absolutely no curiosity about it.

pjmlp 5 hours ago [-]
Most contributions are driven by university students' papers that somehow managed to get merged into LLVM; there is hardly a product manager driving its roadmap, thus naturally not everything gets the same attention.
almostgotcaught 10 minutes ago [-]
I see you comment here all the time about LLVM and I'm pretty sure you have no idea how LLVM works (technically or organizationally).

Yes there's no roadmap but it's flat out wrong that most contributions are from university students because hardly any contributions are from university students. It's actually an issue because we should be more supportive of student contributors! You can literally look at the top contributors on GitHub and see that probably the top 100 are professionals contributing as part of their dayjob:

https://github.com/llvm/llvm-project/graphs/contributors

pertymcpert 7 hours ago [-]
There's no free lunch. I don't know where you're getting this "Pareto improvements" thing from, because it's a much more constrained codegen framework than LLVM's backend. It supports a much smaller subset, and real-world code built by LLVM will use a lot of features like vectors even at -O0, for things like intrinsics. There's also a maintenance cost to having a completely separate path for -O0.
gbbcf 8 hours ago [-]
Care to enlighten us? Maybe it's just you overestimating the implications
webdevver 2 days ago [-]
what's "the TPDE paper"?
woadwarrior01 3 hours ago [-]
I hope this trickles down to the Swift compiler, sometime soon.
baq 6 hours ago [-]
great news! but linking...
ahartmetz 5 hours ago [-]
There are lld and mold for that.
Agingcoder 7 hours ago [-]
Turbo Pascal 3 is back!
twothreeone 9 hours ago [-]
That rare Comex comment.
leoh 2 days ago [-]
I'm glad they mentioned this. When I use Gentoo, though, I always compile with -O8 for extreme performance.
wmf 2 days ago [-]
I thought everybody moved to Arch. I hear they patched the compiler to go to -O11.
jeffreygoesto 2 days ago [-]
Beware, this only works outside of Scotland!
leoh 2 days ago [-]
Smart!!
syrusakbary 2 days ago [-]
This is awesome. I didn't know this was possible; it may replace the need to use Cranelift for fast builds of Wasm bytecode into assembly in Wasmer.
SkiFire13 2 days ago [-]
> We support a typical subset of LLVM-IR

I wonder what such a "typical" subset is. How exotic should something be to not work with it?

aengelke 2 days ago [-]
The documentation has a list of currently unsupported features: https://docs.tpde.org/tpde-llvm-main.html
pella 10 hours ago [-]
old discussion: https://news.ycombinator.com/item?id=45084111 ( 2 days ago )
aw1621107 9 hours ago [-]
Interestingly, mpweiher actually submitted first. If you hover over the timestamp for this post you'll see 2025-08-30T06:55:30, while the "old" discussion has a timestamp of 2025-08-31T15:50:35. This particular post got put into the second chance pool [0] and I'm guessing HN's backend decided that now is the time to put it on the front page.

[0]: news.ycombinator.com/pool

tomhow 5 hours ago [-]
Very well spotted. That happens sometimes. Given that this submission is the original for this article, I've moved the comments from https://news.ycombinator.com/item?id=45084111 to here, and because there wasn't a huge discussion previously, we'll let this stay on the front page for a while longer.