There's a tradeoff between dense models and MoEs: memory usage vs. compute for the same quality.
For example, Qwen3.5 27B and Qwen3.5 122B A10B have similar average performance across benchmarks. The 122B is much faster to run than the 27B (generates more tokens at the same compute). The 27B, on the other hand, uses ~4x less VRAM at low context lengths (less difference at high context lengths).
Right now, different hardware seems to be suited to different points in the dense vs. MoE balance. On one extreme is hardware like the DGX Spark and Strix Halo, which have a lot of memory relative to their compute performance and memory bandwidth, and are best suited for MoE workflows. On the other extreme you have cards like the RTX 5090, which have very high performance for the price but rather little memory, and are best suited for dense models.
The Arc Pro B70 seems to be the awkward middle. With 1-2 of these, you can run a ~30B dense model slowly, probably not fast enough to be useful interactively (you'd probably need a 5090 or 2x 3090 for that). Or, you can run a MoE model at high throughput, but probably not enough quality to support agentic workflows that actually use your throughput.
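To make that concrete, here's a back-of-the-envelope sketch (assumed: 8-bit weights, decode cost proportional to active parameters; it ignores the KV cache, which is why the gap shrinks at long context):

  # Rough sketch: dense 27B vs. MoE 122B-A10B, 8-bit weights assumed.
  dense_params = 27e9                    # every parameter active per token
  moe_total, moe_active = 122e9, 10e9    # only ~10B active per token
  GB = 1e9

  vram_dense = dense_params / GB         # ~27 GB of weights
  vram_moe = moe_total / GB              # ~122 GB, ~4.5x more
  speedup = dense_params / moe_active    # ~2.7x more tokens at equal compute

  print(f"weights: dense {vram_dense:.0f} GB vs. MoE {vram_moe:.0f} GB")
  print(f"relative decode speed (MoE vs. dense): ~{speedup:.1f}x")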
storus 12 hours ago [-]
DGX Spark is at the compute level of 5070. Its main issue is low memory bandwidth, i.e. it has quite fast token prefill but awful token generation. Strix Halo is just slow on every metric and used to be a cheap way to get 128GB unified RAM (now its prices are comparable to DGX Spark).
Readerium 6 hours ago [-]
LLMs are memory-bandwidth bound, not compute bound.
ondra 5 hours ago [-]
This is incorrect, prompt processing is compute bound.
AntiUSAbah 3 hours ago [-]
LLMs are bound by both; which factor dominates depends on the hardware.
icelancer 4 hours ago [-]
This is only true for some parts of the time cost function.
BoredPositron 12 hours ago [-]
I am working mostly with image models, so we do a lot of fine-tunes, and the card fits perfectly here. Performance isn't great, but it can just chug along in the background.
varispeed 11 hours ago [-]
I still don't see the point of running these models. They produce plausible garbage, nowhere near the quality of frontier models (even when those work).
Why can't Intel look beyond this nonsense state of affairs and build something with 1TB of RAM or more?
What I am trying to say is that I have yet to see anything competitive in the market. Cards have very much stalled in the sub-100GB region, and the best these corporations can do is throw out something that runs toy models and forget about it after a week.
AlotOfReading 7 hours ago [-]
What's wrong with Grace Hopper if you want to throw buckets of local memory at a problem?
MrDrMcCoy 3 hours ago [-]
Some people, including myself, loathe Nvidia with the fiery burning passion of a thousand suns, and will put up with whatever nonsense is necessary to run without them.
varispeed 2 hours ago [-]
Most consumer platforms only allow up to 128/256GB of RAM. If you want more, you likely need a data centre platform. This is again a mismatch between what companies think consumers need and the reality.
I think AMD, for example, missed the boat with the 9950x3d2 by limiting its memory controller. If it were possible to hook it up with 1TB of consumer DDR5 RAM, that would be something to write home about.
speedgoose 14 hours ago [-]
Time to first token is a very important performance metric, as I figured out using a Mac Studio M3 Ultra (that is quite slow on this aspect).
But 32GB for a TDP of 230W is perhaps not super interesting, especially because you probably want more than one card. That's a lot of heat. You could use the cards for heating up a building, but heat pumps exist.
bigyabai 14 hours ago [-]
A lot of the TDP is reserved for running the shader units at full power. My RTX 3070 Ti only pulls ~110W of its 320W running CUDA inference on Gemma 26b and E4B.
Scaevolus 14 hours ago [-]
It's not that it's reserving power, but rather that you hit some bottleneck on a 3070 Ti before running into thermal limits-- it's likely limited by either tensor core saturation or RAM throughput. Running the workload with Nvidia's profiling tools should make the bottleneck obvious.
lambda 13 hours ago [-]
Generally the bottleneck is RAM throughput. Inference, in particular token generation, especially on a single-user instance, is not all that computationally complex; you're doing some fairly simple calculations for each parameter, so the time is dominated by just transferring each parameter from RAM to the cores. A 31B dense model like Gemma 4 has to transfer 31B parameters (at 16 bits per parameter for the full model, though on consumer hardware people generally run 4-8 bit quantizations) from RAM to the cores; that's a lot of memory transfer.
Prompt processing or parallel token generation can do a bit more work per memory transfer, as you can use the same weights for a few different calculations in parallel. But even still, memory bandwidth is a huge factor.
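As a napkin-math sketch of that bandwidth bound (the bandwidth figures are assumptions: the 5090's published spec, and roughly a third of that for the B70, per the "3x the bandwidth" comparison elsewhere in this thread):

  def max_decode_tps(bandwidth_gb_s, params_billions, bytes_per_param):
      # Every active parameter crosses the memory bus once per generated
      # token, so bandwidth / model-bytes is a ceiling on tokens/sec.
      bytes_per_token = params_billions * 1e9 * bytes_per_param
      return bandwidth_gb_s * 1e9 / bytes_per_token

  # A 31B dense model at 4-bit (0.5 bytes/param), single-stream decode.
  for name, bw in [("5090-class (~1792 GB/s)", 1792), ("B70-class (~600 GB/s, assumed)", 600)]:
      print(f"{name}: ~{max_decode_tps(bw, 31, 0.5):.0f} tok/s ceiling")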
ycui7 4 hours ago [-]
B70 idles at 30W, while RTX PRO 4500 idles at 9W (measured to be 5W at wall).
The B70 runs at 1/3 the token output rate of the RTX PRO 4500 and consumes 3x the idle power when doing nothing.
culopatin 6 hours ago [-]
My 4070 Super and 5070 Super both max out their TDP when I use them with Ollama; is your usage different?
gambiting 12 hours ago [-]
My 5090 runs at full TDP (pretty much exactly 575W) when running inference through LM Studio.
rao-v 9 hours ago [-]
Cap the power to 400W; you won't see much impact.
gardnr 8 hours ago [-]
Same throughput with much less heat. Not sure what that extra 175W is going towards, but it's diminishing returns.
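For anyone wanting to try this, on NVIDIA cards the cap is a one-liner (needs root; the limit resets when the driver reloads):

  sudo nvidia-smi -pl 400    # set the board power limit to 400W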
dwoldrich 9 hours ago [-]
Hi Intel, I'm itching to buy an Xe3P! Or, Nova Lake? Crescent Island? Celestial? Jaguar Shores?
Whatever the hell you name it doesn't matter to me, I just want a workstation with one of them bad boys attached to 160GB of RAM for legit inference power!
I've been saving my money not paying for Claude Code so I can run my own agentic coding setup at home on yours. Please don't charge too much for the workstation class card if you can at all manage it. Maybe give us a discount to preorder? Please don't price a regular consumer like me out of the market!
Also, I am speculating integer based models will become hot due to lower memory and power requirements. Will the Xe3P be able to do integer-based math inference to use all that RAM to even greater effect?
als0 4 hours ago [-]
> Please don't charge too much for it
Intel wouldn’t decide to do this even to save their own life
greybcg 1 hours ago [-]
Something that is also cool with these cards is proper SR-IOV without hassle. Arc Pro cards make for nice graphics-acceleration devices for VMs. I know AI gets all the hype, but I also appreciate being able to accelerate multiple workstations with a single GPU and still get decent frametimes.
ycui7 8 hours ago [-]
The Intel Arc B70, when released, could only produce 1/3 the tokens of the RTX PRO 4500. Well, it also costs 1/3 of the RTX PRO 4500.
It lacked software support for its primary target application: running LLMs. The officially supported vLLM fork is 6 versions behind mainline. It did not run the latest hot new open models on Hugging Face. Running two B70s in parallel reduces the token rate rather than improving it. So the software behind the B70 is just very far behind.
adrian_b 3 hours ago [-]
What you say is not consistent with TFA.
The parent article shows that B70 is faster than RTX 4000.
RTX 4500 is faster than RTX 4000, but it cannot be more than 3 times faster, not even more than 2 times faster.
The parent article is consistent with RTX 4500 being faster than B70 for ML inference, but by a much smaller ratio, e.g. less than 50% faster.
If you know otherwise, please point to the source.
If you have run a benchmark yourself, please describe the exact conditions.
In the benchmarks shown at Phoronix for llama.cpp, the relative performance was extremely variable for different LLMs, i.e. for some LLMs a B70 was faster than RTX 4000, but for others it was significantly slower.
Your 3x performance ratio may be true for a particular LLM with a certain quantization, but false for other LLMs or other quantizations.
This performance variability may be caused by immature software for the B70. For instance, instead of using matrix operations (XMX engines), non-optimized software might use traditional vector operations, which are slower.
It is also possible that for optimum performance with a certain LLM one may need to choose a different quantization for B70 than for NVIDIA, because for sub-16-bit number formats Intel supports only integer numbers.
muyuu 7 hours ago [-]
There are nonlinearities to exploit in that calculus. Given enough VRAM to host the larger model you're targeting, the size alone can push you past the usability threshold at a much better price.
ycui7 4 hours ago [-]
When you get 4 of these, the idle power alone is 120W. That is a lot of electricity if left on 24/7.
At that power consumption, you also end up being more expensive than API calls and many times slower. It starts to feel very stupid to run local inference.
If the client is very keen on privacy, then they can pay for the NVIDIA.
I ended up returning my B70s and bought an RTX PRO 6000.
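The idle-power math, for scale (a sketch; ~$0.30/kWh is an assumed California-ish residential rate):

  idle_watts = 4 * 30                          # four B70s at ~30W idle each
  kwh_per_year = idle_watts * 24 * 365 / 1000  # ~1051 kWh
  rate_usd_per_kwh = 0.30                      # assumed rate
  print(f"~${kwh_per_year * rate_usd_per_kwh:.0f}/year just idling")  # ~$315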
ycui7 4 hours ago [-]
Problem is, the more B70s you have, the slower the inference gets (due to terrible software atm). A single B70 is barely faster than CPU inference. If you have 4 B70s, you might as well run inference on the CPU and be faster with cheaper DDR5 instead of GDDR6.
adrian_b 3 hours ago [-]
For what you say to be useful, please specify what software you have used with the B70, including its version.
Hardware-wise, a B70 should be significantly faster than any of the available CPUs at ML inference. If it was not so in your tests, that must really be a software problem, so you should identify the software, for others to know what does not work.
Just ran llama-bench at home with the similarly priced AMD AI PRO R9700 32G. The Phoronix numbers look extremely low? Probably I misunderstand their test bench. Anyway, here are some numbers. Maybe someone with access to a B70 can post a comparison.
Tried to use the same model as the article:

  llama-bench -m gpt-oss-20b-Q8_0.gguf -ngl 999 -p 2048 -n 128
  AMD R9700 pp2048=3867 tg128=175

And a bigger model, because testing a tiny model with a 32GB card feels like a waste:

  llama-bench -m Qwen3.6-27B-UD-Q6_K_XL.gguf -ngl 999 -p 2048 -n 128
  AMD R9700 pp2048=917 tg128=22

Which might not sound like much, but 2 months in LLM time is a long time, especially regarding support for new hardware like the R9700.
Hopefully, support for the B70 will continue to improve. In retrospect, I probably should have bought an R9700 instead...
jaimie 6 hours ago [-]
"I've no idea why one would use gpt-oss-20b at Q8" - would you mind expanding on this comment?
In that particular model family, the choices are 20B and 120B, so 20B higher quant fits in VRAM, while you'd be settling for 120B at a lower quant. Is it that 20B MXFP4 is comparable in performance so no need for Q8?
Or is the insight simply that there are better models available now and the emphasis is on gpt-oss-20b, not Q8?
Mindless2112 5 hours ago [-]
The parameters in the original gpt-oss-20B model are "post-trained with MXFP4 quantization", so there just isn't much to gain by quantizing to Q8. If you look inside the Q8 model, most of the parameters are MXFP4 anyway.
Though, looking inside my "gpt-oss 20B MXFP4 MoE" model, it looks to also be quantized the same way as the Q8, so that was probably an overstatement on my part.
Still, the Q8 is 12.1 GB and the FP16 is 13.8 GB. Not the ~1:2 ratio you might expect.
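If anyone wants to check a GGUF themselves, the per-tensor quantization types are easy to dump with the gguf Python package (a sketch; the filename is a placeholder for whatever local model you have):

  from collections import Counter
  from gguf import GGUFReader  # pip install gguf

  reader = GGUFReader("gpt-oss-20b-Q8_0.gguf")  # placeholder path
  counts = Counter(t.tensor_type.name for t in reader.tensors)
  for qtype, n in counts.most_common():
      print(f"{qtype}: {n} tensors")  # expect mostly MXFP4 despite the Q8 label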
ycui7 4 hours ago [-]
At this speed, people end up paying more on electricity than api calls. (California electricity)
magicalhippo 12 hours ago [-]
For reference in case it's interesting to someone, a 5090 on Windows 11 with CUDA 13.1
5090 gets maybe 100TPS with MTP.
I was looking into this for LLMs but it's clearly a graphics-processing focused card. The memory bandwidth is too low for that much RAM to be useful in an LLM context. The 5090 I have has the same amount of RAM but far more bandwidth and that makes it much more useful.
Mindless2112 13 hours ago [-]
Compared to a B70, a 5090 is 1x the memory with 3x the bandwidth at 4x the price. Yeah, the 5090 is better, but you're paying for it.
arjie 9 hours ago [-]
On the actual market it's $1100 vs $3200 now, right? I actually got mine at $2200 at cost in the before days.
Mindless2112 9 hours ago [-]
Current lowest price for a new card on Newegg: $949.99 vs $3,699.99.
arjie 9 hours ago [-]
Wow, 5090 prices have exploded. Thanks for looking. I should have known my hardware price intuition is broken.
girvo 13 hours ago [-]
Oh wow, I really would've expected higher memory bandwidth. That's only ~2-3x the little DGX Spark-alike I have to play with. Would've expected more.
askl 1 hours ago [-]
> it's clearly a graphics-processing focused card.
Yes, that's what the G in GPU stands for. It's great to see that there are still manufacturers that understand this.
cmxch 13 hours ago [-]
It's 32GB for people who can't go for scalped 5090s but have a 3090 budget.
I have a pair of them with a 9480 and the only thing I have to do is keep the cache happy.
fluoridation 13 hours ago [-]
Eh. Trading CUDA for 8 more gigs seems like bad deal, unless you know absolutely for certain what you want to run will run on it.
cmxch 12 hours ago [-]
Until NVidia prices get better, I’ll build out with the Intel stack and keep the cache (and prompt processing speeds) happy.
As for software, anything that has a SYCL or Vulkan backend, and/or can be Intel optimized (especially to the same degree as llama.cpp) can run well.
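For reference, both backends are plain llama.cpp build flags (per llama.cpp's docs; the SYCL path assumes Intel's oneAPI icx/icpx compilers are installed and on PATH):

  # Vulkan backend
  cmake -B build -DGGML_VULKAN=ON && cmake --build build

  # SYCL backend (oneAPI compilers assumed)
  cmake -B build -DGGML_SYCL=ON -DCMAKE_C_COMPILER=icx -DCMAKE_CXX_COMPILER=icpx
  cmake --build build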
arjie 9 hours ago [-]
[dead]
kinow 13 hours ago [-]
For those that use Blender, in their section about Blender:
> We hope that, in the future, there will be real options other than NVIDIA for GPU-based rendering, as it is an area where competition is nearly non-existent.
And checking opendata.blender.org, an NVIDIA GeForce RTX 4080 Laptop GPU scores 5301.8, while the Intel Arc Pro B70 is still at 3824.64.
So there is still a bit more to go before Intel GPUs perform close to NVIDIA's.
embedding-shape 13 hours ago [-]
Also the first section I jumped to :) To Intel's credit, it seems they're slowly improving; the section starts with:
> Over the last year or two, Intel has worked to deliver serious optimizations for and compatibility with Blender GPU rendering on its Arc GPUs. Although NVIDIA has long held an advantage in the application, our last time looking at Intel’s cards indicated ongoing improvements. This round of testing is no different. We found that the Arc Pro B70 provided more than twice the performance of the B50, also beating the R9700 by 9%.
keyle 10 hours ago [-]
This is because Blender is in fact using CUDA?
wmf 8 hours ago [-]
Blender supports CUDA, HIP, OneAPI, and Metal. So Intel GPUs are performing poorly using their native API.
Joel_Mckay 9 hours ago [-]
The key feature on Intel platforms is the hardware de-noise acceleration (NVIDIA OptiX also works well). Note, AMD OpenCL works quite well for some renderings, but Blender Flamenco likes consistent cluster hardware.
For 8K HDR10 media or 3+ screens, the RTX 5090 32G model is going to be the minimum card people should buy. Just because you see 4 DP ports doesn't mean the card can push the bit-rates needed to fill an HDR10 display at >60Hz.
The Mac Studio Pro with unified >512GB RAM/VRAM is a better LLM lab solution (Apple recently nerfed it to 256GB). Who cares if a task completes a bit slower; it doesn't matter given the lower error rates... and it doesn't cost $14k like an RTX 6000. =3
Great tutorial on getting Blender to behave on mid-grade PCs and laptops etc.:
https://www.youtube.com/watch?v=a0GW8Na5CIE
Is Intel still making GPUs? I have heard so many conflicting things about whether they will or won't stay in the market.
girvo 13 hours ago [-]
They appear to be backing out (for a little while) of consumer cards, but datacentre/workstation/laptop GPUs are still their focus.
numpad0 14 hours ago [-]
Intel has always had a habit of starting an internal conflict whenever a potential alternative revenue source starts to threaten its internal dependence on x86.
lambda 13 hours ago [-]
What do you mean, are they still making GPUs? This is a discrete GPU that has just recently been released, and it's one of the most popular GPUs in its class at the moment, due to 32 GiB of RAM for under $1000, which makes it great for LLM inference.
KronisLV 4 hours ago [-]
> What do you mean, are they still making GPUs?
There was recent talk of them pulling back from the consumer segment, though obviously the leaks have also predicted Battlemage not being a thing so go figure: https://youtu.be/NYd2meJumyE?t=638 (timestamped)
That said, them not releasing a B770 in the consumer segment also sucks, since there are games and use cases that the B580 comes in a bit short for.
giancarlostoro 12 hours ago [-]
Honestly, I don't even care if it's slower than just getting a 5090; just being able to run models my 3080 cannot handle would be a welcome change.
throwaway85825 12 hours ago [-]
The B70 would have been the B770 but it was canceled. Celestial has been canceled too.
jonhohle 12 hours ago [-]
I thought I had read that too and went to look for clarification and found that they’re just moving to a single architecture for their cards. Seemed reasonable.
2OEH8eoCRo0 14 hours ago [-]
I don't know what to believe when it comes to Intel news because they have so many haters.
dismalaf 14 hours ago [-]
They'll always have iGPUs so whether or not they stay in the dGPU market depends mostly on whether or not people buy them. So they might not, whole market seems to be moving to SoCs/APUs/whatever you want to call them.
chao- 13 hours ago [-]
Not only will they always have iGPUs, but also cannot give up on advancing their datacenter AI GPUs (the next being Jaguar Shores). They need both of those far more than consumer or prosumer dGPUs, but that means they are committed to Big GPU work and Small GPU work.
Since they will have both of those big and small "bookends" of GPU architectures, it is a question of whether they see benefits in maintaining an accessible foothold in the midmarket ecosystem. I could make an argument for both sides of that, but obviously the decision is not up to me.
throwaway85825 12 hours ago [-]
They're working with nvidia to use their GPU tiles in mobile products.
tempest_ 14 hours ago [-]
I would like one for the VRAM, but I am sure they will be unobtainable after the initial stock sells out, as I assume they were produced before the RAM prices went up.
userbinator 11 hours ago [-]
It should be possible to use the VRAM as extra swap space, when you're not using it for AI or gaming or anything else. 32GB is already more than a lot of computers have as just regular RAM, even sufficient to hold an OS installation:
https://www.tomshardware.com/news/lightweight-windows-11-run...
It is weird that the reviewer does not mention the RTX PRO 6000 96GB, but mentioned the RTX PRO 5000 72GB. The 72GB RTX PRO 5000 is a special order, and far fewer people are aware of it. The RTX PRO 6000 is known by almost everyone in the LLM world.
I cannot understand why a tech reviewer would do that.
jbellis 12 hours ago [-]
How should I update my simplistic understanding that decode is bw-bound with these results that show the B70 decoding faster than a 4090 (about 50% more bw)?
rao-v 12 hours ago [-]
I doubt you'd get the same sort of result on a modern-ish MoE or dense model via a more standard inference engine like llama.cpp or vLLM. I don't think MLPerf is a reasonable benchmark at this point.
Edit: Here is a simple llama.cpp compare where the token gen results match the rule of thumb.
https://www.reddit.com/r/LocalLLaMA/comments/1st6lp6/nvidia_...
Or do the makers intentionally nerf them, in order to better segment the markets/product lines?
ZiiS 14 hours ago [-]
The drivers often need per-game optimisations, which these will be missing, but I doubt Intel would nerf them; they just rely on you not paying a lot for RAM the game won't use.
XCSme 14 hours ago [-]
I actually meant it in a different way. I would get it for local AI stuff, but being able to game on it would be a huge plus, otherwise I would need two different machines.
ZiiS 3 hours ago [-]
Much as I want diversity, a 3090 would be a billion times better for games and can probably hold its own for a broader AI workload: anything other than running highly quantised models that don't fit in 24GB with relatively small contexts.
XCSme 52 minutes ago [-]
A 3090 is what I have now.
But I hope to somehow have 48GB or 64GB of VRAM in a GPU that's also gaming-ready.
I was looking at maybe getting a Mac Studio for this reason, but I don't think a Mac is really good for gaming.
MrDrMcCoy 3 hours ago [-]
It'll work just fine for gaming. It's what the B770 would have been if it had 32GB RAM and ever got released.
wmf 14 hours ago [-]
They nerf gaming cards to make money on the pro cards. Since this is a pro card it's not nerfed.
wg0 2 hours ago [-]
Can we not have a PCIe card that's an ASIC (not a GPU) with even DDR4 or DDR5 memory (let's say 128GB) onboard, such that you could shove four of them into a consumer-grade motherboard and use them in parallel?
Noob question.
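The usual back-of-the-envelope objection is bandwidth; a sketch with assumed figures (~45 GB/s per DDR5-5600 channel, two channels per hypothetical card):

  ddr5_channel_gb_s = 45      # one DDR5-5600 channel, roughly
  channels_per_card = 2       # assumed for a cheap board
  cards = 4
  total_bw = ddr5_channel_gb_s * channels_per_card * cards   # ~360 GB/s aggregate
  model_gb = 120              # e.g. ~120 GB of weights sharded across the cards
  print(f"~{total_bw / model_gb:.0f} tok/s upper bound")     # ~3 tok/s

The GDDR6/HBM is most of what you're paying for on a real GPU; plain DDR per card just doesn't move the weights fast enough.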
numpad0 13 hours ago [-]
$950 for 23TF fp32? Has GPU performance grown at all in the past 5-10 years?
wmf 13 hours ago [-]
Are you comparing against gaming or workstation cards?
numpad0 9 hours ago [-]
The 1080 Ti had >10TF in 2017, or the Titan Xp for that matter. That's ~10 years ago.
Readerium 6 hours ago [-]
AI workloads are all about memory size and bandwidth, not compute.
100ms 15 hours ago [-]
These seem amazing for hobbyists, but that TDP, given the perf, might be an issue when deploying a lot of them.
zrm 14 hours ago [-]
Its performance is pretty unbalanced. If you're using it for the couple of things that it's good at, the TDP is competitive.
unethical_ban 13 hours ago [-]
It looks like, if one can afford it, the R9700 is worth the extra money.
I read that Intel is getting out of the dGPU space, but then again, their iGPUs are really getting good. I can't understand why they'd give up the space when the AI market is so insane.
timschmidt 13 hours ago [-]
Rumors of their exit from dGPU predate Battlemage, so I wouldn't put a ton of credence in them. But Intel is quite talented at snatching defeat from the jaws of victory.
yurishimo 13 hours ago [-]
I hope not. They’ve been flip flopping too much and the market needs more dGPU competition.
The team working on drivers is doing a good job playing catch-up, and I hope Intel will continue to invest in cards that focus on graphics workloads and not just on AI inference.
ycui7 4 hours ago [-]
Exiting dGPU for gaming, but staying in the LLM world.
cubefox 13 hours ago [-]
Why are they still using their old Xe2/Battlemage architecture rather than the new Xe3/Celestial? They already used the latter in their Panther Lake chips.
It looks like the B70 was delayed 1-2 years for some reason.
driverdan 14 hours ago [-]
From what I've read, the Intel drivers are terrible and are holding back their use for LLMs.
martinald 14 hours ago [-]
Don't think that's true. The drivers are bad (not sure terrible is fair; they have improved a lot), especially for older DirectX etc. games. But Vulkan support is pretty good, and that's all you need for LLMs really.
marshray 13 hours ago [-]
I don't know about LLMs, but I tried an Intel card when Ubuntu Wayland couldn't initialize a 2 year old Nvidia. It just works.
lukan 12 hours ago [-]
That is just Linux and politics. Linux wants to force vendors to open-source their drivers; Intel plays along, but Nvidia as the market leader does not, so you have to use their proprietary driver, which most distros do not ship by default.
driverdan 11 hours ago [-]
Interesting. I had read that Intel's Linux drivers were far behind their Windows versions. I haven't checked in a few months though.
otherme123 4 hours ago [-]
That is compatible with the comment you are replying to: you don't need much to beat Nvidia's open drivers on Linux. Intel's Linux drivers might be behind their Windows versions, yet still ahead of Nvidia's.
Nvidia has zero incentive to play open on Linux: they release the binary blobs, next to zero docs and support, and you deal with it. The last Nvidia card I bought was 20 years ago, and it was so bad on Linux (low perf and freezes with the open drivers; manual reinstall hell and praying on each kernel update with the binaries) that I switched to ATI. Since then, ATI or Intel have always been decent, with zero headaches.
999900000999 14 hours ago [-]
Everyone has terrible drivers here aside from Nvidia.
Intel looks like they'll leave the dedicated GPU space, so it's a bit doubtful if the drivers will ever catch up.
reallytD91 6 hours ago [-]
What makes you think Intel will leave the GPU space?
I've seen several stories like this. Which is a shame since Intel offers the best value GPUs on the market.
I guess it's possible they'll still make workstation GPUs while skipping the consumer market.
luckydata 11 hours ago [-]
This review was essentially pointless: they reviewed the card for a ton of workloads nobody in their right mind would pick it for, and left out the only use case where it makes sense. Great job?
CoastalCoder 14 minutes ago [-]
You may have a valid technical point.
If you find a friendlier way to phrase it, you may find more people willing to discuss it.