NHacker Next
How fast is a macOS VM, and how small could it be? (eclecticlight.co)
fouc 24 hours ago [-]
>Starting with 4 virtual cores and 8 GB vRAM, where the VM ran perfectly briskly with around 5 GB of memory used, I stepped down to 3 cores and 6 GB, to discover that memory usage fell to 3.9 GB and everything worked well. With just 2 cores and 4 GB of memory only 3.1 GB of that was used, and the VM continued to handle those lightweight tasks normally.

Good reminder that there's a certain amount of memory tied up with each core (probably mainly page cache and concurrency handling etc).

adrian_b 21 hours ago [-]
As a general rule, the amount of physical memory installed in a computer should also be proportional to the number of hardware threads provided by its CPU.

Besides the fact that the operating system may allocate some memory for each thread, when you launch a multi-threaded application that can use all available threads (for instance, the compilation of a big software project), it will frequently allocate working memory in proportion to the number of worker threads.

I have encountered many multi-threaded applications that need up to 2 GB per thread to work well.

This corresponds to having 64 GB for a desktop CPU with 32 threads, like Ryzen 9 9950X.

For the compilation example, I have seen software projects, like Chrome/Chromium and its derivatives, where if you do not have enough memory in proportion to the number of hardware threads (e.g. only 32 GB for a 16-core/32-thread CPU), you must reduce the number of concurrent compilations with an appropriate parameter to "make -j", leaving some threads and cores idle, because otherwise you may encounter out-of-memory errors.
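That sizing rule is easy to sketch. A back-of-the-envelope helper, not a measured model: the 2 GB/job default is the rule of thumb from this comment, and `safe_make_jobs` is a hypothetical name.

```python
import os

def safe_make_jobs(total_ram_gb, ram_per_job_gb=2.0, hw_threads=None):
    """Cap the `make -j` value so jobs * per-job RAM fits in physical RAM."""
    if hw_threads is None:
        hw_threads = os.cpu_count() or 1
    jobs_by_ram = max(1, int(total_ram_gb // ram_per_job_gb))
    return min(hw_threads, jobs_by_ram)

# 32 GB on a 16-core/32-thread CPU: RAM, not thread count, is the limit.
print(safe_make_jobs(32, hw_threads=32))  # 16
# 64 GB feeds all 32 threads at 2 GB/job.
print(safe_make_jobs(64, hw_threads=32))  # 32
```

Heavy projects like Chromium may need a larger per-job estimate than 2 GB; the point is only that the cap should come from whichever of RAM or threads runs out first.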

embedding-shape 18 hours ago [-]
Compiling flash-attn (Flash Attention) is another great stress test for CPU+RAM, as just using 16 threads can already balloon you into 128 GB RAM usage territory. Same thing with needing to limit concurrency when compiling it.
cjbgkagh 18 hours ago [-]
I have this problem with NixOS, as one of my build servers doesn’t have enough RAM. There doesn’t seem to be a way to know whether a compilation is likely to be RAM-heavy and then either use a tagged server with more RAM or use fewer threads on servers with less RAM.
Neywiny 12 hours ago [-]
It's an important point. I went from 4c/8t and 32GB to 16c/32t and 96GB: dramatically less memory per thread. Some software (looking at you, Vivado) can take incredible amounts of memory per parallel job, meaning some projects can only run with a subset of my cores. At least until I stepped up my work laptop to 10.66 GB/thread. That seems to be manageable.
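The ratio being compared here is just total RAM over hardware threads; a trivial sketch with the figures from this comment (`gb_per_thread` is just an illustrative name):

```python
def gb_per_thread(ram_gb, hw_threads):
    """Back-of-the-envelope RAM available per hardware thread."""
    return ram_gb / hw_threads

print(gb_per_thread(32, 8))   # 4.0 GB/thread on the 4c/8t machine
print(gb_per_thread(96, 32))  # 3.0 GB/thread on the 16c/32t machine
```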
realo 20 hours ago [-]
Yes! I have also observed that with compilation VMs on a big server.
fulafel 21 hours ago [-]
I'd bet on the null hypothesis: the memory behaviour changes would hold if the core count were kept constant and only the VM's memory size were adjusted.
brookst 21 hours ago [-]
Agreed. This is the OS adapting to available memory.

Similarly if you started with 4GB and there was 900MB available for user apps, I expect you could launch apps that consume 1500MB just fine; the OS is leaving enough to launch anything, and making use of unused memory for cache/etc.

dmitrygr 20 hours ago [-]
There is a per-cpu data structure in the xnu kernel, but it is not big enough to tilt the scales when you are talking about RAM in units of gigabytes.
pdpi 18 hours ago [-]
It’s not just the kernel. I wouldn’t be surprised if there’s a fair few userspace services spawning a thread per core.
wutwutwat 21 hours ago [-]
There is some overhead per-core, you're right, but imo this reduction in usage is likely from how the kernel allocates available memory, which is being reduced as well. The kernel will keep read caches around longer with more memory, it'll prefer to compress memory instead of swap to disk if it has more, it'll purge/cleanup reclaimable memory less often with more memory, etc. It even scales its internal buffer sizes and vnode tables depending on total memory.

All good things imo, it dynamically makes the most of what is available, at the expense of making it harder to see a true baseline of hard min requirement to operate.

Fun thing to check: `vm_stat`

    $ vm_stat
    Mach Virtual Memory Statistics: (page size of 4096 bytes)
    Pages free:                    230295.
    Pages active:                 1206857.
    Pages inactive:               1206361.
    Pages speculative:              31863.
    Pages throttled:                    0.
    Pages wired down:              470093.
    Pages purgeable:                18894.
    "Translation faults":        21635255.
    Pages copy-on-write:          1590349.
    Pages zero filled:           11093310.
    Pages reactivated:              15580.
    Pages purged:                   50928.
    File-backed pages:             689378.
    Anonymous pages:              1755703.
    Pages stored in compressor:         0.
    Pages occupied by compressor:       0.
    Decompressions:                     0.
    Compressions:                       0.
    Pageins:                       832529.
    Pageouts:                         225.
    Swapins:                            0.
    Swapouts:                           0.

edit: no code fence markdown support or am I doing something wrong?
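For scripting, counters like those can be converted to bytes by reading the page size from the header and multiplying it into each `name: count.` line. A quick sketch; the `parse_vm_stat` helper is hypothetical, written against sample output in the format shown above:

```python
import re

def parse_vm_stat(text):
    """Parse `vm_stat`-style output into {counter name: bytes}."""
    page_size = int(re.search(r"page size of (\d+) bytes", text).group(1))
    stats = {}
    for name, pages in re.findall(r'^"?([\w -]+)"?:\s+(\d+)\.', text, re.M):
        stats[name.strip()] = int(pages) * page_size
    return stats

sample = """Mach Virtual Memory Statistics: (page size of 4096 bytes)
Pages free:                    230295.
Pages wired down:              470093.
"""
print(parse_vm_stat(sample)["Pages free"] // 2**20, "MiB free")  # 899 MiB free
```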

schrodinger 17 hours ago [-]
Single inline backticks like `this` aren't recognized (although still useful in my opinion, they just don't change the rendering).

Triple backticks also aren't recognized. However, if you indent by (I believe) 4 spaces, it formats the text in a fixed-width font, presuming it's code.

Let's try (4 spaces):

    func main() {
        fmt.Println("Hello, HN!")
    }
No indentation, for comparison:

func main() { fmt.Println("Hello, HN!") }

wutwutwat 11 hours ago [-]
Seems I missed the window to be able to edit my message, but I'll remember this info for next time, thanks!
Havoc 23 hours ago [-]
Got an M5 Air recently - my first dive into macOS land, so trying to figure this out too.

Seems essentially impossible to get:

* pytorch

* GPU acceleration

* VM/container like isolation

The virtio-gpu layer gets closest, but it seems to only pass through the graphics side of the GPU, not compute, so no PyTorch.

plufz 22 hours ago [-]
I need this too, and looked into it quite a lot a year ago. I haven’t had time to check out the recent developments with Docker Model Runner (vllm-metal) or podman libkrun. Did neither of those work for you?
Havoc 21 hours ago [-]
vllm-metal isn't GPU access but rather an OpenAI-compatible endpoint, which I can already do via an LM Studio endpoint over the network

>podman libkrun

Haven't tried it, but research suggests it's still really shaky. podman libkrun exposes Vulkan, while torch expects MPS on Macs. Sounds like one can force Vulkan, but that's apparently slow and beta-ish?

emmelaich 21 hours ago [-]
I got torch to run in a Cirruslabs Tart instance.
Havoc 18 hours ago [-]
By "Instance" do you mean their cloud platform?
emmelaich 9 hours ago [-]
Nah, just locally on my macair.

TBF, I only got to the point that using device=mps_device didn't fail. I used Sonoma at the time and the image for the vm was ghcr.io/cirruslabs/macos-sequoia-xcode:16.2-beta-3. Python 3.12, as well, because torch didn't work with later versions.

   import torch
   mps_device = torch.device("mps")
   print('device is', mps_device)
   x = torch.ones(1, device=mps_device)
   print(x)
adastra22 2 hours ago [-]
brew install tart
mgaunard 23 hours ago [-]
My only experience with VMs on macOS is colima+docker, and it's relatively painful and inefficient (but usable).
woadwarrior01 21 hours ago [-]
Try Apple's container CLI. I moved a project of mine from colima+docker to it relatively easily, a couple of weekends ago.

https://github.com/apple/container

highpost 17 hours ago [-]
Here's an example of how to build a simple Alpine Linux container using Apple's containerization CLI. It also demonstrates how to connect to the container through Tailscale SSH using a Tailscale auth key stored in Apple Keychain:

https://github.com/highpost/tailscale-macos-container

sagarm 17 hours ago [-]
Does this project aim for Docker CLI and API compatibility? Searching for Docker on that page yields no results, though their example does show a Dockerfile referencing docker.io without shame.

Typical Apple behavior, I guess, but grating to see in an OSS tool.

troad 9 hours ago [-]
This is a weird take, imho. Should they feel shame for using Dockerfiles in their OCI-standard-compliant tool? Would you be happier if they introduced subtly incompatible Applefiles?

Why are they obliged to emulate the Docker CLI? This limits them to just shadowing someone else's product. Just use Docker if you want their CLI/API, it uses the same virtualization framework under the hood on Macs.

copperx 17 hours ago [-]
I'm curious to know what kind of project is macOS exclusive?
troad 9 hours ago [-]
You're surprised that a project by Apple Inc that is basically a wrapper around the Mac virtualisation framework [0] is Mac exclusive?

[0] https://developer.apple.com/documentation/virtualization

pram 16 hours ago [-]
container is really good. I've been using it to sandbox some CLI tools, and it starts up in less than a second
ngai_aku 15 hours ago [-]
AFAIK no support for Compose though
yokoprime 20 hours ago [-]
Thank you for this, will check it out!
embedding-shape 23 hours ago [-]
Recently got a Mac Mini for local CI purposes (together with Forgejo Actions), took a broad look at the ecosystem, and decided to just roll with "build on host" instead. Setting up signing/notarization looked like an insurmountable task together with isolating it from the host, even with agents. At least the macOS builds are really fast now and the signing/notarization is just ~200 lines of Bash...
latexr 23 hours ago [-]
> the signing/notarization just ~200 lines of Bash

200 lines?! That’s two orders of magnitude too many. What exactly are you doing that you need so much code for signing and notarisation?

embedding-shape 21 hours ago [-]
Off the top of my head: unlocking the keychain, finding the right identity, notarizing two parts (the binary itself and the .dmg that the .app ships in), and some other stuff, I'm sure. Can do a deeper look in a bit when I can. Most of the hassle is because it's 100% unattended: I had to do stuff to avoid GUI prompts for passwords/unlocks, and the Forgejo Runner has a different security context.
latexr 1 hours ago [-]
> unlocking the keychain, finding the right identity

You don’t need to do that, you can give options to the CLI to define what profile to use.

> Most of the hassle is because it's 100% unattended and I had to do stuff to avoid GUI-prompts for passwords/unlocks

I have a shell function to which I point my code and it compiles, signs, and notarises it without any more intervention, GUI or password prompts, and I’m pretty sure signing and notarising are literally two lines.

Unfortunately I’m not at my computer now or I’d paste them, but from your description that script is definitely too long.

saagarjha 38 minutes ago [-]
I assume you're using notarytool, but I doubt it will work unless you have your keychain unlocked
hamandcheese 20 hours ago [-]
This matches my experience. Keychain + fully unattended increases the complexity and adds a bunch of landmines that need to be dodged (e.g. GUI prompts like you mentioned).
yohannparis 23 hours ago [-]
Could you share your recipe, please? I’m interested
isityettime 20 hours ago [-]
OrbStack is pretty good. I don't find it inefficient, really.
CraigJPerry 16 hours ago [-]
OrbStack is impressive on the performance and energy efficiency fronts. I'm not aware of anything that comes close. But they're doing something funky under the covers. You can't just start any OS in a VM. It has to be somehow mangled to suit their VM. Thankfully NixOS is available so I'm fine for my use cases. It's still remarkable how efficient it is.
isityettime 16 hours ago [-]
Yeah, it's like WSL. It starts just one VM and then your individual "machines" are LXC containers underneath. If you peek at the vendor-supplied file your NixOS OrbStack Machine includes you can see some of it.

They're constantly doing other optimizations in other ways, too. But that's the one you were pointing at, I think.

mgaunard 15 hours ago [-]
That's also what Colima does.

OrbStack isn't open-source though and I can't justify buying a license for every single person in my company just for something functionally equivalent but performing better.

These kinds of things should just be provided by Apple as a first-class thing.

nottorp 1 day ago [-]
> Starting with 4 virtual cores and 8 GB vRAM, where the VM ran perfectly briskly with around 5 GB of memory used

But... if you start applications inside your VM, won't it want the full 8 GB you've allocated, not the 5 GB it uses at startup?

stingraycharles 1 days ago [-]
I don’t assume that macOS virtualization is advanced enough to support memory ballooning, or is that not what you’re referring to?

Edit: I stand corrected!

pyth0 24 hours ago [-]
I don't assume anything either, but a single Google search is enough to dispel that [1]

[1] https://developer.apple.com/documentation/virtualization/vzv...

saagarjha 25 minutes ago [-]
Not supported on macOS
sgt 24 hours ago [-]
macOS is generally pretty amazing at efficient memory usage and VM (virtual memory subsystem) handling. So even an 8 GB machine can run pretty impressive workloads without the user thinking the machine is underpowered.
stingraycharles 20 hours ago [-]
Important caveat: that’s mostly the case for desktop workloads when you’re multitasking a lot, and not as much for server workloads where you actually need all memory.
p_ing 23 hours ago [-]
Not really. Larger page sizes mean more potential for wasted memory, and it has had a long-standing memory leak in some core component, to the point where even Calculator can cause an OOM event.
jdiff 22 hours ago [-]
GP is pretty accurate in my experience. Up until last year I was still running an Intel MacBook Pro with 8GB of RAM and successfully multitasked with Blender, Illustrator, Unity, VS Code, and Firefox quite often. The math doesn't make sense, but all stayed responsive even with frequent hops between them. The only OOM events I ran into were memory leaks from Firefox, I believe from an extension.
p_ing 21 hours ago [-]
There's nothing particularly interesting about that. Linux distro-of-your-choice can run the equivalents fine, as can Windows.

Browse /r/macos if you dare to wade into the uninformed cesspool; it's full of OOTB apps causing OOMs (among 3rd party apps) with the past at least two major versions of macOS.

jdiff 21 hours ago [-]
I think there is something interesting there. I was running lighter workloads on similar RAM when I daily drove Debian and was frequently brought to my knees by swapping to death. I had to make conscious choices and manage my RAM usage to avoid it, and still occasionally got T-boned by something I overlooked. I have never had to worry about that with macOS.

I admit I don't have much experience with how Windows handles constrained memory since XP, and XP was abysmal at it just by virtue of being far more bloated than an equivalent Linux distro. It's certainly far more bloated nowadays, but maybe it handles memory pressure better.

None of this should be construed to say that macOS doesn't have serious issues or that it's not in dire need of a Snow Leopard-esque "0 new features" release. That's tangential to its memory handling, where I haven't seen the issues you describe.

p_ing 21 hours ago [-]
Even NT4 handles memory pressure better than modern-day Linux. It's just not a fair comparison; Linux has never dealt with userspace OOM well.

As for macOS...

https://old.reddit.com/r/MacOS/comments/1njf1aj/bravo_apple_...

https://old.reddit.com/r/MacOS/comments/1nxh08n/impressive_m...

https://old.reddit.com/r/MacOS/comments/1jo5pnq/passwords_ap...

https://old.reddit.com/r/MacOS/comments/1gkwxe4/how_is_memor...

https://old.reddit.com/r/MacOS/comments/1seq0ij/freeform_has...

There are _plenty_ more. There is some fundamental library leaking given the range of impacted apps.

sgt 20 hours ago [-]
Seeing as there are thousands running those apps (incl. Freeform) without memory leaks, it could be something else at play here.
p_ing 20 hours ago [-]
It's quite clearly a bug and likely not one easy to diagnose or reproduce given the length of time the bug has remained in macOS. Or a fix would be a drastic breaking change.

Or Apple doesn't really care, though I doubt that's the case.

saagarjha 37 minutes ago [-]
It seems rather easy to diagnose, actually: any of those users could use the heap profiling tools that ship with the OS
sgt 20 hours ago [-]
I mean, maybe there are faulty apps, but where do you get this idea from? The amount of IDEs, Docker containers, and all kinds of stuff you can run on macOS in just 16GB is astounding. And I've used this OS on the desktop for 23 years.
p_ing 19 hours ago [-]
It's not really that interesting in the landscape of OSes; modern (or even ancient) Windows and Linux distros have been doing these tasks simultaneously in one form or fashion since 16 GiB was seen as a lot of RAM.

See my other post for just a tiny amount of references to OOTB faulty apps.

sgt 17 hours ago [-]
My experience, along with thousands of others' (incl. 100 other Macs at work), is that of stability. You're saying the opposite based on some Reddit threads. Not sure what your intentions are. Yes, modern OS's might have issues in runtime, but of the top 3, I am pretty sure macOS is the most stable. Linux wins on the server side though.
p_ing 17 hours ago [-]
I'm happy for your personal experience; it clearly doesn't jibe with the numerous threads on the macOS subreddit about first- and third-party apps causing OOM issues (which macOS handles ungracefully, unlike NT).

> Not sure what your intentions are.

This is just a weird statement.

> Yes, modern OS's might have issues in runtime

/All/ modern general-purpose OSes have issues at runtime. Every last one of them. macOS isn't without its significant UX and other faults. It's OK to acknowledge them; this isn't a religion.

sgt 2 hours ago [-]
Looking at your comment history, it does seem you have an agenda. I don't get it. Your references are mostly anecdotes from random forums that back your claim. You'll find those for any OS or any piece of software if you go looking.
nottorp 24 hours ago [-]
What will that help with if the host and guest combined need > physical ram?
jdub 23 hours ago [-]
If guest memory can be reclaimed, it doesn't need to be paged to disk once you hit RAM contention. It's mostly saving accounting overhead, but it'll have some effect on latency, which you're more likely to perceive under contention.
nottorp 19 hours ago [-]
But if it can be reclaimed, it's not actually needed. So I'd find the minimum amount of configured RAM a macOS VM can boot with more significant than the actual usage while booted but doing nothing.
nasretdinov 24 hours ago [-]
Honestly, macOS could probably go much lower than that if you turn off some stuff that's not strictly necessary for a VM. The first iPhones only had 128 MiB of RAM, and they ran a trimmed-down version of macOS Tiger, I believe. It's just that RAM has been quite abundant so far, so there was no real reason to trim it down, but it's definitely possible, and probably not that hard either; we just need to start trying again :)
Therenas 20 hours ago [-]
Well, early iPhones did not have app multitasking, so that's quite the difference. Any app was killed when closed.
selectodude 18 hours ago [-]
Yes it did. You just couldn’t use it. I could send a text message while listening to music. Sometimes the music would crash due to OOM.
felixding 7 hours ago [-]
Maybe I'm nitpicking, but there is no such thing as "macOS Tiger". It was called Mac OS X at the time, so it's Mac OS X Tiger.
rurban 18 hours ago [-]
I think I got the smallest:

    $ podman image list | grep cross
    docker.io/gotson/crossbuild            latest      d96ea9b7054b  3 years ago   6.71 GB
used to cross-build to darwin.
sudo_cowsay 6 hours ago [-]
How do you VM it up? What tool do you use?
adastra22 2 hours ago [-]
Apple's built-in virtualization framework. For macOS guests, tart is probably the best out there. Apple's own `container` CLI tool for linux/docker-like containers.
collabs 22 hours ago [-]
I was hoping to see bare macOS with all the applications removed as much as possible: no graphical user interface, just the bare minimum to boot, log in as a user, and write hello world dot txt with a text editor. Or maybe some command-line apps? Or is it no longer macOS at that point?
jitl 21 hours ago [-]
You can boot regular macOS directly to a root terminal in "Single User Mode". This was easier on Intel Macs of yore but is also possible on M1+

Below content from https://eclecticlight.co/2020/11/28/startup-modes-for-m1-mac...

Launch 1 True Recovery, open Terminal, then run “bputil -a” (without the quotes) to downgrade system security and allow for more boot arguments. You might need to restart after this step.

Then, run [nvram boot-args="-s"] (without the square brackets). Restart to launch Single User Mode.

Once in Single User Mode, run these commands (in the following order) to mount the root volume group:

1. mount -P 1

2. /usr/libexec/init_data_protection

3. mount -P 2

Future restarts will always launch Single User Mode first. To stop launching Single User Mode, run [nvram boot-args=""] (without the square brackets).

To restore your system to full security, run “bputil -f” (without the quotes). If you choose to run that command in macOS, prefix “sudo” to the beginning.

hmry 22 hours ago [-]
"I'd just like to interject for a moment. What you're referring to as macOS, is in fact, macOS/Darwin, or as I've recently taken to calling it, macOS plus Darwin."

"What you're referring to as Darwin, is in fact, Darwin/XNU."

"What you're referring to as XNU, is in fact, BSD/Mach."

I seem to remember it being possible to run macOS-less Darwin several years ago, not sure if that's still possible or if Apple has modified it so much at this point that it's useless without at least some macOS components.

Terretta 21 hours ago [-]
> several years ago

2024, maybe? Needs some renewed interest, perhaps:

https://www.puredarwin.org/

chuckadams 21 hours ago [-]
Needs someone to pick it up: its project leader passed away last year.
colechristensen 21 hours ago [-]
https://github.com/apple/darwin-xnu

Apple stopped updating this 5 years ago.

I remember getting it to boot once long ago but I didn't have anything to actually do with it.

doubled112 21 hours ago [-]
Looks like it is still getting updates and has moved here: https://github.com/apple-oss-distributions/xnu
colechristensen 20 hours ago [-]
I now think of things in terms of token budget. I put my macOS VM aspirations on the back burner because the effort was taking up 100 GB of space and I made poor choices when it came to laptop specs. Now I'm thinking why not rebuild XNU, but I have other things I'd rather spend the tokens on. I don't want to delay other projects, so I'm giving up something stupid and fun.
yokoprime 20 hours ago [-]
Kind of a random question, but would it be feasible to Intune-enroll a macOS VM as a personal device?
bakoo 6 hours ago [-]
Maybe, but then likely only as BYOD. A company-owned enrollment setup requires linking up with Apple Business Manager.
jzer0cool 17 hours ago [-]
Is it possible to run macOS on a PC? Or at least dev in some way on a PC for the Mac.
userbinator 9 hours ago [-]
It's called a Hackintosh; there's plenty of information on that.
bigyabai 17 hours ago [-]
You can boot into macOS with QEMU, but you won't have hardware-accelerated graphics or a handful of other features.
copperx 17 hours ago [-]
Which features? Apple Pay?
MBCook 15 hours ago [-]
That requires the Secure Enclave, so I suspect that’s one of them.
danek_szy 15 hours ago [-]
iMessage and FaceTime too, among others ;)
JasonHEIN 20 hours ago [-]
I am so curious why no one makes an environment for agents specific to macOS. Like the agent spawns in a Mac env
dieulot 1 days ago [-]
I'm wondering if the Xcode simulator (without Xcode running) performs as well; my 2020 Intel MacBook Air has been incapable of running Safari in iOS smoothly for nearly all its life.
jitl 21 hours ago [-]
A MacBook Neo should run rings around any Intel Air: Geekbench shows it at 250% of the score of the 2020 Intel Air.

https://browser.geekbench.com/v6/cpu/compare/17022784?baseli...

MBCook 15 hours ago [-]
My M1 Air, which was my personal Mac, generally stomped my work MBP 2019 with an Intel chip.

The difference between the absolutely silent M1 and the hairdryer Intel was staggering.

I’m sure you’re completely right.

vessenes 20 hours ago [-]
You’re going to love that newfangled M1 chip.
llm_nerd 20 hours ago [-]
"We might hope that macOS would process AI tasks using the CPU and GPU rather than the neural engine, when running in a VM."

That specific Geekbench test is to measure the ANE performance, which they did by setting the CoreML run to cpuAndNeuralEngine. They could have set it to all and it would use any hardware available, but that would be counterproductive to a test that hopes to measure the ANE, no?

And note that there is no "just ANE" option. In this case it is probably the virtualized CPU side of the equation that's yielding the massive slowdowns for int8 and quantized runs.

The ANE isn't the problem here.

https://dennisforbes.ca/blog/microblog/2026/02/apple-neural-...
