Why this?

After a year (a bit more, actually!), I’ve decided to give another chance to this 20-year-old new-kid-on-the-block¹ that is Wayland on my Plasma desktop, and to give my previous entry on this topic a follow-up.

How Do You Do Fellow Kids

The good surprise

Actually, a lot happened during 2025! Some issues and shortcomings I had previously documented have been fixed:

  • Firefox (since release 146) now natively supports fractional scaling, including sharp text rendering, save for the occasional right-click popup menu that can still be a bit blurry.
  • Qt 6.11 (newly released) finally fixes the abysmal scrolling performance in LibreOffice that had previously made it unusable on 4K displays.

In general, Plasma now feels pretty snappy when run on Wayland on my discrete NVIDIA GPU (even snappier than on X11!), whereas it used to stutter a lot.

It is (still) not all rosy

There is, however, still one major regression I experience on Wayland, admittedly caused by those deliciously crappy proprietary NVIDIA drivers: VRAM management is AWOL.

This particular issue has been discussed since 2023 (!!) on the NVIDIA forums (here, for instance) and is tracked, among other places, through this egl-wayland issue.

What does it mean in practice? Whenever an application requests some GPU buffers, everything runs fine as long as there is still dedicated VRAM available. But once VRAM is full, any such request fails, meaning any application can randomly crash. This is not how things are supposed to work! Under VRAM pressure, the driver is supposed to move existing buffers to host RAM to make room for the new request. This degrades performance for sure, but it’s still a vastly better alternative to crashing an entire desktop session … and it is exactly how things work on frigging X11 … or cough cough Windows, for that matter.
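To make the failure mode concrete, here is a minimal sketch in CUDA of what graceful degradation is supposed to look like (the 256 MiB chunk size and the messages are made up): keep allocating VRAM until the device is full, then retry in host RAM instead of treating the failure as fatal.

```c
// Minimal sketch (CUDA; the 256 MiB chunk size is made up): fill VRAM until
// an allocation fails, then fall back to host RAM instead of giving up.
// This is the behaviour one would expect from the driver itself; the
// Wayland/EGL buffer path currently just returns an error to the client.
#include <stdio.h>
#include <cuda_runtime.h>

int main(void) {
    const size_t chunk = 256UL << 20;   /* 256 MiB per allocation */
    void *buf = NULL;

    for (int i = 0; ; ++i) {
        if (cudaMalloc(&buf, chunk) == cudaSuccess) {
            printf("chunk %d: allocated in VRAM\n", i);
            continue;                   /* intentionally never freed: we
                                           want to exhaust device memory */
        }
        cudaGetLastError();             /* clear the allocation error */

        /* VRAM is exhausted: degrade gracefully to (slower) pinned host
           RAM rather than crashing the whole session. */
        if (cudaMallocHost(&buf, chunk) == cudaSuccess)
            printf("chunk %d: VRAM full, fell back to host RAM\n", i);
        else
            fprintf(stderr, "chunk %d: host allocation failed too\n", i);
        break;
    }
    return 0;
}
```

Nothing forces a Wayland client to do this dance itself, of course; the point is that the driver is supposed to do the equivalent transparently, as it already does on X11.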

Now, you’re going to tell me this must surely be a corner-case occurrence. Who in their right mind fills up their VRAM like that? Probably some daft gamer dual-screening some browsers and video players (hmm …). But there is a totally legit way to trigger it! I’m a big fan of running LLM inference locally, for privacy and ethical reasons, and the rule of the game is to maximize GPU usage to get acceptable performance. And oh boy, how many times have I crashed my desktop by being a bit too eager when loading up models too large for my own good …
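Until the driver learns to evict buffers, a cheap defensive habit is a pre-flight VRAM check before committing to a full GPU offload. A minimal sketch, again in CUDA; the 7 GiB model size and the 1 GiB compositor headroom are hypothetical numbers, not values any real loader uses:

```c
// Minimal sketch (CUDA): query free VRAM before loading a model, and keep
// some headroom for the compositor. The sizes below are hypothetical.
#include <stdio.h>
#include <cuda_runtime.h>

int main(void) {
    size_t free_b = 0, total_b = 0;
    if (cudaMemGetInfo(&free_b, &total_b) != cudaSuccess) {
        fprintf(stderr, "could not query VRAM\n");
        return 1;
    }

    const size_t model_bytes = 7UL << 30;  /* pretend 7 GiB of weights */
    const size_t headroom    = 1UL << 30;  /* ~1 GiB for the desktop   */

    printf("VRAM: %zu MiB free / %zu MiB total\n",
           free_b >> 20, total_b >> 20);

    if (free_b < model_bytes + headroom) {
        printf("not enough headroom: offload fewer layers to the GPU\n");
        return 1;
    }
    printf("safe to load the model fully on the GPU\n");
    return 0;
}
```

Inference runtimes typically expose an equivalent knob (llama.cpp’s --n-gpu-layers, for instance), so backing off a few layers is usually enough to keep the session alive.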

Conclusion (for now?)

This NVIDIA issue is certainly super annoying (I guess my next GPU will be from team red …). But I must admit the whole Wayland situation is actually getting better by the day, and I was a bit too pessimistic previously; I have been running a Plasma Wayland session for a few weeks now and overall have been pretty happy with it.


  1. Started in 2008! ↩︎