  • As you see, there are 1000 different opinions, heh.

    My take is it’s about user patterns.

    Every distro has different maintenance expectations: a different tolerance for bugs, and a different approach to keeping stuff up to date and working. That’s the flavor difference: it’s all the same packages, just served to you in a different way.

    As an example, Arch Linux expects the user to pay attention to maintenance. Read their excellent wiki. Update frequently, and pay attention to errors and warnings when you do. There is one version of Python, so update your stuff to work with it. The “reward” for being so hands-on is stuff getting fixed quickly and automatically.

    CachyOS is just a preconfigured version of this, with presets and experimental features tailored for gaming. But it’s largely not divergent from the underlying Arch system: you could switch an Arch install over to CachyOS packages with zero fuss, as sketched below.
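    As a rough sketch of what that switch looks like (the repo name and mirrorlist path here are placeholders from memory; CachyOS’s official cachyos-repo.sh script sets this up properly, including their optimized x86-64-v3/v4 repos):

        # appended to /etc/pacman.conf, above the stock [core]/[extra] entries
        [cachyos]
        Include = /etc/pacman.d/cachyos-mirrorlist

    A full upgrade then swaps you over to their rebuilt packages:

        sudo pacman -Syuu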

    Contrast with Ubuntu. It is meant to be more “hands off” with staged and delayed updates. There are many versions of Python present in the same system, so old stuff works without changes. But the consequence is you may have to live with certain problems you run into, or risk breaking your system trying to fix them.

    Fedora is somewhere in between, with the addition of an emphasis on free software. And a consequence of that is, for instance, no first-party support for Nvidia. Bazzite builds on top of that by extensively modifying it into a stable platform for gaming, but you’re also dependent on a relatively small group of maintainers.


    So I guess one question is: how involved with your computer do you want to be?


  • Thanks for the ideas! Hopefully I can push the graphics up without turning into a pile of lava. I need to figure out how to record graphics power consumption so I have a reference for evaluating changes.

    It’s far more efficient to just TDP-limit your GPU than to lower settings trying to get power consumption (and laptop fan speed) down. The GPU will stick to slightly lower clocks, which helps disproportionately: lower clocks also mean lower voltage, and dynamic power scales roughly with the square of voltage (P ≈ C·V²·f).

    Otherwise it will always try to boost to 100W anyway.

    You can do this easily with MSI Afterburner, or on Windows with just the command line. For example, nvidia-smi -pl 80 will set the power limit to 80W (until you restart your PC). nvidia-smi by itself will show the current state and limits.
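    And for recording power consumption to compare changes, nvidia-smi can log it too. A minimal sketch (the -pl step needs an admin/elevated shell, and 80W is just an example for a 100W laptop card):

        nvidia-smi -pl 80                                                 # cap board power at 80W (resets on reboot)
        nvidia-smi --query-gpu=power.draw,clocks.gr --format=csv -l 1    # log power draw and clocks once a second, handy for A/B tests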

    I do this with my 3090, and dropping from the default 420W to 300W hardly drops performance at all without changing a single graphics setting.

    Alternatively, you can hard-cap the clocks to your GPU’s “efficient” range. For my 3090 that’s somewhere around 1500-1700 MHz, and TBH I do this more often, as it wastes less power on the GPU clocking up to uselessly inefficient voltages, but still lets it “power up” for really intense workloads.
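    nvidia-smi can set that cap too, by locking the graphics clock range. A sketch using my 3090’s range from above (needs an admin/root shell, and driver support varies by GPU generation):

        nvidia-smi -lgc 1500,1700   # lock graphics clocks into the 1500-1700 MHz range
        nvidia-smi -rgc             # undo it: reset clocks to default behavior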

    FYI you can do something similar with the CPU too, though it depends on the model and platform.
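    On mobile Ryzen chips like the 4900HS, the third-party RyzenAdj tool is one way to do it; desktop parts are usually handled through PBO/eco mode in the BIOS instead. A rough sketch (values are in mW, flag support varies by model, and the 35W figure is just an example):

        sudo ryzenadj --stapm-limit=35000 --fast-limit=35000 --slow-limit=35000   # cap sustained and boost package power at 35W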


  • Also, you might be able to fix that!

    I clock-limit my 3090 to like 1700-1750 MHz with nvidia-smi (built into the driver), since anything faster is just diminishing returns. You might check what “stable clocks” your 3070 runs at, cap them slightly lower, and even try an undervolt as well.

    Be sure to cap the frame rate too.
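    On Linux, MangoHud’s limiter is one way to do that (on Windows, RTSS/Afterburner or the in-game cap works). A sketch, assuming you launch the game through Steam and 60 FPS is the target:

        MANGOHUD_CONFIG=fps_limit=60 mangohud %command%   # Steam launch option: overlay plus a 60 FPS cap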

    Do that, and you might be able to handle RT reflections and similar settings without much noise. The hit for just that setting is modest on my 3090, but much heavier with the full “low” RT preset.


  • Sometimes you don’t know what you’re missing though.

    As an example, I figured out (on a 4900HS CPU/2060 GPU) that Stellaris and modded Rimworld game ticks are on the order of 40% slower running Linux-native, and still slower (though less dramatically so) under Proton. There was zero public information on this until I tested it myself.

    As another example, modded Minecraft is dramatically faster on Linux.

    They run fine, yeah, but game settings are kinda capped by CPU performance in all these titles. I don’t need to know the difference, but I’d like to, hence I’m wondering about CP2077 from the opposite side: am I missing out on a boost from Linux?


  • You raise an excellent point.

    TBH I am both lazy and a bit paranoid/afraid of dealing with Nvidia rendering issues (even if using my IGP for desktop work), but it would probably be fine and I’m… just being lazy and paranoid.

    I don’t think it would make it worse for compute work.

    An external third partition does sound appealing, though one quirk is that CP2077 really likes SSDs. I have a slow external SSD, but it still might muddy an A/B test.


  • Thanks! Though it doesn’t mean much without a Windows reference :P

    I’m pushing my poor 3090 to 4K with just RT reflections but a bunch of mods, and I’m generally getting over 60 FPS with no framegen (which is my target).

    FYI I found the game actually looks better with most of the RT disabled:

    • RT shadows tend to be blocky and flicker, while raster shadows “miss” more shadows but are razor sharp and stable.

    • RT lighting is neat for, say, reflecting a neon billboard, but I find it often clashes with the built-in raster lighting. For instance, it turns neon signs into blobs and messes up the Arasaka atrium in the intro.

    • RT reflections look incredible, especially in rain. No downside as far as I can tell.

    • Path tracing is a whole different ballgame my card can’t handle. But (when modded to fix it) it’s apparently extra incredible, and it basically disables all the other in-game settings.

    Check out the Digital Foundry video too, which shows some of this.


  • 7800X3D, Nvidia 3090, CachyOS, the latest Arch kernel with whatever tweaks they have, and I assume git Proton and all the distro’s riced settings. On CP2077’s side, I’d like RT reflections and DLSS as the only exotic settings, though I did run a mod that hacks in FSR 3.1 framegen.

    I realize I probably have to test this myself, heh. But from what I gather (and from past experience with a laptop 2060 on Linux), Nvidia is disadvantaged on Linux in this scenario.