
KickStarter System Shock 1 Remake by Nightdive Studios

Joined
Jan 5, 2021
Messages
441
Proton is just Wine/Wine-staging + DXVK + VKD3D + etc. prebundled. I'm using wine-staging with DXVK, VKD3D, etc. manually installed (plus a few codecs that Valve can't legally distribute) and I can run pretty much the same games as Proton. There is no game-compatibility monopoly: everything Proton has is fully open source and already available to anyone willing to spend a few minutes setting it up (I even wrote a small script to create my own "prefixes" with these). Proton via Steam is just convenience.

From a purely technical standpoint you are correct, but that's also not how the industry works. It really doesn't matter if it's possible to set up DXVK, Wine, etc. manually. If you're publishing a game that you want people to play, and it doesn't run out of the box with Steam's built-in Proton compatibility options, you're not going to sell your game (at least not to Linux users).

Whether or not the software is open source, Steam's market position gives them a virtual monopoly on the development of games for Linux. They make all the decisions about which libraries and programs are bundled with Proton, and that's what every developer who wants Linux compatibility is going to target.

Want your game to run on the average person's Steam Deck? Either target SteamOS natively or target a specific Proton version. Those are your only real options.

Aside from a few incredibly rare exceptions, the overwhelming majority of game developers never cared much about Linux support. Many of those exceptions even farmed the ports out to external companies, sometimes resulting in subpar ports (e.g. lower visual fidelity). And a few years down the line, getting "native" Linux games to run properly becomes more of a chore, fiddling with ABI changes in the various .so files, than getting the Windows version to run via wine-staging.

IME Proton has been nothing but positive for Linux. A good Linux native version by a developer who manages their dependencies properly can technically be a better choice, but those were always incredibly rare.

The thing is, native Linux support was slowly gaining traction in 2017-2018. Companies like Feral were being given more and more contracts to port more and more games, even AAA releases.

The Linux ecosystem was small but getting healthier. Valve had created the Steam Runtime environment, which added a lot of consistency and portability to Linux games as well. Obviously it sucked only being able to run a handful of games, and it's much nicer now that a significant portion of my library works, but I feel this is worse for the overall long-term health of Linux.

Proton (and Wine etc.) is and always will be a middle-man between games and the operating system. Because of this it will always have issues with certain games, will always be one step behind (Microsoft can drop a new DirectX version at any time, which then needs to be supported), and funneling Windows programs through a compatibility layer will always be a crapshoot. Obviously being able to run games that don't have native Linux ports is superior to not being able to run them at all, but my fear is that "just run it through Proton" is going to become the de facto strategy for "porting" games to Linux for the foreseeable future. This is already happening, with far fewer games getting official ports because they just add Proton compatibility instead. I see this as a significant problem because of all the issues mentioned above, plus it disincentivizes distros from actually getting their shit together and fixing the stupid dynamic-linking hell that makes porting to Linux such a pain in the first place.

I've noticed that the developers of games who actually care about real Linux support tend to have a different attitude to creating their game. They rely less on bloated libraries and tend to write more code themselves, since big libraries tend to be very OS-dependent. This gives them several advantages beyond Linux support (it also protects them against whatever new version of Windows may come out, and avoids dependencies on libraries from companies that may collapse). By simply shunting everything through Proton, a lot of developers are not learning this lesson, which of course makes games less likely to run through Proton overall, because Proton may not support their particular shitty library yet. This is why everyone is using horrible rootkit-level anticheat and garbage DRM all the time. It was a total pain to get that stuff working in Proton and required multiple kernel-level hacks. That might work for now, but it's not a solution, and if we don't fight for proper ports, developers are going to keep doing this until it stops working all of a sudden for whatever reason.

Proton is a very useful piece of software, but it's not a solution. Even now there are games that worked fine two years ago but now require specific versions of Proton because of changes in newer versions. How bad is it going to be in 10 years? 20? Proton is never going to be as good as a proper native solution, yet it's so easy and convenient that it's become the default path for most developers, since they essentially get Linux support "for free" with little effort. Too bad it entrenches the DirectX monopoly and generally results in less reliable software overall. But nobody cares. Linux users are selling out their future because it's nice to have more games working now.

Linux gaming is never going to be healthy as long as Linux gaming is just a poorly emulated version of Windows gaming (yeah, I know, Wine Is Not an Emulator, shut up). And as long as distros can continue to find excuses to not ensure their libraries are stable, Linux is never going to be a viable OS, no matter how many different systems we add to work around the issue (Steam Runtime, Proton, Flatpak, etc.; it's all the same garbage and it all needs to go). Windows has a very stable API, to the point where software written 30 years ago can still boot up and run in a significant number of cases. The fact that Linux can't do this is embarrassing and unacceptable, and it needs to be fixed; throwing more compatibility layers at the problem isn't going to solve it. I'd argue Proton and other compatibility layers like it are drawing attention away from the problem in a really bad way.
 
Last edited:

Bad Sector

Arcane
Patron
Joined
Mar 25, 2012
Messages
2,280
Insert Title Here RPG Wokedex Codex Year of the Donut Codex+ Now Streaming! Steve gets a Kidney but I don't even get a tag.
From a purely technical standpoint you are correct, but that's also not how the industry works. It really doesn't matter if it's possible to set up DXVK, Wine, etc. manually. If you're publishing a game that you want people to play, and it doesn't run out of the box with Steam's built-in Proton compatibility options, you're not going to sell your game (at least not to Linux users).

Well, yes, if the game doesn't work on Proton and there isn't a native Linux version then you won't be able to play it. But the chances of an actual working, properly made Linux version existing are incredibly low, and the chances of Proton working just fine are way higher.

Want your game to run on the average person's Steam Deck? Either target SteamOS natively or target a specific Proton version. Those are your only real options.

You don't target specific Proton versions; Steam always uses the latest version of Proton. The goal of Proton is to be able to play all Windows games, and if a newer version cannot play an older game, that is considered a bug in Proton. Steam does allow you to use older versions because, realistically, such regressions do happen in practice, and it is better to give control to the user than make them hope/wait for some newer version that might fix the bug.

The thing is, native Linux support was slowly gaining traction in 2017-2018. Companies like Feral were being given more and more contracts to port more and more games, even AAA releases.

These ports were incredibly rare, and for many games they never came at all.

And honestly, Feral's ports weren't that great. In fact, when I bought the computer I'm using right now in 2018, I installed Linux as the only OS, and then bought Feral's port of Deus Ex: Mankind Divided to play on it. The port had some severe visual bugs (including, somehow, a very wrong FOV) and the performance was abysmal. I basically replaced Linux with Windows 10 because of how bad it was (and then a few months later Valve released Proton).

Because of this it will always have issues with certain games

In theory it may have issues, but in my experience getting games to work on Wine-staging and Proton has been a much smoother experience than doing the same on Windows.

Meanwhile I have a bunch of older native Linux games that simply do not launch at all because of the libraries.

Obviously being able to run games that don't have native Linux ports is superior to not being able to run them at all, but my fear is that "just run it through Proton" is going to become the de facto strategy for "porting" games to Linux for the foreseeable future.

If the games work, I do not see the issue here, really. Wine/Proton is just a binary loader with some extra compatibility libraries. It isn't like you are running an x86 game on an ARM CPU and need some sort of full-system emulation.

I've noticed that the developers of games who actually care about real Linux support tend to have a different attitude to creating their game. They rely less on bloated libraries

Developers who make their own engines these days and do their Linux ports themselves are so incredibly rare that they might as well not exist. Honestly, I think you'll find more games on Steam built on top of some SNES or Master System (or whatever) emulator than games on a custom engine with a Linux port.

Proton is never going to be as good as a proper native solution

Theoretically yes, since anything Proton can use, a native Linux binary can use too. But in practice Proton tends to work better in the long term.

And as long as distros can continue to find excuses to not ensure their libraries are stable

It is not so much an issue with the distros as with the developers of said libraries. Not only are there thousands of such developers who would need to be convinced to keep their ABIs stable, many of them do not consider this feasible or even desirable (ever tried to talk about backwards compatibility with a Gtk/GNOME developer? You'll have better luck trying to teach astrophysics to a brick; and yet these developers not only make a lot of the libraries that a TON of other software relies on, but also set the tone for what is acceptable in terms of development practices).

Instead of trying to convince 29833 library developers to stop breaking their libraries every couple of years, and then 213748427823 game developers to port their games to Linux and make sure they keep working, it is much easier, way more practical, and pretty much the only approach with a chance of succeeding, to just work on 2-3 projects that attempt to make Windows games work on Linux.
 

Azdul

Magister
Joined
Nov 3, 2011
Messages
3,476
Location
Langley, Virginia
And as long as distros can continue to find excuses to not ensure their libraries are stable

It is not so much an issue with the distros as with the developers of said libraries. Not only are there thousands of such developers who would need to be convinced to keep their ABIs stable, many of them do not consider this feasible or even desirable (ever tried to talk about backwards compatibility with a Gtk/GNOME developer? You'll have better luck trying to teach astrophysics to a brick; and yet these developers not only make a lot of the libraries that a TON of other software relies on, but also set the tone for what is acceptable in terms of development practices).

Instead of trying to convince 29833 library developers to stop breaking their libraries every couple of years, and then 213748427823 game developers to port their games to Linux and make sure they keep working, it is much easier, way more practical, and pretty much the only approach with a chance of succeeding, to just work on 2-3 projects that attempt to make Windows games work on Linux.
Nightdive should never have mentioned Linux or Macs in the Kickstarter. I don't know if it is lunacy, ignorance or dishonesty.

Back on the topic - Linux developers and Linus himself has an attitude that breaking binary compatibility and interfaces is 'good' - as it encourages people to open source their code.

Nintendo and Sony have the opposite attitude, and they don't want open-source code anywhere near their consoles.

If there are to be PS5, Switch and Linux versions, the developer needs to plan carefully from the very beginning how the game interfaces with the rest of the system and which libraries will be used. Fixing it later for either the Linux or a console release would be a pain in the ass.
 
Joined
Jan 5, 2021
Messages
441
Back on the topic: Linux developers, and Linus himself, have the attitude that breaking binary compatibility and interfaces is 'good', as it encourages people to open-source their code.

This is a very, very insane take. All this will mean is that more and more software doesn't work, and what works currently won't work long-term. Even open-source software becomes stale eventually, and needing to update an old program because the library calls changed is horrible. If Linus genuinely believes this, then he's an idiot.

If there are to be PS5, Switch and Linux versions, the developer needs to plan carefully from the very beginning how the game interfaces with the rest of the system and which libraries will be used. Fixing it later for either the Linux or a console release would be a pain in the ass.

Yeah. But you're assuming NightDive aren't complete hack frauds.
 

Azdul

Magister
Joined
Nov 3, 2011
Messages
3,476
Location
Langley, Virginia
Back on the topic: Linux developers, and Linus himself, have the attitude that breaking binary compatibility and interfaces is 'good', as it encourages people to open-source their code.

This is a very, very insane take. All this will mean is that more and more software doesn't work, and what works currently won't work long-term. Even open-source software becomes stale eventually, and needing to update an old program because the library calls changed is horrible. If Linus genuinely believes this, then he's an idiot.
It has its advantages. On the Windows side, many companies are stuck on Windows XP (or even Windows 98 SE) because a crucial closed-source driver or application was never updated.

Linux defeated the commercial Unixes and the *BSDs partly because it was not held back by closed-source binary blobs with dependencies on ancient libraries.
 

Bad Sector

Arcane
Patron
Joined
Mar 25, 2012
Messages
2,280
Nightdive should never have mentioned Linux or Macs in the Kickstarter. I don't know if it is lunacy, ignorance or dishonesty.

I won't disagree here.

Back on the topic: Linux developers, and Linus himself, have the attitude that breaking binary compatibility and interfaces is 'good', as it encourages people to open-source their code.

No, this is completely wrong. Linus has trashed developers on the mailing list for introducing changes that break the kernel interface, he has said many times over the years that new versions of the kernel should never break userspace, and he has repeatedly expressed his dislike of userspace libraries that break other programs.

Like this email from a few years ago that was reported by some sites:

On Sun, Dec 23, 2012 at 6:08 AM, Mauro Carvalho Chehab
<mchehab@redhat.com> wrote:
>
> Are you saying that pulseaudio is entering on some weird loop if the
> returned value is not -EINVAL? That seems a bug at pulseaudio.

Mauro, SHUT THE FUCK UP!

It's a bug alright - in the kernel. How long have you been a
maintainer? And you *still* haven't learnt the first rule of kernel
maintenance?

If a change results in user programs breaking, it's a bug in the
kernel. We never EVER blame the user programs. How hard can this be to
understand?

To make matters worse, commit f0ed2ce840b3 is clearly total and utter
CRAP even if it didn't break applications. ENOENT is not a valid error
return from an ioctl. Never has been, never will be. ENOENT means "No
such file and directory", and is for path operations. ioctl's are done
on files that have already been opened, there's no way in hell that
ENOENT would ever be valid.

> So, on a first glance, this doesn't sound like a regression,
> but, instead, it looks tha pulseaudio/tumbleweed has some serious
> bugs and/or regressions.

Shut up, Mauro. And I don't _ever_ want to hear that kind of obvious
garbage and idiocy from a kernel maintainer again. Seriously.

I'd wait for Rafael's patch to go through you, but I have another
error report in my mailbox of all KDE media applications being broken
by v3.8-rc1, and I bet it's the same kernel bug. And you've shown
yourself to not be competent in this issue, so I'll apply it directly
and immediately myself.

WE DO NOT BREAK USERSPACE!

Seriously. How hard is this rule to understand? We particularly don't
break user space with TOTAL CRAP. I'm angry, because your whole email
was so _horribly_ wrong, and the patch that broke things was so
obviously crap. The whole patch is incredibly broken shit. It adds an
insane error code (ENOENT), and then because it's so insane, it adds a
few places to fix it up ("ret == -ENOENT ? -EINVAL : ret").

The fact that you then try to make *excuses* for breaking user space,
and blaming some external program that *used* to work, is just
shameful. It's not how we work.

Fix your f*cking "compliance tool", because it is obviously broken.
And fix your approach to kernel programming.

Linus

Other developers that work on Linux (in the userspace, not the kernel) do have that attitude but not Linus himself.
 

Azdul

Magister
Joined
Nov 3, 2011
Messages
3,476
Location
Langley, Virginia
Back on the topic: Linux developers, and Linus himself, have the attitude that breaking binary compatibility and interfaces is 'good', as it encourages people to open-source their code.

No, this is completely wrong. Linus has trashed developers on the mailing list for introducing changes that break the kernel interface, he has said many times over the years that new versions of the kernel should never break userspace, and he has repeatedly expressed his dislike of userspace libraries that break other programs.

Like this email from a few years ago that was reported by some sites:

[Linus's "WE DO NOT BREAK USERSPACE!" email to Mauro Carvalho Chehab, quoted in full earlier in the thread]

Other developers that work on Linux (in the userspace, not the kernel) do have that attitude but not Linus himself.
He has his own ideas about which interfaces should be stable and which can be changed on a whim.

He's not above committing changes specifically to break applications that relied on something not explicitly announced as 'stable'.

Of course, when GCC tried the same strict approach to the language specification and broke parts of the kernel that made dubious assumptions, he went on a tantrum. The language committee had to make a specific retroactive change to the language specification, because apparently a C compiler is not allowed to break kernel code ...

Basically, a breaking change is correct, necessary and allowed only when Linus says so.
 

Bad Sector

Arcane
Patron
Joined
Mar 25, 2012
Messages
2,280
He has his own ideas about which interfaces should be stable and which can be changed on a whim.

He's not above committing changes specifically to break applications that relied on something not explicitly announced as 'stable'.

You'll need to provide some actual examples of that, because every single thing Linus has said about backwards compatibility over the years has been about never breaking userland. His approach is that even when the kernel was wrong to expose something, it should keep exposing that ABI, even if it always fails, so that programs continue to work.

Of course, when GCC tried the same strict approach to the language specification and broke parts of the kernel that made dubious assumptions, he went on a tantrum.

He went on a "tantrum", and correctly so, because the compiler broke working code. The kernel doesn't even use "standard C" (no kernel actually can; all kernels "written in C" are actually written in non-standardized, compiler-specific C dialects and rely on non-standardized extensions); it uses GCC's extensions.

The language committee had to make a specific retroactive change to the language specification, because apparently a C compiler is not allowed to break kernel code ...

When did the C language committee change C because of the Linux kernel? I do not follow C language development closely, but I do read news about it now and then, and I'm certain I would have seen something like "The Linux kernel forces the C committee to make changes to the C language". The Linux kernel doesn't even use standard C, but GCC's dialect, controlled via various flags, which has a lot of extensions. Aren't you perhaps confusing the C language with GCC's implementation and extensions?
 

Azdul

Magister
Joined
Nov 3, 2011
Messages
3,476
Location
Langley, Virginia
He has his own ideas about which interfaces should be stable and which can be changed on a whim.

He's not above committing changes specifically to break applications that relied on something not explicitly announced as 'stable'.

You'll need to provide some actual examples of that, because every single thing Linus has said about backwards compatibility over the years has been about never breaking userland. His approach is that even when the kernel was wrong to expose something, it should keep exposing that ABI, even if it always fails, so that programs continue to work.
One example from last month: https://linux.slashdot.org/story/24...t-kconfig-parsers-not-correctly-handling-them Linus does not write unit tests; he just commits code that crashes applications.

Of course, when GCC tried the same strict approach to the language specification and broke parts of the kernel that made dubious assumptions, he went on a tantrum.
He went on a "tantrum", and correctly so, because the compiler broke working code. The kernel doesn't even use "standard C" (no kernel actually can; all kernels "written in C" are actually written in non-standardized, compiler-specific C dialects and rely on non-standardized extensions); it uses GCC's extensions.
The type was a vanilla 'int'. The GCC guys told Linus that with -O3 they follow the specification to the letter and assume the source code contains no data races, and apply optimizations accordingly. If he wants to access an 'int' from multiple threads, he should use a mutex, or switch to C++, use std::atomic and not bother them anymore.

The language committee had to make a specific retroactive change to the language specification, because apparently a C compiler is not allowed to break kernel code ...
When did the C language committee change C because of the Linux kernel? I do not follow C language development closely, but I do read news about it now and then, and I'm certain I would have seen something like "The Linux kernel forces the C committee to make changes to the C language". The Linux kernel doesn't even use standard C, but GCC's dialect, controlled via various flags, which has a lot of extensions. Aren't you perhaps confusing the C language with GCC's implementation and extensions?
The committee wrote into the specification that optimizations based on speculative writes are forbidden unless the compiler can prove that the variable is accessed from a single thread only, which it cannot do in the general case.

A vanilla 'int' is still not thread-safe; it's just that Linus can assume it won't have a completely random value when read without synchronization from different threads.

The committee decided that +5% performance in some edge cases is not worth it. Sadly, it forced GCC to remove the optimization instead of standing up to Linus.
 

Bad Sector

Arcane
Patron
Joined
Mar 25, 2012
Messages
2,280
One example from last month: https://linux.slashdot.org/story/24...t-kconfig-parsers-not-correctly-handling-them Linus does not write unit tests; he just commits code that crashes applications.

This has nothing to do with application backwards compatibility; it is about an internal file used during the kernel build process. The only affected programs would be those that work with the kernel source code, and the kernel source code is not something anyone would or should expect not to change. It isn't part of any interface the kernel exposes.

Also, that didn't even break existing applications that worked with the file: a previous commit by someone else removed a tab that was already there, because it broke their script, and instead of fixing their script to handle tabs they decided to change the kernel file. Linus brought back the tabs (which, again, were already there) because programs that parse text files and handle or skip whitespace are supposed to handle tabs too.

Basically, someone had a bug in their program caused by trying to parse an internal file used during the kernel's build process (nothing to do with any exposed interfaces or with userland), and instead of fixing the bug in their program they decided to modify that file so it wouldn't trigger the bug.

The type was a vanilla 'int'. The GCC guys told Linus that with -O3 they follow the specification to the letter and assume the source code contains no data races, and apply optimizations accordingly.

GCC extensions aren't only about what types are used but also how GCC handles the code even if that looks like standard C.

If he wants to access an 'int' from multiple threads, he should use a mutex, or switch to C++, use std::atomic and not bother them anymore.

That would obviously be impossible, because those facilities rely on the kernel implementing them; this is why you can't write a kernel using only standard C and need to use compiler extensions.

The committee wrote into the specification that optimizations based on speculative writes are forbidden unless the compiler can prove that the variable is accessed from a single thread only, which it cannot do in the general case.

This is wrong; the C standard does not specify optimizations. Compilers can optimize code however they want, as long as they do not affect the program's observable behavior (as defined by the standard).

A vanilla 'int' is still not thread-safe; it's just that Linus can assume it won't have a completely random value when read without synchronization from different threads.

It is not the int type that is not thread-safe; it is how it would be accessed.

The committee decided that +5% performance in some edge cases is not worth it. Sadly, it forced GCC to remove the optimization instead of standing up to Linus.

Again, the C standard does not specify optimizations, nor ever has. The optimization you refer to, and the reason the kernel is compiled with -O2 instead of -O3, is exactly that GCC does perform those sorts of optimizations.

And that's the thing: GCC does add optimizations that break code, but it also adds compiler flags to control them, so code that relies on them not being applied (like the kernel) can opt out without affecting other programs. The main issue with GCC is that new optimizations can affect existing programs that worked with previous versions of GCC, without those programs explicitly opting in.
 

Zarniwoop

TESTOSTERONIC As Fuck™
Patron
Joined
Nov 29, 2010
Messages
18,807
Shadorwun: Hong Kong
Finished it. One of the rare examples of a remake being actually good.

Apart from maybe the music. The original captured that body-horror/cyberpunk feel much better.
 
