
Which programming language did you choose and why?

Hirato

Purse-Owner
Patron
Joined
Oct 16, 2010
Messages
3,935
Location
Australia
Codex 2012 Codex USB, 2014 Shadorwun: Hong Kong
To this day, AMD's official drivers cannot draw lines using 1.x paths without leaking a shittonne of memory.
Do you have any example of this? All of my tools (and i think Blender 2.79 too - which i'm using myself - though i'm not 100% sure) use lines with the glBegin/glEnd stuff all the time and i do not remember ever having any memory issues (i'm using AMD). Line drawing is much slower than on Nvidia hardware though, but that is another issue.

You can try an old(er) version of Sauerbraten, and then just activate wireframe mode when you're in the map editor.
Newer revisions either use a Core profile (and emulate the glBegin/glEnd with mixed array buffers), or force a GLSL path on AMD's official drivers to work around the issue (and a whole slew of other bugs involving miscompiled shaders and depth issues).
Though I don't think the workaround will trigger anymore, since AMD changed the vendor string from "ATI" to "AMD" after its implementation.
 

Bad Sector

Arcane
Patron
Joined
Mar 25, 2012
Messages
2,223
Insert Title Here RPG Wokedex Codex Year of the Donut Codex+ Now Streaming! Steve gets a Kidney but I don't even get a tag.
Yep, I came to more or less the same conclusion that it's inevitable. However, the OS itself uses the same graphics hardware somehow to achieve superior mouse cursor behaviour at the OS level. I'm talking about the scenario when you drag something with the mouse on the screen; when dragging OS windows (or even something in an old win32 GDI app), the "thing" you're dragging follows the mouse cursor instantaneously, while in your OpenGL app there's always some lag.

There are a few reasons for this. I think if you just drag things the DWM will sync mouse updates to vsync so they appear locked together, but this only happens for moving; if you resize a window you can clearly see it lagging behind the mouse cursor.

Another issue is that everything in the DWM is double buffered: you draw to an offscreen texture that is then used (at a later point) to compose the final on-screen image. If your windowed application (like pretty much all OpenGL applications) is double buffered then what you get is effectively triple buffering (if you are running fullscreen then the double buffering is all you have).

One way to avoid this is to try using single buffering in windowed mode and call glFinish at the end of your frame. AFAIK the compositor will not pick up your frame changes before you call this, so you are essentially doing double buffering with the DWM buffer as your back buffer.

Another way that may work (i didn't try it myself) is to obtain the Direct3D surface directly using the DWM API and then use WGL_NV_DX_interop2 (despite the name it should also work on recent AMD GPUs) to render directly to it from OpenGL.

Alternatively you can also complain online that Microsoft forcing the compositor always on in Windows 8 was a stupid idea, done only for their "Metro apps experience" so that phone-like UIs could be brought to the desktop, and that it provides nothing of value. This is my favorite approach and what i do myself :-P.

Hirato:
Maybe it was a Sauerbraten bug? Or an old bug? AFAIK AMD had major rewrites in their OpenGL drivers some time around late 2000s/early 2010s. It doesn't sound like something that should concern anyone today anyway.
 

Hirato

Purse-Owner
Patron
Joined
Oct 16, 2010
Messages
3,935
Location
Australia
Hirato:
Maybe it was a Sauerbraten bug? Or an old bug? AFAIK AMD had major rewrites in their OpenGL drivers some time around late 2000s/early 2010s. It doesn't sound like something that should concern anyone today anyway.

Doubt it, since neither Nvidia, Intel, nor Mesa3D demonstrated the bug despite using the exact same shaders and code paths.
 

Bad Sector

Arcane
Patron
Joined
Mar 25, 2012
Messages
2,223
Doubt it, since neither Nvidia, Intel, nor Mesa3D demonstrated the bug despite using the exact same shaders and code paths.

It could have been some old driver bug then.

Regardless, it isn't something that exists anymore. I ran a quick test with a viewport control i wrote that renders a grid with ~1k lines and let it run for a while. It was running at ~340fps and the entire time the VRAM utilization stayed at 99MB (that is VRAM utilization system-wide). If there was a leak, then considering the number of lines and the framerate, even a single byte lost per line should show up as at least a tiny increase.
 

Hirato

Purse-Owner
Patron
Joined
Oct 16, 2010
Messages
3,935
Location
Australia
Doubt it, since neither Nvidia, Intel, nor Mesa3D demonstrated the bug despite using the exact same shaders and code paths.

It could have been some old driver bug then.

Regardless, it isn't something that exists anymore. I ran a quick test with a viewport control i wrote that renders a grid with ~1k lines and let it run for a while. It was running at ~340fps and the entire time the VRAM utilization stayed at 99MB (that is VRAM utilization system-wide). If there was a leak, then considering the number of lines and the framerate, even a single byte lost per line should show up as at least a tiny increase.

A key point, I think, is the ASM shaders.
I just remember us running into it in 2009, and it rearing its head again around 2013 when the vendor string changed to AMD (I see it's back to ATI now); the workaround at the time was to use a GLSL shader instead if the vendor contained ATI.

As far as I know, it was still an issue when the codebase finally dropped FFP support and the ASM shaders and moved to a core profile around 2015/2016.

I can't reproduce it now using the old builds from 2009 (the 2010 and 2013 builds enable the workaround because the vendor string contains "ATI"), so I guess it either got fixed, or I don't understand the cause properly.
You can check the 2010 and 2013 builds for `ati_line_bug` in rendergl.cpp and shader.cpp - it's the trigger for the workaround.
 

Bad Sector

Arcane
Patron
Joined
Mar 25, 2012
Messages
2,223
Hirato:
Hm, perhaps. But there isn't really much of a reason to use ASM shaders nowadays anyway... there was a very small timeframe during the mid-2000s when they made sense, but GLSL was the better idea anyway.

(and yes, i'm not exactly a fan of Vulkan using SPIR-V, though at least that seems to be in a format that can be optimized by the driver)
 

Rincewind

Magister
Patron
Joined
Feb 8, 2020
Messages
2,427
Location
down under
Codex+ Now Streaming!
Another issue is that everything in the DWM is double buffered: you draw to an offscreen texture that is then used (at a later point) to compose the final on-screen image. If your windowed application (like pretty much all OpenGL applications) is double buffered then what you get is effectively triple buffering (if you are running fullscreen then the double buffering is all you have).

Thanks man, that was informative! In my specific case, probably the best thing is to just lower my expectations (or complain to Microsoft :)) I grew up with a C64 and Amiga 500, so any kind of lag or less than 100% perfectly vsynced scrolling at 60fps annoys the hell out of me. But I guess this is the cost of abstraction; everything was much simpler back then.
 

Bad Sector

Arcane
Patron
Joined
Mar 25, 2012
Messages
2,223
FWIW this isn't really an issue with abstraction, it is just a shitty idea :-P. It could work if GPUs did the compositing themselves during scan out (like they do for the mouse cursor) and applications wrote directly (through the OS) to a GPU resident buffer, but at least current GPUs do not work like that. At best they provide some overlays but they are too limited for something as general as a desktop compositor and they're mainly used to display volume controls and such when running games in fullscreen without having the game go through the compositor.

But as long as you have multiple applications running at the same time, you do not want any single application to "take over" the screen (that would be a problem if a game took over and then promptly hanged/crashed), so you can't have applications synchronize themselves with the compositor updates, and the composition has to be done by the OS itself. Then it doesn't matter how low level you go: you'll have lag at some point, because applications need to run independently of each other and of the compositor.
 

Burning Bridges

Enviado de meu SM-G3502T usando Tapatalk
Joined
Apr 21, 2006
Messages
27,562
Location
Tampon Bay
TBH I've been expecting them as free software for a while and it seems I was wrong!

OpenGL won't go anywhere because it's a standard (albeit a bit shitty :) ) and lots of proprietary software uses it and won't migrate for decades.
I think we'll keep seeing proprietary extensions and standard evolutions in the future.
It's probably a good choice if you only want to do Linux, Win, Mac.
Vulkan might be more verbose and complicated but it will pay off in the future, IMO.

It's still used in games, for example X-Plane.

I'm personally not using OpenGL because I do high performance 2D graphics. If you render on the order of 50,000 2D polygons there are more efficient rendering APIs.

For 3D I would go with Vulkan now, because DirectX is proprietary, OpenGL is old, and it looks like Vulkan won't ever go away.
 
Joined
May 19, 2018
Messages
415
Well, i wouldn't recommend someone learn OpenGL 1.3 :) due to the absence of GLSL, which came in OpenGL 2.0 as GLSL 1.1, the version i still use as well.

GLSL is useful but when you're learning i think it is better to have things on screen as fast as possible and learn things in a step-by-step fashion, so personally i'd suggest learning how to put some very basic models on screen first - something like making a planetary system with planets rotating around a "sun" and some of them having some moons and then having a light and a texture. This will teach both general graphics concepts like how matrices can be used and combined (without needing to deep dive into how they actually work at that point) and some OpenGL stuff that will be used later (creating objects - in this case texture objects - and deleting them, loading data into them, binding them, etc).
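The compose-parent-and-child-transforms idea above can be sketched in a toy 2D form. Everything here (the `orbit` helper, the orbit speeds and radii) is made up for illustration, not from any real codebase; in real OpenGL 1.x you'd express the same nesting with glPushMatrix/glRotatef/glTranslatef:

```cpp
#include <cmath>

// Toy 2D stand-in for the matrix stack: each body's position is built
// by composing its parent's transform with its own orbit.
struct Vec2 { double x, y; };

// Rotate by 'angle' radians at distance 'radius' around the parent,
// mimicking a glRotatef followed by a glTranslatef.
Vec2 orbit(Vec2 parent, double angle, double radius) {
    return { parent.x + std::cos(angle) * radius,
             parent.y + std::sin(angle) * radius };
}

// World positions fall out of chaining the composition,
// sun -> planet -> moon, exactly like nested glPushMatrix blocks.
Vec2 planet_pos(double t) { return orbit({0.0, 0.0}, t * 0.5, 10.0); }
Vec2 moon_pos(double t)   { return orbit(planet_pos(t), t * 2.0, 2.0); }
```

The point is that the moon never needs to know where the sun is; it only composes onto the planet's transform, which is the habit that carries over to scene graphs later.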

That actually sounds kinda fun to do. Not really related to OpenGL, but I'm learning C++ now and planning on messing around with the Unreal engine (I think it's basically free to use if it's a personal project and you're not selling anything). Think I could make some pretty stuff with that.
 

Burning Bridges

Joined
Apr 21, 2006
Messages
27,562
Location
Tampon Bay
Yes you should learn C++. I used to do a lot of stuff (DotNET) but since I discovered Qt some years ago I'm doing most of my work with C++ now, and will probably never go back.

I also plan to do at least a short experimentation with Unreal 4. If you start a thread about Unreal 4 I might join in at some point. We had one about Qt and it helped me greatly in the beginning.
 

Chris Koźmik

Silver Lemur Games
Developer
Joined
Nov 26, 2012
Messages
414
A very, very long term perspective note. I went for C++ during the 90s on the Amiga (well, actually it was C back then, but portability-wise it's almost C++). Thanks to this choice, I never really had to rewrite anything (well, except the engine of course). Actually, I know I'm using a tiny part of the code in Legends of Amberland (2019) which was written in 2001; that's almost 20 years old code that still works :D That's a nice advantage of using the industry standard language instead of the currently most popular one.

Similarly with the engine. If I had chosen one of the popular engines back then I would have ended up with Flash or Ogre3D today and would have needed to start from scratch (several times). Thanks to using my own engine (mind you, it's not easy to maintain your own engine and I'm not sure I would recommend it :D) I was able to gradually reiterate and rewrite the engine and still keep the game's logic more or less intact.
 

Bad Sector

Arcane
Patron
Joined
Mar 25, 2012
Messages
2,223
I'm personally not using OpenGL because I do high performance 2D graphics. If you render on the order of 50,000 2D polygons there are more efficient rendering APIs.

OpenGL is perfectly fine for rendering 50,000 polygons; here are all of Quake's textures (or, well, their remakes from the Quake Retexturing Project) repeated several times to draw 50,000 quads (100,000 triangles) at 258 fps:

[screenshot: 50,000 textured quads drawn with the Quake textures]


Depending on what exactly you are using these polygons for you may manage to get even faster results, but i wrote this in ~30 minutes so even with barely any effort you can have very fast results with OpenGL.
 

vlzvl

Arcane
Developer
Joined
Aug 7, 2017
Messages
191
Location
Athens
OpenGL is slow when you make the beginner mistake of one draw call per object, which adds a lot of CPU overhead; the state changes between those calls add further cost, even for 2D (projected) drawing.
If you batch everything by texture, color, material etc., add it all to a nice vector and do one grand call to the OpenGL driver, you will get speed.
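A minimal CPU-side sketch of that batching idea (the `Sprite` struct and its fields are hypothetical, not from any real engine): group quads by texture so each texture gets bound once and all of its geometry goes to the driver in a single draw call:

```cpp
#include <cstdint>
#include <map>
#include <vector>

// Hypothetical sprite: which texture it uses plus its quad's vertices.
struct Sprite {
    std::uint32_t texture;  // GL texture object id
    float quad[8];          // 4 corners, x/y interleaved
};

// Group sprite vertex data by texture, so rendering becomes one
// bind + one draw per texture instead of per sprite.
std::map<std::uint32_t, std::vector<float>>
build_batches(const std::vector<Sprite>& sprites) {
    std::map<std::uint32_t, std::vector<float>> batches;
    for (const Sprite& s : sprites)
        batches[s.texture].insert(batches[s.texture].end(),
                                  s.quad, s.quad + 8);
    return batches;
}

// Rendering would then be, per batch (OpenGL 1.1-era calls):
//   glBindTexture(GL_TEXTURE_2D, texture);
//   glVertexPointer(2, GL_FLOAT, 0, verts.data());
//   glDrawArrays(GL_QUADS, 0, verts.size() / 2);
```

Sorting or grouping by texture like this is the main lever; the same pattern extends to keying batches on (texture, color, material) tuples.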
 

Bad Sector

Arcane
Patron
Joined
Mar 25, 2012
Messages
2,223
Yeah, i did some very simple batching here - i just made a list of all tiles per "sprite" and then did a few calls for each sprite. This works even with OpenGL 1.1.

If you are using a tilemap and GLSL you can even upload the entire tilemap as a texture together with all your textures as a 3D texture atlas, sample it from a shader and essentially draw everything (or at least the static parts) with a single quad.

Though TBH depending on what you are doing you can get away with the one-draw-one-call (or actually, one-draw-a-few-calls :-P) and still get a lot of performance with some very basic culling (essentially check if the sprite rectangle is inside the view rectangle). Something like Aquaria should be easy, for example.
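That "sprite rectangle inside the view rectangle" check is just an axis-aligned overlap test. A tiny sketch, with a hypothetical `Rect` type:

```cpp
// Axis-aligned rectangle: min corner plus size, in world units.
struct Rect { float x, y, w, h; };

// Basic culling test: draw a sprite only if its rectangle
// overlaps the view rectangle at all.
bool visible(const Rect& sprite, const Rect& view) {
    return sprite.x < view.x + view.w && sprite.x + sprite.w > view.x &&
           sprite.y < view.y + view.h && sprite.y + sprite.h > view.y;
}
```

Skipping the draw calls for everything that fails this test is usually enough to make the naive one-draw-per-sprite approach viable for small 2D games.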
 

Burning Bridges

Joined
Apr 21, 2006
Messages
27,562
Location
Tampon Bay
Depending on what exactly you are using these polygons for you may manage to get even faster results, but i wrote this in ~30 minutes so even with barely any effort you can have very fast results with OpenGL.

I was talking about a different kind of polygons, which themselves have thousands of coordinates (CAD). But that doesn't matter for this topic.

But if you don't believe me, you can compile this in Qt and see which is faster, the OpenGL or the Qt rendering:
https://doc.qt.io/qt-5/qtwidgets-graphicsview-chip-example.html
 

Burning Bridges

Joined
Apr 21, 2006
Messages
27,562
Location
Tampon Bay
A very, very long term perspective note. I went for C++ during the 90s on the Amiga (well, actually it was C back then, but portability-wise it's almost C++). Thanks to this choice, I never really had to rewrite anything (well, except the engine of course). Actually, I know I'm using a tiny part of the code in Legends of Amberland (2019) which was written in 2001; that's almost 20 years old code that still works :D That's a nice advantage of using the industry standard language instead of the currently most popular one.

C++ has lately been very nice to me. The main advantage is not the language but the deployment and user experience.

Compiling C++ applications is finicky and often a bit annoying, so I saved a lot of energy during development with e.g. C# (the C# IDE is really NICE) but ended up paying for it tenfold in deployment. Every time a new version of .NET comes out, a new update of Windows arrives, some DLL goes missing from System32, or some internationalization feature fucks with your program, you have to make nonsensical changes that ruin your weekend.

With Qt and C++ I am able to just deploy all DLLs in one folder on a USB stick and it works every time. Add to that the better performance and the fact that it never crashes.
 

vlzvl

Arcane
Developer
Joined
Aug 7, 2017
Messages
191
Location
Athens
If you are using a tilemap and GLSL you can even upload the entire tilemap as a texture together with all your textures as a 3D texture atlas, sample it from a shader and essentially draw everything (or at least the static parts) with a single quad.

I effectively do exactly that in my game. I use about 1000 sprites (actually 1020); however, they're all tightly packed into 21 big textures, sized at most 2048x2048 to allow for maximum compatibility everywhere. And that's without counting my TrueType fonts, which use the same tiling mechanism to generate and draw glyphs, except on the fly. Changing fewer textures per frame (or better, combined with some back-to-front mechanism) leads to a constant 60 fps, assuming the OpenGL driver supports VSync or you implemented some sleep mechanism :)
 
Joined
May 19, 2018
Messages
415
Yes you should learn C++. I used to do a lot of stuff (DotNET) but since I discovered Qt some years ago I'm doing most of my work with C++ now, and will probably never go back.

I also plan to do at least a short experimentation with Unreal 4. If you start a thread about Unreal 4 I might join in at some point. We had one about Qt and it helped me greatly in the beginning.

sorry for the stupid question, but what is Qt?
 

Burning Bridges

Joined
Apr 21, 2006
Messages
27,562
Location
Tampon Bay
https://www.qt.io/

Qt is a cross platform toolkit for graphical applications.

If you want to write professional C++ applications with minimal effort, there's probably nothing better.

You could easily develop 2D or strategy games with it too, but the pro license is quite expensive, I believe $3,500 a year.
 

The Avatar

Pseudodragon Studios
Developer
Joined
Jan 15, 2016
Messages
336
Location
The United States of America
b) Unity allows you to script in C#, but in my experience it is more of a tool aimed at creating prototypes than commercial products. I also find the idea of paying a subscription to use a piece of software abhorrent.

There are plenty of commercial games made using Unity. Maybe not AAA games, but indie and mobile games, and stuff like Pillars of Eternity and Pathfinder: Kingmaker. Also, Unity is free and you don't need to pay for a sub unless you're making bank on your game.
 

J1M

Arcane
Joined
May 14, 2008
Messages
14,616
b) Unity allows you to script in C#, but in my experience it is more of a tool aimed at creating prototypes than commercial products. I also find the idea of paying a subscription to use a piece of software abhorrent.

There are plenty of commercial games made using Unity. Maybe not AAA games, but indie and mobile games, and stuff like Pillars of Eternity and Pathfinder: Kingmaker. Also, Unity is free and you don't need to pay for a sub unless you're making bank on your game.
Or if you want to use a different color theme. Or port. Or launch without their logo.

I am aware that hundreds of Unity games have shipped. I was speaking from my personal experience. It is pretty easy to slap some things into a scene and apply some default physics to them. That feels like progress, but when you want to create something of finished quality (which requires robust pipelines to iterate efficiently), you end up fighting the engine. Probably the best example of this is how it took Blizzard an extra year to get Hearthstone on Android after already having done the work to put that Unity game on iPhone.
 

Burning Bridges

Joined
Apr 21, 2006
Messages
27,562
Location
Tampon Bay
Scripting in Unity seems to be like a game that allows you to change every aspect, as far as you are able to, rather than a real programming tool. At least many people use it like that; that's why so many games look the same.

I never got into its tutorials because the concept looked so restrictive and lame, and when I saw how badly the games run I never tried again. For example Kerbal Space Program: after the scene had loaded it still took several seconds until the game accepted mouse and keyboard input.

Also, with any tool of this type, where most functionality is already built into the objects, you end up spending an awful lot of time on trial and error because you don't know when something will actually work. This totally sucks, just like working with HTML, CSS, XAML and such.
 
