
Which programming language did you choose and why?

Joined
Dec 17, 2013
Messages
5,110
What's wrong with OOP? I keep hearing complaints but so far I have not seen a concise, reasonable argument
It does not fit our hardware architecture.

In an OO program, to call a method you need to go to the VMT associated with the specific instance of an object - then read the address of the virtual method - jump to it - and only then execute it. The compiler cannot inline the method.

You cannot predict the size of a polymorphic object - so you cannot place such objects in memory in order. Your collection of objects is really a collection of pointers to various different addresses - and the next few objects in the collection cannot be loaded into cache ahead of time while you are processing the current one.
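To make this concrete, here is a minimal C++ sketch of the two layouts being contrasted (Entity/Goblin are invented names for illustration):

[CODE=cpp]
#include <memory>
#include <vector>

// OO style: every call goes through the object's vtable, and the collection
// is really a collection of pointers to objects scattered across the heap.
struct Entity {
    virtual ~Entity() = default;
    virtual void update() = 0;
};

struct Goblin : Entity {
    int hp = 10;
    void update() override { hp -= 1; }
};

void tick_oo(std::vector<std::unique_ptr<Entity>>& es) {
    for (auto& e : es) e->update();   // indirect call per object, hard to inline
}

// Data-oriented style: one contiguous array of plain structs; the call is
// direct, trivially inlined, and the hardware prefetcher can keep up.
struct GoblinData { int hp = 10; };

void tick_flat(std::vector<GoblinData>& gs) {
    for (auto& g : gs) g.hp -= 1;     // sequential, cache-friendly access
}

int main() {
    std::vector<std::unique_ptr<Entity>> es;
    es.push_back(std::make_unique<Goblin>());
    tick_oo(es);

    std::vector<GoblinData> gs(1000);
    tick_flat(gs);
}
[/CODE]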

It is not something that will go away with larger caches. The Ryzen 7 5800X3D and Zen 4/5 parts offer ever larger higher-level caches - but you can always get faster code by fitting everything in a lower-level cache - which is roughly ten times smaller and ten times faster.

Most commercially used so-called OO languages do not go all the way - they do not make everything inherit from an 'object'. That's why you sometimes get programs that perform more or less acceptably.

This is the kind of argument that makes no sense in the context of most programming jobs, e.g. business programming, web programming, etc. This guy is arguing for low-level programming that fits the machine, when literally the entire history of programming has moved in the opposite direction. Programming languages are commonly divided into generations: 1st gen was raw machine code (0s and 1s), assembly was 2nd gen, languages like C, Fortran, and Cobol were 3rd gen, and SQL is 4th gen. Java/Python/Ruby/C# are something like 3.5 gen, though Python and Ruby sit much higher on that scale and Java/C# lower. Each generation added more abstraction to get away from the machine details. The further you get from thinking in terms of machine architecture, the more freedom, flexibility, and power you have to think in terms of your business logic, and that increases programmer productivity by a ridiculous amount. And programmer productivity is the single most expensive resource in this industry today.

Now if you are coding an OS or a cutting-edge 3D shooter, you might need to do what he is talking about, but outside of that, focus on being more productive. Notch built Minecraft in Java, a language generally considered terrible for video games, but because he was familiar with it and it allowed him to focus on the design stuff, the company ended up selling for 2.5 billion bucks. Meanwhile you have pointer cowboys like some here trying to optimize memory allocation...

As far as OOP, anyone arguing against it is a retard. OOP is an extremely valuable programming tool. Like any tool, it can be overused. The idea behind OOP is to divide and conquer the complexity of code into manageable, bite-sized conceptual chunks which can be more easily created, maintained, reused, etc. You should always do that. However, if you are writing a one-time unique procedure, of course don't put it into a class just to follow OOP; but where appropriate, try to use OOP as much as possible.
 

Rincewind

Magister
Patron
Joined
Feb 8, 2020
Messages
2,431
Location
down under
Speaking from my own experience doing OOP, I think the danger is that it becomes very easy to focus on the paradigm itself and not the problem at hand. It is either trying to lock in some upfront, unchangeable object pattern for a problem you don't fully understand yet, or simply churning out new classes without any thought, because classes are good, right? All of a sudden the problem at hand becomes: oh, I have this class, but I need state in this other class, but now I need a reference to that class. Oh, my program stopped running because this object modified that object and I didn't expect that when dealing with it in my object. Or just hours and hours of: okay, this class inherits from this class and that class inherits from that class, and so forth. These problems seem to grow exponentially as more people work on the same codebase.

That's exactly it. If you haven't got at least a mild aversion towards OOP and "design patterns" in general, you haven't really used them in anger, or in large projects, or in codebases that change a lot. If you think they're a good idea, you haven't seen the "spaghetti OOP" written by real teams in the real world yet. I'm serious.

As far as OOP, anyone arguing against it is a retard. OOP is an extremely valuable programming tool.

The FP crowd would like to have a word with you.

In a nutshell, OOP doesn't compose well, and the "extends" keyword is cancer. Pure _interface_ inheritance can be handy, but not as a means of factoring out common code, or god forbid changing behaviour - that's absolutely terrible. Note I'm not even considering low-level stuff here, just viewing it as a purely high-level tool for organising programs, and OOP isn't that great in that regard. It's also true that it's a much better fit for some problem domains than others (GUIs and simulations being the prime examples where it's a sorta good fit).

Then if you really care about performance, just forget about it, like other people in the know here have already explained (hint: read up on arrays-of-structs vs structs-of-arrays, etc.)
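For anyone who hasn't seen the AoS vs SoA distinction before, a minimal C++ sketch (the Particle fields are invented for illustration):

[CODE=cpp]
#include <cstddef>
#include <vector>

// Array-of-structs: each element carries all its fields, so a pass that only
// touches positions still drags velocities and hp through the cache.
struct ParticleAoS { float x = 0, y = 0, vx = 0, vy = 0; int hp = 100; };

void move_aos(std::vector<ParticleAoS>& ps, float dt) {
    for (auto& p : ps) { p.x += p.vx * dt; p.y += p.vy * dt; }
}

// Struct-of-arrays: each field lives in its own contiguous array, so the same
// pass touches only position/velocity bytes - and it vectorizes well, too.
struct ParticlesSoA {
    std::vector<float> x, y, vx, vy;
    std::vector<int> hp;
};

void move_soa(ParticlesSoA& ps, float dt) {
    for (std::size_t i = 0; i < ps.x.size(); ++i) {
        ps.x[i] += ps.vx[i] * dt;
        ps.y[i] += ps.vy[i] * dt;
    }
}

int main() {
    std::vector<ParticleAoS> aos(1000);
    move_aos(aos, 0.016f);

    ParticlesSoA soa;
    soa.x.assign(1000, 0.0f); soa.y.assign(1000, 0.0f);
    soa.vx.assign(1000, 1.0f); soa.vy.assign(1000, 1.0f);
    move_soa(soa, 0.016f);
}
[/CODE]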
 

J1M

Arcane
Joined
May 14, 2008
Messages
14,616
I would like to see a concrete example from the OOP proponents that would not be better-solved and more efficiently maintained with a compositional or functional approach.
 

Axioms

Arcane
Developer
Joined
Jul 11, 2019
Messages
1,486
I would like to see a concrete example from the OOP proponents that would not be better-solved and more efficiently maintained with a compositional or functional approach.
What? Composition is part of OOP. In fact, modern OOP almost exclusively favours composition over inheritance - especially if you are talking about anything besides Java. And who cares about Java these days.
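For the record, a minimal sketch of what "composition over inheritance" means in C++ (Logger/Window are invented names):

[CODE=cpp]
#include <iostream>
#include <string>

// Instead of `class Window : public Logger` (IS-A), the Window *has* a
// Logger and delegates to it (HAS-A). The logger can be swapped, shared,
// or removed without touching any class hierarchy.
class Logger {
public:
    void log(const std::string& msg) { std::cout << "[log] " << msg << '\n'; }
};

class Window {
    Logger log_;   // composed, not inherited
public:
    void resize(int w, int h) {
        log_.log("resize " + std::to_string(w) + "x" + std::to_string(h));
        // ... actual resize logic would go here
    }
};

int main() {
    Window win;
    win.resize(800, 600);
}
[/CODE]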
 

Rincewind

Magister
Patron
Joined
Feb 8, 2020
Messages
2,431
Location
down under
Hieronymus Bosch, “A visual guide to the Scala language”, oil on oak panels, 1490-1510

The left panel shows the functional features, the central one the type system, and the right one the object-oriented parts
(source)

That's pretty accurate and reflects my actual experience with the language!


 

Azdul

Magister
Joined
Nov 3, 2011
Messages
3,330
Location
Langley, Virginia
What's wrong with OOP? I keep hearing complaints but so far I have not seen a concise, reasonable argument
It does not fit our hardware architecture.

(...)

This is the kind of argument that makes no sense in the context of most programming jobs, e.g. business programming, web programming, etc. This guy is arguing for low-level programming that fits the machine, when literally the entire history of programming has moved in the opposite direction. Programming languages are commonly divided into generations: 1st gen was raw machine code (0s and 1s), assembly was 2nd gen, languages like C, Fortran, and Cobol were 3rd gen, and SQL is 4th gen. Java/Python/Ruby/C# are something like 3.5 gen, though Python and Ruby sit much higher on that scale and Java/C# lower. Each generation added more abstraction to get away from the machine details.
It is not about low-level programming. The level of abstraction in efficient data-driven or template algorithms is much higher than in typical business / web programming. The Amazon programming lectures by Alex Stepanov - available for free on YouTube - explain that in great (sometimes excessive) detail.

SQL is not an OO language - and it is much older than Java/Python/Ruby/C#. It is still used today because it allows you to express WHAT you want to achieve - not HOW - which gives optimizers a chance to do their job properly ...
Now if you are coding an OS or a cutting-edge 3D shooter, you might need to do what he is talking about, but outside of that, focus on being more productive. Notch built Minecraft in Java, a language generally considered terrible for video games, but because he was familiar with it and it allowed him to focus on the design stuff, the company ended up selling for 2.5 billion bucks. Meanwhile you have pointer cowboys like some here trying to optimize memory allocation...

OOP is an extremely valuable programming tool.
Despite what Dijkstra said - that OOP is an idea so stupid it could only have been invented in California - OOP is valuable.

When you're working on a prototype or proof-of-concept - at some startup company, or on your own - I would even say that the choice of programming language is not that important. Good code can be written in any language - it's just that some languages fight against it more actively than others.

But as soon as whatever you've written is successful - then all the terrible design choices (OOP-related or not) become apparent and need to be corrected - because you either don't want to drain batteries, or you don't want to pay thousands of dollars every month for AWS / Azure / GCP / electricity.
 
Last edited:

Rincewind

Magister
Patron
Joined
Feb 8, 2020
Messages
2,431
Location
down under
When you're working on a prototype or proof-of-concept - at some startup company, or on your own - I would even say that the choice of programming language is not that important. Good code can be written in any language - it's just that some languages fight against it more actively than others.
:salute: Absolutely, and most of us are talking from that perspective here, which is lost on some. E.g. I still use Python sometimes, it's okay for certain tasks, but saying it's the best language ever or that it's a good fit for most programming tasks is absolutely ridiculous.

But as soon as whatever you've written is successful - then all the terrible design choices (OOP-related or not) become apparent and need to be corrected - because you either don't want to drain batteries, or you don't want to pay thousands of dollars every month for AWS / Azure / GCP / electricity.

Only thing I can add to this: yes.
 

Krice

Arcane
Developer
Joined
May 29, 2010
Messages
1,291
Modern C++ argues that you almost never want inheritance. All composition all the time.
This is another meme that doesn't seem to go away. If inheritance is the logical choice, then it's what you should do. I'm doing it in my projects, it just works, and in those cases it's better than composition.
 

NoMoneyNoFameNoDame

Artist Formerly Known as Prosper
Patron
Joined
Feb 22, 2022
Messages
923
Modern C++ argues that you almost never want inheritance. All composition all the time.
This is another meme that doesn't seem to go away. If inheritance is the logical choice, then it's what you should do. I'm doing it in my projects, it just works, and in those cases it's better than composition.
IF the goal is a functioning prototype, then the fastest path that hits the checklist is what matters most. If the fastest path means worse code, then you're probably just a bad programmer.

IF the goal is a well-designed, robust, general-purpose system, then you must take a much more measured approach.

In any case the computing language you need is pencil and paper first.

There's no cheezing reality. Are you making what you want with the tools you have, or with tools you still need?

Also fuck Unreal engine. Who can understand that shit? How fucking lobotomized do you have to be to pretend there's something to be gained by using Unreal? You have to be delusional to think you will get good graphics cuz Unreal!

Game engines make me ashamed to know C++. Is Godot any better? I have no clue. Last I looked it's a bunch of shit I'll never bother to learn.

What even is the point of using other people's shit? Engines do you favors in terms of quantity, not quality.
 

Burning Bridges

Sent from my SM-G3502T using Tapatalk
Joined
Apr 21, 2006
Messages
27,562
Location
Tampon Bay
Modern C++ argues that you almost never want inheritance. All composition all the time. Obviously Java-style architecture astronauts are not so much programmers as misguided mystics.

As to inheritance, less is more. In C++ I hardly use it at all, except when there is a problem that genuinely requires it.

Of course there are such problems, like anything that heavily requires polymorphism. And you are of course always deriving widget or library classes, because when you are using a huge block of functionality, inheritance is the only healthy way to adapt it to your needs without messing around inside that block. That is the only really nice part about inheritance: that you can take something great, change it, but not mess it up for everyone else.

It is all no problem if you have done it for a couple of years. But the warning to new programmers is that the typical OO examples taught in the first semester (monkey is mammal is animal) are a road directly into madness, and strict Java / C# will become huge liabilities. static and namespaces are the only weapons you have to produce a somewhat logical program structure. And if you avoid inventing and deriving classes on top of classes for every minor thing, you will notice that longer and longer casts disappear completely from your code, routine changes become relatively easy because the complexity of the inheritance hierarchy is gone, and the performance of the applications is much better.
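A minimal sketch of that "derive to adapt a library class" point (LibButton stands in for a real framework class; the names are made up):

[CODE=cpp]
#include <iostream>

// Pretend this ships with a GUI library: it owns the bookkeeping and
// exposes one hook. Deriving lets you adapt it without editing library
// code - and without breaking it for anyone else.
class LibButton {
public:
    virtual ~LibButton() = default;
    void click() {
        std::cout << "library bookkeeping\n";
        onClick();                 // the hook the library exposes
    }
protected:
    virtual void onClick() { std::cout << "default action\n"; }
};

// Your adaptation: override the hook, inherit everything else unchanged.
class SaveButton : public LibButton {
protected:
    void onClick() override { std::cout << "save the document\n"; }
};

int main() {
    SaveButton b;
    b.click();
}
[/CODE]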
 

Burning Bridges

Sent from my SM-G3502T using Tapatalk
Joined
Apr 21, 2006
Messages
27,562
Location
Tampon Bay

In any case the computing language you need is pencil and paper first.
Is this an AI-generated message?

I assume what he means is that a good programmer has the structure of his program in his head before he begins coding, and this is true. But if you are a relative beginner you have no choice but to experiment. So his advice is useless, because it is only actionable by people who don't need it.
 

Axioms

Arcane
Developer
Joined
Jul 11, 2019
Messages
1,486
Modern C++ argues that you almost never want inheritance. All composition all the time. Obviously Java-style architecture astronauts are not so much programmers as misguided mystics.

(...)

It is all no problem if you have done it for a couple of years. But the warning to new programmers is that the typical OO examples taught in the first semester (monkey is mammal is animal) are a road directly into madness, and strict Java / C# will become huge liabilities. static and namespaces are the only weapons you have to produce a somewhat logical program structure. And if you avoid inventing and deriving classes on top of classes for every minor thing, you will notice that longer and longer casts disappear completely from your code, routine changes become relatively easy because the complexity of the inheritance hierarchy is gone, and the performance of the applications is much better.
I don't use any inheritance or casts, and I only miss them for one or two things. I think being slightly more complex in the two areas where it matters is a fair price for avoiding inheritance.
 

Krice

Arcane
Developer
Joined
May 29, 2010
Messages
1,291
I don't use any inheritance
If you have a large program without inheritance, there is going to be code duplication for no good reason. If you try to avoid inheritance at all costs, why not just code in some other paradigm, like procedural? It's not wrong to use the procedural paradigm. By not using inheritance you are missing a big part of why OOP is so great compared to many other styles.
 

Axioms

Arcane
Developer
Joined
Jul 11, 2019
Messages
1,486
I don't use any inheritance
If you have a large program without inheritance, there is going to be code duplication for no good reason. If you try to avoid inheritance at all costs, why not just code in some other paradigm, like procedural? It's not wrong to use the procedural paradigm. By not using inheritance you are missing a big part of why OOP is so great compared to many other styles.
I don't feel like I'm missing much. Composition does almost everything inheritance does, with a minimal penalty.
 

Rincewind

Magister
Patron
Joined
Feb 8, 2020
Messages
2,431
Location
down under
If you have a large program without inheritance, there is going to be code duplication for no good reason.
Inheritance as a tool for code reuse = bad
Composition as a tool for code reuse = good
Reason (one of many): inheritance doesn't compose - see the sketch below
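A minimal C++ illustration of the "doesn't compose" point: with subclassing you would need one class per feature combination (ScrollableWindow, BorderedWindow, ScrollableBorderedWindow, ...), while composed wrappers stack freely. The Widget names are invented:

[CODE=cpp]
#include <iostream>
#include <memory>

struct Widget {
    virtual ~Widget() = default;
    virtual void draw() = 0;
};

struct Window : Widget {
    void draw() override { std::cout << "window\n"; }
};

// Each feature wraps *any* Widget instead of subclassing a specific one.
struct Scrollable : Widget {
    std::unique_ptr<Widget> inner;
    explicit Scrollable(std::unique_ptr<Widget> w) : inner(std::move(w)) {}
    void draw() override { inner->draw(); std::cout << "+ scrollbar\n"; }
};

struct Bordered : Widget {
    std::unique_ptr<Widget> inner;
    explicit Bordered(std::unique_ptr<Widget> w) : inner(std::move(w)) {}
    void draw() override { inner->draw(); std::cout << "+ border\n"; }
};

int main() {
    // Any combination of features, no new classes needed:
    auto w = std::make_unique<Bordered>(
                 std::make_unique<Scrollable>(std::make_unique<Window>()));
    w->draw();
}
[/CODE]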
 

pOcHa

Arcane
Patron
Joined
Jul 12, 2015
Messages
2,855
OOP is used for maintainability and testing (mocking/faking). It is not productive at the start of a project but saves a lot of time later on (the 20/80 rule), and who cares about performance when man-hours are much more expensive than hardware anyway (and have been for a long time now, at least ever since the cloud).
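A minimal sketch of the mocking/faking point: code against an abstract interface, then hand the unit under test a fake in tests (Mailer and friends are invented names):

[CODE=cpp]
#include <cassert>
#include <string>

// Production code depends on this interface, not on a concrete SMTP client.
struct Mailer {
    virtual ~Mailer() = default;
    virtual void send(const std::string& to, const std::string& body) = 0;
};

// Test double: records calls instead of talking to a real mail server.
struct FakeMailer : Mailer {
    int sent = 0;
    void send(const std::string&, const std::string&) override { ++sent; }
};

// The unit under test only sees the interface.
void notify_user(Mailer& m, const std::string& user) {
    m.send(user, "your order shipped");
}

int main() {
    FakeMailer fake;
    notify_user(fake, "bob@example.com");
    assert(fake.sent == 1);   // verified with no network, no real mailer
}
[/CODE]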
 
Joined
Dec 17, 2013
Messages
5,110
There are three types of programmers along one axis:

1. Highly productive programmers that churn out functional code at high volume, interact with business people, try to understand the business requirements and help the company grow.

2. Autistic uber-nerds that are mostly interested in the technical aspects of programming, are unable to interact with business people on any productive level, will re-factor and redesign perfectly functioning systems 12 times to get that perfect architecture, etc.

3. Outsourced/shit programmers that care about neither point 1 nor point 2.

Based on the comments in this thread, I feel like most programmers here are from group 2... They despise highly productive languages, frown on paradigms that most closely resemble the real world, focus on insane performance in the age of cheap cloud-based hardware, and so on. Interesting...
 

Rincewind

Magister
Patron
Joined
Feb 8, 2020
Messages
2,431
Location
down under
There are three types of programmers along one axis:

1. Highly productive programmers that churn out functional code at high volume, interact with business people, try to understand the business requirements and help the company grow.

2. Autistic uber-nerds that are mostly interested in the technical aspects of programming, are unable to interact with business people on any productive level, will re-factor and redesign perfectly functioning systems 12 times to get that perfect architecture, etc.

3. Outsourced/shit programmers that care about neither point 1 nor point 2.

Based on the comments in this thread, I feel like most programmers here are from group 2... They despise highly productive languages, frown on paradigms that most closely resemble the real world, focus on insane performance in the age of cheap cloud-based hardware, and so on. Interesting...

And there's Porky, who's so entrenched in writing webapps in Python, where performance doesn't matter, that in his worldview that's the only type of project that exists.

That's a retarded categorisation, by the way; if you're doing heavy algorithmic stuff that can go on for weeks or months, you don't have time to sit in meetings half the day to "interact with business people, try to understand the business requirements and help the company grow". You do all that by *coding*, because that's your job. Better companies recognise that forcing highly-skilled coders to sit in meetings all day is a gross waste of talent, instead of letting them do what they do best, which is coding.

E.g. in my job I'm *shielded* by management from wasting my time in too many meetings, because talking about business requirements can be done by many people (and there are dedicated people to do that), but doing the actual work requires uninterrupted focus. Forcing coders to do all of that in one person is a sign of a Mickey Mouse type of operation to me, where they simply want to hire one person "who can do it all". I feel sorry for you if that's your only type of experience working in the industry.

Also, the business people won't understand shit of what you're trying to say, because it's so math/algorithm heavy. So you can only give a two-sentence progress report anyway, and that's it.

I don't know where you live, Porky, but there are *many* "outsourced programmers", from the Eastern Bloc in particular, who are at least as good as if not better than the average "western" guy. Sure, I have worked on projects where half of the team was outsourced to Indonesia and it was a trainwreck... but in general it's a gross overgeneralisation.
 

Orud

Scholar
Patron
Joined
May 2, 2021
Messages
1,113
Modern C++ argues that you almost never want inheritance. All composition all the time.
This is another meme that doesn't seem to go away. If inheritance is the logical choice, then it's what you should do. I'm doing it in my projects, it just works, and in those cases it's better than composition.
Inheritance can be useful, e.g. templates, but you can easily write unreadable spaghetti code by using it. If you need to break the Hollywood principle - "I'll call you (parent calls child), don't call me (child calls parent)" - inheritance is something I'd avoid.
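For reference, a minimal C++ sketch of the direction the Hollywood principle allows - the parent owns the control flow and calls down into the child's hook (Report/SalesReport are invented names):

[CODE=cpp]
#include <iostream>

// "I'll call you": the parent drives and calls the child's hook.
class Report {
public:
    virtual ~Report() = default;
    void render() {            // parent owns the control flow
        header();
        body();                // parent calls child - fine
        footer();
    }
protected:
    virtual void body() { std::cout << "(empty)\n"; }
private:
    void header() { std::cout << "=== report ===\n"; }
    void footer() { std::cout << "==============\n"; }
};

class SalesReport : public Report {
protected:
    void body() override {
        std::cout << "sales figures\n";
        // "Don't call me": reaching back up into the parent's internals
        // from here is the direction that breeds spaghetti.
    }
};

int main() {
    SalesReport r;
    r.render();
}
[/CODE]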

I would like to see a concrete example from the OOP proponents that would not be better-solved and more efficiently maintained with a compositional or functional approach.
A pure functional approach is usable for small projects, but that's it. You're really attaching a ball and chain to yourself if you try it for larger projects.

As with most things, the truth is found somewhere in the middle. Combining functional and OOP is often the best way to go; a dogmatic approach to anything is bad.
 
Last edited:

J1M

Arcane
Joined
May 14, 2008
Messages
14,616
(...)

I would like to see a concrete example from the OOP proponents that would not be better-solved and more efficiently maintained with a compositional or functional approach.
A pure functional approach is usable for small projects, but that's it. You're really attaching a ball and chain to yourself if you try it for larger projects.

As with most things, the truth is found somewhere in the middle. Combining functional and OOP is often the best way to go; a dogmatic approach to anything is bad.
I never said purely functional. Pretending that input and output don't exist is nonsense.
 

kepler

Novice
Joined
Jun 1, 2022
Messages
43
Location
Lechistan
What's wrong with OOP? I keep hearing complaints but so far I have not seen a concise, reasonable argument
It does not fit our hardware architecture.

(...)

This is the kind of argument that makes no sense in the context of most programming jobs, e.g. business programming, web programming, etc. This guy is arguing for low-level programming that fits the machine, when literally the entire history of programming has moved in the opposite direction. Programming languages are commonly divided into generations: 1st gen was raw machine code (0s and 1s), assembly was 2nd gen, languages like C, Fortran, and Cobol were 3rd gen, and SQL is 4th gen. Java/Python/Ruby/C# are something like 3.5 gen, though Python and Ruby sit much higher on that scale and Java/C# lower. Each generation added more abstraction to get away from the machine details.

This worked because, thanks to Moore's Law, you could just eat the performance cost. The days of processing power doubling every 18 months are over, but we still want more from our software, so abstracting even further from the machine (or not considering performance at all) is probably not possible.

The further you get from thinking in terms of machine architecture, the more freedom, flexibility, and power you have to think in terms of your business logic, and that increases programmer productivity by a ridiculous amount. And programmer productivity is the single most expensive resource in this industry today.

1. Highly productive programmers that churn out functional code at high volume, interact with business people, try to understand the business requirements and help the company grow.

You are forgetting about the end user and what his demands are.
 

Axioms

Arcane
Developer
Joined
Jul 11, 2019
Messages
1,486
(...)

I don't know where you live, Porky, but there are *many* "outsourced programmers", from the Eastern Bloc in particular, who are at least as good as if not better than the average "western" guy. Sure, I have worked on projects where half of the team was outsourced to Indonesia and it was a trainwreck... but in general it's a gross overgeneralisation.
American outsourcing goes straight to India or Southeast Asia. They don't trust Eastern Europeans here.
 

Axioms

Arcane
Developer
Joined
Jul 11, 2019
Messages
1,486
What's wrong with OOP? I keep hearing complaints but so far I have not seen a concise, reasonable argument
It does not fit our hardware architecture.

(...)

You are forgetting about the end user and what his demands are.
Porky is absolutely right in many cases, but incredibly wrong for others. Try coding a detailed strategy game in some high-level language. Hell, we see the performance hits with C# in Unity, let alone Python.
 
