  • Welcome to rpgcodex.net, a site dedicated to discussing computer based role-playing games in a free and open fashion. We're less strict than other forums, but please refer to the rules.

    "This message is awaiting moderator approval": All new users must pass through our moderation queue before they will be able to post normally. Until your account has "passed" your posts will only be visible to yourself (and moderators) until they are approved. Give us a week to get around to approving / deleting / ignoring your mundane opinion on crap before hassling us about it. Once you have passed the moderation period (think of it as a test), you will be able to post normally, just like all the other retards.

Which programming language did you choose and why?

J1M

Arcane
Joined
May 14, 2008
Messages
14,629
The youngest and oldest software devs tend to have something in common: they are only good for expanding the breadth of established functionality. And it takes them longer than anyone would expect. :lol:
 

Rincewind

Magister
Patron
Joined
Feb 8, 2020
Messages
2,471
Location
down under
Codex+ Now Streaming!
Opposite experience there. "Anybody" can do the bootstrapping of most projects.
Making it polished, correct, maintainable yet optimized and shiny is where experienced engineers shine the most.
Yeah, now that you put it that way, that's true too. I guess I was approaching it more from the enjoyment angle; I definitely enjoy the prototyping/sketching out ideas more, but maybe I just have a personal problem with finishing things to completion...
 

gaussgunner

Arcane
Joined
Jul 22, 2015
Messages
6,158
Location
ХУДШИЕ США
I don't think typing is the same as what I'd like to do, but I'm not overly familiar with the specific meaning of CS terms
Well, CS is just a collection of techniques people have invented to make programming a little less unmanageable. It's a pretty infantile science. Most of it is cargo cult faggotry (like Lisp) dressed up as "science" (pretty much like climate science, modern physics, epidemiology, political science, economics, psychiatry...)

That said, the study of algorithms is grounded in math and engineering. It's practical knowledge you can use.

Anyway, the purpose of static typing is twofold: to catch dumb mistakes ahead of time (in compilers, 'linters', or editors) and to provide information needed for compiler optimization. It's confusing and tedious for inexperienced programmers, so dynamically typed "scripting languages" (Basic, Lisp, Perl, Python, Ruby, PHP, Javascript) became popular. However, in the last 10-20 years statically typed languages (like C# and even C++) have added type inference so you can simply declare variables as "auto foobar = ..." when the data type of the right-hand side is obvious. It's not always obvious and you still have to understand the type system, so modern C++ with type inference is only slightly less of a pain in the ass. But you might enjoy a more intelligently designed language like C# or whatever the cool kids are using now.
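The type-inference point can be sketched in a few lines of C++ (the function name is just for illustration). The variables are still statically typed; `auto` only means the compiler deduces the type from the right-hand side:

```cpp
#include <string>
#include <vector>

// With type inference the compiler deduces each static type from the
// initializer; nothing here is dynamically typed.
int sum_lengths() {
    auto words = std::vector<std::string>{"static", "typing"}; // std::vector<std::string>
    auto total = 0;                                            // int
    for (const auto& w : words) {                              // const std::string&
        total += static_cast<int>(w.size());
    }
    return total;
}
```

Mistyping `total` as a string, or pushing an int into `words`, would be caught at compile time, which is exactly the "catch dumb mistakes ahead of time" half of the bargain.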

I've been through all that and I still like C. The type system isn't perfect but NONE OF THEM ARE, and it's a much simpler language, it's easier to reason about, and it still catches most dumb mistakes and produces very fast executable code. And I also have scripting languages for quick and dirty work.
 

Tramboi

Prophet
Patron
Joined
May 4, 2009
Messages
1,226
Location
Paris by night
We all have, that's why we're getting paid :D
 
Joined
Jan 5, 2021
Messages
413
This doesn't really have to do with interfaces so much. Interfaces are about creating highly generic code for frameworks and such, so third party users can implement their own classes while still using your framework. Code readability, on the other hand, is much more affected by the quality of your code:

1. Writing clean, logical code.
2. Adding relevant comments in key places.
3. Splitting up code/data into manageable pieces (whether classes, methods, etc). OOP is great for this, btw.
4. Using descriptive names for variables/methods.
5. Avoiding "cute" shit and keeping code straightforward, e.g. writing very complex code statements using some of the language's tricks, or things like lambdas might seem leet in the moment, but it really fucks with the code's readability.

If you do the stuff above, your code should be fairly easy to understand.

You're on the money about interfaces being more for communicating with APIs etc. If you're not making a library and your codebase is full of interfaces, you have a problem.

One piece of advice I like to give out to novice programmers is:

INTERFACES DO NOT EXIST

I mean yeah C#, Java and a few other languages implement "interfaces", but they are just less useful base classes.

C++ doesn't have (or need) interfaces because it has multiple inheritance.

There is no special concept of "interfaces". They aren't things you use with a specific name and a specific purpose. There is no such thing as a "contract" within programming (or at least, not as a distinct feature, just a renaming of polymorphism). Interfaces exist to allow polymorphism, that's it. Classes already do polymorphism and they do it much better, since in many cases implementing an interface means re-writing a similar-but-slightly-different version of a function in every class that implements it, rather than having proper functionality in a base class (or, even better, a delegate object, which is usually a much better fit for the job than inheritance). I have literally seen cases where people have rewritten the exact same function verbatim in multiple classes that extend the same interface. Thinking about everything as a contract hugely encourages developers to try and define everything as an interface for every little concept in their codebase, and they end up with huge amounts of interface bloat.
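The "an interface is just a base class with no state" claim can be shown in C++ directly (all names here are hypothetical). A pure abstract class plays the role a C#/Java interface does, and a base class with a default implementation does the same job plus shared behaviour:

```cpp
#include <string>

// A pure abstract class: exactly what C#/Java call an "interface".
struct Greeter {
    virtual ~Greeter() = default;
    virtual std::string greet() const = 0;
};

// A base class can additionally ship default behaviour...
struct DefaultGreeter : Greeter {
    std::string greet() const override { return "hello"; }
};

// ...which derived classes reuse instead of re-implementing verbatim.
struct LoudGreeter : DefaultGreeter {
    std::string greet() const override { return DefaultGreeter::greet() + "!"; }
};

// Callers only ever see the abstract type; "implements" and "inherits"
// are indistinguishable from this side.
std::string call(const Greeter& g) { return g.greet(); }
```

From `call`'s point of view there is no difference between an "interface" and a base class, which is the post's point.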

Most people see interfaces as "the" correct way to handle conformity to a certain design, even though (in my opinion) abstract base classes work just as well if not better, since they can easily handle default cases (without needing things like the null object pattern).

I largely agree with the idea behind interfaces - you should always split up your functionality into separated functions, classes and modules, and in order to make inter-class communication easier you need polymorphism, which interfaces utilise. But this is true with or without interfaces. Whether I inherit from a base class or implement an interface, logically it makes no difference because there is no fundamental difference. In both cases I can make a function take the base type and pass in any child of that type, whether it inherits from it or "implements" it.

Many books have been written on the correct time to use "interfaces" vs base classes. Whole Stack Overflow threads have arisen around the "logic" or "philosophy" of interfaces vs classes, and it's entirely bullshit. There is no difference. You should always use a base class if you have that option, because you never know if you will need default functionality later. I find this crops up a LOT more often than the occasional case where you need to inherit from a base class but are already using the single base class available in languages like C# or Java. Now that C# has implemented default behaviour in interfaces, they are literally identical to base classes, which is stupid and shows just how much of a meme interfaces are.

The biggest lie in the software industry is the lie created by Java that there are these magical things called "interfaces" which are somehow different to good old fashioned polymorphism. Actually the biggest lie is that Python is a usable language, but it's one of the bigger lies.

Will most abstract base classes in C++ have next to no functionality? You bet. And they will look exactly like your typical C# or Java interface. Because interfaces are just that concept, given a name. The difference is, my abstract "vehicle" class (which Car and Truck both extend) makes real-world sense and is a simple structure. Thinking about it as contracts will inevitably lead some developers to instead make IDriveable, ISteerable, IPowerWindowsController etc interfaces, because they are thinking about how their vehicles will interact with their program, rather than as actual objects with a purpose. Even better, I can actually add default functionality to these classes and extend "backwards" as well as forwards, because there is absolutely functionality that all vehicles will need, and if it's added to the base class, they ALL get it automatically. Furthermore, I can make my vehicle class inherit from other base classes and gain even more default functionality (such as a "Movable" class with basic functions for setting positions and velocities, etc), something I cannot do with interfaces (which can only inherit from each other and can add no functionality, only contract). Interface thinking encourages programmers to make a whole bunch of really small pseudo-classes in a flat structure, which seems like a really good idea at the time until you realise that the entire point of inheritance is to inherit and override functionality. "Contract" thinking instead encourages you to slap on additional responsibilities to existing classes by making them inherit another interface, so they will work with some function that expects an interface of that type. This encourages god classes and overall messy design.
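The Vehicle/Movable idea above can be sketched in C++ like this (a toy illustration, not a real design; every name is made up):

```cpp
#include <string>

// Shared default functionality, extended "backwards" into Vehicle.
struct Movable {
    double x = 0, y = 0;
    void move_to(double nx, double ny) { x = nx; y = ny; }
};

// One simple base class instead of IDriveable/ISteerable/etc.
struct Vehicle : Movable {
    int wheels = 4;                  // sensible default all vehicles share
    virtual ~Vehicle() = default;
    virtual std::string horn() const { return "beep"; }  // overridable default
};

struct Car : Vehicle {};             // inherits movement, wheels, horn for free

struct Truck : Vehicle {
    Truck() { wheels = 6; }
    std::string horn() const override { return "HOOONK"; }
};
```

Every vehicle gets `move_to` and a working `horn` without rewriting anything; a flat set of interfaces would force each class to re-implement both.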

Please stop talking about interfaces like they are an actual thing. They are not. People who frame everything in terms of "communicating interfaces" and "contracts" and "roles" rather than a simple (mostly a simple tree) structure of objects frequently end up with lots of extra interface bloat, and don't benefit much from it. Interfaces only exist because language designers are too lazy/incompetent to solve the multiple inheritance problem (despite C++ solving it decades ago). If your language forces you to use interfaces in certain cases, by all means use them, but the only time you should be actively creating interfaces is when you NEED a second base class (which should be rare since you should be preferring composition over inheritance anyway), and then you should use the "extract interface" feature of your editor. In all other cases they are pointless.

In most cases, your best bet is to create simple classes with 1 responsibility each, and the minimum amount of inheritance possible since inheritance is an inherently dangerous operation - it's the strongest coupling a class can have, after all - and avoiding it at all costs is a worthwhile endeavour. Instead, make classes themselves simple, delegate other responsibilities to other classes, and pass in any you need through the constructor. No interfaces necessary. Interfaces should be used rarely, and when necessary, don't center your design around them (or complex inheritance systems in general).
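The "delegate responsibilities and pass them in through the constructor" advice looks like this in C++ (a minimal sketch; `Logger` and `OrderProcessor` are invented for illustration):

```cpp
#include <memory>
#include <string>

// A small, single-responsibility collaborator with a sensible default.
struct Logger {
    virtual ~Logger() = default;
    virtual std::string log(const std::string& msg) const { return "[log] " + msg; }
};

class OrderProcessor {
public:
    // The dependency is passed in explicitly: no interface keyword,
    // no service locator, no singleton.
    explicit OrderProcessor(std::shared_ptr<Logger> logger)
        : logger_(std::move(logger)) {}

    std::string process(const std::string& order) const {
        return logger_->log("processed " + order);
    }

private:
    std::shared_ptr<Logger> logger_;  // a delegate object, not a base class
};
```

Tests can pass in a subclassed `Logger` that records calls; production code passes the real one. That is the whole "dependency injection" idea, with no framework required.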

I guess I don't really hate interfaces as a language feature, I understand they exist to overcome a limitation in the language's design, I mainly hate the way they have become this sort of "new idea" and "design approach" around making everything a contract and defining all your objects as a set of functionalities rather than as discrete objects with specific responsibilities. Multiple inheritance, while possible in C++, is rarely used because there just aren't that many uses for it. Meanwhile in C# land I see classes frequently sporting 4 or 5 interfaces because the developers decided that some class (usually a god class like "Player") has to fulfill a bunch of different contracts rather than splitting up their design properly into discrete parts.

"Interface" as a concept IS a real thing and is important. If I have written an API or library for other developers to use, and I need to make a change, I need to carefully consider the interface I am presenting, because a change to a function signature or class name can break projects. This is not to be confused with "Interfaces" as a language feature.

ALL THAT SAID.

I mostly agree with all your points. You should split your design up into neat, usable interfa-*cough*-objects, each as simple as possible, and pass them around constantly using some sort of messaging or communication system and polymorphism. You don't need fancy language tools, you don't need "design patterns" like service locator (which is actually an anti-pattern, don't use it) or singletons (anyone who uses a singleton for any reason needs to be fired immediately and barred from the software industry). The biggest mistake most programmers make is sticking to "known good designs" rather than just using the simplest thing that works, and it ends up bloating their code and making it impossible to use or extend in any other sort of paradigm. What should be as simple as passing one class to another through a constructor often becomes multiple factory classes, a fancy dependency injection system, and strange looking classes like CarClassGeneratorGenerator<T>, which you don't ever want. This happens because someone read somewhere that factories and DI are good tools (which they are, in moderation), and it turned their tiny usable system into an unmaintainable mess. Don't get me wrong, studying design patterns can be extremely useful when looking for techniques for solving certain complex problems easily. Factories make a lot of sense when you need to configure objects in complicated ways. Service-based systems make sense when you have to conform to corporate APIs or a rigid centralised database structure. But when people see their favourite design pattern as -the- correct way to program, problems arise. "Contract based" programming against interfaces is one such design pattern. Don't fall into its trap. It's excellent in some cases - for example in Unity having an IShootable interface that components can implement so that a raycast can call an OnGetShot message on every IShootable component it hits is a great design.
But don't buy into the meme of putting interfaces everywhere; a lot of the time they will just bloat your code. The C# standard library suffers from this bloat. What is the fundamental difference between an IEnumerable<> and an IReadOnlyList<>? I know the answer, you don't have to tell me; they expose different functionality, but it's a very complex hierarchy that is difficult to learn when starting the language, and in most everyday cases people will just cast to an IList<> and be done with it.
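The IShootable case mentioned above (one of the few places an interface genuinely earns its keep) can be sketched without any engine, in plain C++ (all names are made up; the "raycast" is reduced to a list of hits):

```cpp
#include <vector>

// One of the rare justified interfaces: callers genuinely only care
// about one capability, not about what kind of object provides it.
struct IShootable {
    virtual ~IShootable() = default;
    virtual void on_get_shot(int damage) = 0;
};

struct Barrel : IShootable {
    int hp = 10;
    void on_get_shot(int damage) override { hp -= damage; }
};

// Stand-in for a raycast result: notify everything the ray hit.
void raycast_hit(std::vector<IShootable*>& hits, int damage) {
    for (auto* target : hits) target->on_get_shot(damage);
}
```

The raycast code never needs to know that barrels, enemies, or windows exist; anything implementing the one capability just works.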

Also keep in mind: comments should largely not be used. In the best cases they explain an algorithm that should already be understandable. Usually people use a comment above a function saying "//this adds 2 numbers together" entirely because their function is called NumberCrunch(int input1, int input2) rather than Sum(int a, int b). In almost every case I see in my day job where someone has used comments to explain code, they would have been better off spending that time making the code more readable, rather than using variable names like i and then having to explain what i is used for.

They are genuinely useful in cases where code must (by the very nature of the problem) be complex. Unless you're writing something like a fast inverse square root on a daily basis, these cases are extremely rare (ironically the Q_rsqrt function could have done with more actual comments and fewer "what the fuck" comments). The far more common really good use case for comments is metadata. If you have to do something in a certain way because of some other system, mention it (such as having to do something strange in order to work around a bug in a third-party API that someone reading just the code won't know about). If you need to conform to some third-party API, by all means add a link to its documentation in a comment. Comments should be there to augment your code with extra useful information that is relevant to, but not contained within, the code. They are not there to explain the code, as the code is already designed to be a human-readable format for understanding the logic of the problem (the common misconception is that code is written in a way which is designed to be understandable to computers. That is false. Programming code is designed to be easy and efficient to understand for humans. We use a compiler to make it readable by computers). Using a comment to explain hard to understand, badly written code is like writing a book to explain what a previous, difficult to understand book actually meant. Just rewrite the first book!
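The two kinds of comment being contrasted above look something like this side by side (a toy sketch; the vendor name is invented):

```cpp
// Redundant comment: it only restates what a readable name already says.
// "adds two numbers together"
int sum(int a, int b) { return a + b; }

// Useful comment: metadata relevant to, but not contained in, the code.
int parse_price_cents(double price) {
    // AcmePay's API (hypothetical vendor) reports prices as floating-point
    // dollars; convert to integer cents immediately so rounding error
    // doesn't accumulate in downstream arithmetic.
    return static_cast<int>(price * 100.0 + 0.5);
}
```

The first comment can be deleted with no loss; the second records a fact about an external system that no amount of renaming could express.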
 

Raghar

Arcane
Vatnik
Joined
Jul 16, 2009
Messages
22,693
Interfaces are really important in the HW industry. The USB-C port is a standard interface, which allows any standard connector to be plugged into it. (When the connector is some Chinese knock-off from a company like XDUO, you end up needing a different cable with a more compliant connector.)
 
Joined
Jan 5, 2021
Messages
413
Wrong type of interface :P
I'm talking about the language feature, not hardware specifications.
 
Joined
Dec 17, 2013
Messages
5,182

Good god man, what a wall of verbal diarrhea! Just kidding, I kinda agree with much of it. I've always hated design patterns for example, not because they are always a bad idea (sometimes they are a great solution for some problems), but because they tend to put programmers (especially novice ones) into this mindset of "I gotta use all of these design patterns for every little shit", and yeah, that leads to horrific ugly-as-fuck needlessly obfuscated code. If you also think about writing code like writing literature (yes, I know they are different things), design patterns would be the loose equivalent of filling your prose with other people's paragraphs, which takes all the fun out of it. Design your own shit (unless you are stuck), it's a lot more fun. The great thing about doing it this way is as you gain more experience, you will independently arrive at some of the design patterns, and that will feel a lot better than just mindlessly trying to insert them into your code everywhere.
 
Joined
Jan 5, 2021
Messages
413

I actually wouldn't disagree about it being verbal diarrhea. I just sort of threw everything out there in a rant style (since it was late) rather than taking the time to really structure everything, remove fluff sentences, and generally clean up my post. Which is ironic, given that it's about clean code.

Design patterns can indeed be extremely good solutions. As can established algorithms. But as you rightly point out, whichever design pattern is popular at the time will be the "correct" way to program for a lot of people, until a new one becomes popular, then they forget the old one, and consider everything made with the old one to be "unmaintainable and out of date legacy code", because they never wrote good code to begin with. Nowhere is this more true than in the web space (I am convinced that 100% of web developers are completely incompetent if not outright frauds, but that's a rant for another day), where there seems to be a new "revolutionary" javascript framework every other week, and it always comes with some "new" design paradigm that just ends up being more boilerplate for no real benefit.

Comparing code to literature isn't that different after all. There's a reason for the old adage "good code should read like good prose".

And yeah, everyone is expected to make mistakes and write shitty code from time to time. Coding is hard, inherently, and nobody (even the experts) is particularly good at it. Humans are just not good at coding. But you can look for smells. One of them (which is usually considered a feature, not a smell) is when you have an IDelegateClassInstancerFactory or some other meaningless class names littering your codebase. If it's not a real concept with a real responsibility, ditch it. Most design patterns (and especially anti-patterns) will encourage the opposite. It's why I hate service-locator and singleton so much (seriously, NEVER use singletons. Not even in rare exceptions. They are literally completely useless in 100% of cases. There are no niches or special cases, just avoid them at all costs. They are kryptonite to good code. Even if people only skim this thread and don't bother learning anything, if they at least get the gist and as a result decide, for one random project, not to use a singleton one time, I will consider all of this worth it.)
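To make the singleton objection concrete, here is a minimal Java sketch; every class name here is invented for illustration. The point is the structural difference, not these particular classes:

```java
// Singleton style: every caller is silently coupled to this global,
// dependencies become invisible, and tests end up sharing mutable state.
class GlobalLogger {
    private static final GlobalLogger INSTANCE = new GlobalLogger();
    private final StringBuilder buffer = new StringBuilder();
    private GlobalLogger() {}
    static GlobalLogger getInstance() { return INSTANCE; }
    void log(String msg) { buffer.append(msg).append('\n'); }
    String dump() { return buffer.toString(); }
}

// Injection style: the dependency is explicit in the constructor,
// so each test can pass its own instance and nothing is global.
class Logger {
    private final StringBuilder buffer = new StringBuilder();
    void log(String msg) { buffer.append(msg).append('\n'); }
    String dump() { return buffer.toString(); }
}

class OrderService {
    private final Logger logger;
    OrderService(Logger logger) { this.logger = logger; }
    void placeOrder(String item) { logger.log("ordered " + item); }
}
```

With injection, a unit test just does `new OrderService(new Logger())`; with the singleton, every test in the suite talks to the same `INSTANCE` whether it wants to or not.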
 
Last edited:

Lutte

Dumbfuck!
Dumbfuck
Joined
Aug 24, 2017
Messages
1,969
Location
DU's mom
SomeGuy:

Sometimes, less is more. Yes, you can do what interfaces do with C++'s inheritance system. But languages that restrict inheritance (particularly inheritance of implementations, which can lead to the diamond problem) do it for a reason. That same reason also explains why a language might not have operator overloading (which can lead to unreadable code in the hands of someone who uses it outside of obvious math-like operations) and other features that can quickly become.. misfeatures in the hands of incompetent programmers working in large teams where 50+ people need to be able to read whatever shit the colleague wrote.

Your rant has some good points.. when interpreted outside of the context of programming language design. Because within the context... the language designers understand the issues you raise and they think the language is better off without the features for the audience they target.

Java was designed to be usable in the hands of morons.

To quote one of Java's language designers, Gilad Bracha, on why they wouldn't implement anything that could resemble a macro system:

Nevertheless, we don't plan on adding a macro facility to Java any time soon. A major objection is that we do not want to encourage the development of a wide variety of user-defined macros as part of the Java culture.
He goes on further to explain what he means by Java Culture:

The advantages of Java is that it easily serves as a lingua franca - everyone can read a Java program and understand what is going on. User defined macros destroy that property. Every installation or project can (and will) define its own set of macros, that make their programs unreadable for everyone else. Programming languages are cultural artifacts, and their success (i.e., widespread adoption) is critically dependent on cultural factors as well as technical ones.
We are catering to the Java culture, while trying to manage things well on the technical side at the same time. In general, one can contrast the Scheme-like philosophy of using a small number of very general constructs, with the more mainstream approach of having a great many highly specialized constructs, as in C or Modula style languages.
Java is clearly in the latter camp. Most Java developers are happy to have dedicated, narrowly focused solutions that are tailored to a specific problem. I am keenly aware of the drawbacks of such an approach, but I don't see it changing very quickly.

Mind you, people who work on such language designs aren't morons themselves (Go, which is basically the new Java, was created by Ken Thompson and Rob Pike), but they unabashedly cater to morons.
Any argument that feature X is just a weaker version of Y, or that you could build a better, more powerful abstraction, is beside the point in the view of those who design these languages: they are intentionally trying to eschew, as much as humanly possible, any feature that can lead to code that takes even 1% more effort to read. Languages like Java are verbose, but one can't deny that you more or less know exactly what a piece of code in front of your eyes does without having to go 10 layers deep into the other functions it calls, the objects it constructs, and so on.

The same can't be said of code written by people who have fun with C macros and pointer arithmetics, or C++ template meta programming.

As for C++ itself.. it's a good language that has its uses, and it's easier to write high-performance programs in it (because few of its abstractions have noticeable costs), but it has grown into a massive behemoth: most C++ developers don't really know all the features of the language spec, and large companies have equally large styleguides that prohibit the use of something like a third of it.

Google's style guide for C++, for example, prohibits the use of exceptions, RTTI and multiple inheritance, and severely discourages raw pointers, macros, type inference (avoid obvious repetition, but don't replace every type declaration with auto) and operator overloading outside of very specific uses. Most corporate styleguides for C++ focus more on features that aren't allowed, or features you should think twice and thrice before using (like template metaprogramming; the Google styleguide even has a section to the effect of "Boost libraries lead to unreadable code, but some of them are useful, here they are listed, do not use the libraries unmentioned").
Of all the PLs in the world, I think C++ is the only one with so many styleguides out there dedicated to banning the use of its own features.

'nyhow, Java and C# are old news; in their attempt to curtail verbosity they have grown too many features for the morons they were designed for, hence the language Go, which is even simpler conceptually and more straitjacketed than the first release of Java. Life is a repeating cycle: Go is finally introducing generics after doing without, and saying they shouldn't exist, for a whole decade since its creation. 20 years from now Go will resemble C#, and another language will arise to target the corporate drone.

The linux kernel will probably still be written in C and most useful desktop software like browsers, office suites, image editors, video editors will still be written in C++.
 

Jasede

Arcane
Patron
Joined
Jan 4, 2005
Messages
24,793
Insert Title Here RPG Wokedex Codex Year of the Donut I'm very into cock and ball torture
I hate working with Java so much.

That's all I wanted to write...

At least it pays well. :dealwithit:
 

Lutte

Dumbfuck!
Dumbfuck
Joined
Aug 24, 2017
Messages
1,969
Location
DU's mom
I hate working with Java so much.

That's all I wanted to write...

At least it pays well. :dealwithit:
Java isn't the most pleasant thing.. but I have a suspicion you probably wouldn't want to work with C++ too much either if that C++ was written by the same people who use Java today. Unless you're a lone wolf maintaining a small program/module, you gotta deal with other people's mess, and when that mess looks like a pointer with four levels of indirection, I'm getting outta here.

Java is a verbose, irritating platform, but it's one that stops certain types of absolutely retarded pseudo-cleverness. The most annoying shit about it is some of the design pattern abuse; worst offenders: factories and dependency injection.
 

Hobknobling

Learned
Joined
Nov 16, 2021
Messages
358
SomeGuy:

Sometimes, less is more. Yes, you can do what interfaces do with C++'s inheritance system. But languages that restrict inheritance (particularly inheritance of implementations, which can lead to the diamond problem) do it for a reason. That same reason also explains why a language might not have operator overloading (which can lead to unreadable code in the hands of someone who uses it outside of obvious math-like operations) and other features that can quickly become.. misfeatures in the hands of incompetent programmers working in large teams where 50+ people need to be able to read whatever shit the colleague wrote.

Your rant has some good points.. when interpreted outside of the context of programming language design. Because within the context... the language designers understand the issues you raise and they think the language is better off without the features for the audience they target.

Java was designed to be usable in the hands of morons.

To quote one of Java's language designers, Gilad Bracha, on why they wouldn't implement anything that could resemble a macro system:

Nevertheless, we don't plan on adding a macro facility to Java any time soon. A major objection is that we do not want to encourage the development of a wide variety of user-defined macros as part of the Java culture.
He goes on further to explain what he means by Java Culture:

The advantages of Java is that it easily serves as a lingua franca - everyone can read a Java program and understand what is going on. User defined macros destroy that property. Every installation or project can (and will) define its own set of macros, that make their programs unreadable for everyone else. Programming languages are cultural artifacts, and their success (i.e., widespread adoption) is critically dependent on cultural factors as well as technical ones.
We are catering to the Java culture, while trying to manage things well on the technical side at the same time. In general, one can contrast the Scheme-like philosophy of using a small number of very general constructs, with the more mainstream approach of having a great many highly specialized constructs, as in C or Modula style languages.
Java is clearly in the latter camp. Most Java developers are happy to have dedicated, narrowly focused solutions that are tailored to a specific problem. I am keenly aware of the drawbacks of such an approach, but I don't see it changing very quickly.

Mind you, people who work on such language designs aren't morons themselves (Go, which is basically the new Java, was created by Ken Thompson and Rob Pike), but they unabashedly cater to morons.
Any argument that feature X is just a weaker version of Y, or that you could build a better, more powerful abstraction, is beside the point in the view of those who design these languages: they are intentionally trying to eschew, as much as humanly possible, any feature that can lead to code that takes even 1% more effort to read. Languages like Java are verbose, but one can't deny that you more or less know exactly what a piece of code in front of your eyes does without having to go 10 layers deep into the other functions it calls, the objects it constructs, and so on.

The same can't be said of code written by people who have fun with C macros and pointer arithmetics, or C++ template meta programming.

As for C++ itself.. it's a good language that has its uses, and it's easier to write high-performance programs in it (because few of its abstractions have noticeable costs), but it has grown into a massive behemoth: most C++ developers don't really know all the features of the language spec, and large companies have equally large styleguides that prohibit the use of something like a third of it.

Google's style guide for C++, for example, prohibits the use of exceptions, RTTI and multiple inheritance, and severely discourages raw pointers, macros, type inference (avoid obvious repetition, but don't replace every type declaration with auto) and operator overloading outside of very specific uses. Most corporate styleguides for C++ focus more on features that aren't allowed, or features you should think twice and thrice before using (like template metaprogramming; the Google styleguide even has a section to the effect of "Boost libraries lead to unreadable code, but some of them are useful, here they are listed, do not use the libraries unmentioned").
Of all the PLs in the world, I think C++ is the only one with so many styleguides out there dedicated to banning the use of its own features.

'nyhow, Java and C# are old news; in their attempt to curtail verbosity they have grown too many features for the morons they were designed for, hence the language Go, which is even simpler conceptually and more straitjacketed than the first release of Java. Life is a repeating cycle: Go is finally introducing generics after doing without, and saying they shouldn't exist, for a whole decade since its creation. 20 years from now Go will resemble C#, and another language will arise to target the corporate drone.

The linux kernel will probably still be written in C and most useful desktop software like browsers, office suites, image editors, video editors will still be written in C++.
I wouldn't categorize Java and C# as the same thing. The latter is obviously superior at this point. I do agree with the point about bloat, and I suspect there are language developers at Microsoft who get paid bonuses based on how many new things they manage to add to the language. Too many basic operations can now be done multiple ways, which is bad for readability and backwards compatibility in the future.

My experience with dynamically typed languages is that they will eventually lead to very expensive and tedious rewrites, since proper refactoring is absolute hell when you can't lean on the types. Dynamic typing was an experiment that failed; let's take type inference with us and move on.
 
Joined
Jan 5, 2021
Messages
413
I hate working with Java so much.

That's all I wanted to write...

At least it pays well. :dealwithit:
Same.

Literally every feature of Java is broken in some way. The only awesome thing about Java is probably their enum classes. I'm very lucky that I was able to get out of my Java job and into a C# Unity gamedev job. C# is basically just Java but better anyway.
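To make the enum praise concrete: Java enum constants are real objects that can carry data and per-constant behavior. A small sketch along the lines of the classic Effective Java example (the `Operation` name and the two constants are arbitrary):

```java
// Each constant supplies its own implementation of the abstract
// method, and the shared field/constructor live on the enum itself.
enum Operation {
    ADD("+") { int apply(int a, int b) { return a + b; } },
    MUL("*") { int apply(int a, int b) { return a * b; } };

    private final String symbol;
    Operation(String symbol) { this.symbol = symbol; }
    String symbol() { return symbol; }
    abstract int apply(int a, int b);
}
```

That's a type-safe, exhaustively switchable set of strategies in a dozen lines, which is hard to match in most mainstream languages.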

It's why I chuckled at the guy above who was like "they put in interfaces for a reason". Yes. That reason was incompetence.

Also, controversial opinion time (or, even more controversial opinion time): "Add Getters/Setters" features in editors are a mistake and you shouldn't use them, as they encourage you to bloat your classes with needless functions that undermine your abstraction and essentially turn your classes into data objects with direct access to their fields. In general you (for the most part) shouldn't be using getters and setters (or any amalgamation of them, such as properties). Getters (and properties) I can understand, and I use them occasionally for getting data out of a class, but setters are almost always entirely useless. The interface to your class should be driven by its functionality, not its data. So you should tell a library to OrganizeBooks, not "SetOrganised" to true. Setters usually provide no opportunity for expansion, are horrible for encapsulation, and once you add a getter and a setter for a class member it may as well be a public variable, since it's the same thing.
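A minimal sketch of the "tell, don't ask" point, using the library example above (the `Library` class and its method names are made up for illustration):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Getter/setter style would expose the book list and an "organised"
// flag and let every caller poke at both. Behavior-driven style keeps
// the data private and exposes the operation itself.
class Library {
    private final List<String> books = new ArrayList<>();

    void addBook(String title) { books.add(title); }

    // The caller asks for the behavior, not the data.
    void organizeBooks() { Collections.sort(books); }

    // A narrow read-only query is fine; a setter for 'books' would not be.
    List<String> titles() { return List.copyOf(books); }
}
```

Nothing outside the class can put the list into a half-organised state, because the only way to touch it is through operations the class itself defines.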

The reason this is relevant is because during my Java job it seemed that every single class I worked with was basically a bag of data with a very thin veneer of functionality on top of it. 90% of the functions in any given class were for setting or reading a piece of data - effectively just getters and setters. All the real work was done in monumental and horrendously disorganised service classes. It was a nightmare to work with. The service model generally sucks if not done perfectly, but this was even worse than usual. They used Spring though, so the code was already going to be bad (Spring is based on Singletons - which makes it horrible by default because Singletons are nothing but an anti-pattern).

I don't know if that's a problem with Java coders in general, or if this place was just particularly shitty. I have very little respect for Java developers normally because I find that the "Java community" mostly embraces awful frameworks and terrible coding styles, and nobody ever stops to think "hey why does this framework suck so much?"
 
Last edited:
Joined
Dec 17, 2013
Messages
5,182
Java was designed to be usable in the hands of morons.
Java (and C# and other languages like that) are meant for large corporate teams, with varying levels of developer quality and a lot of dull, routine work. It's not a fun language to work with, because the base language is purely utilitarian rather than elegant, and has a lot of stuff that takes way too many steps to do (though they have been improving this over the years). And then on top of that, you have all the frameworks that a typical modern Java project must use (e.g. Spring, Hibernate, EJB, etc), and they add a ton of boring overhead to an already boring base language. The average corporate Java or C# programmer probably spends like 80% of their time on boilerplate code, which severely limits their ability to try anything new, experiment, play around. Oh, you want to try this approach instead? Well, good fucking luck refactoring all that plumbing code, the configuration files, the annotations, etc.

I am not nearly good enough at theoretical CS to understand whether or not this current corporate approach is a good idea (ie a necessary evil) or a terrible idea, but I do know from first hand experience that it's just about the most boring programming work.

By comparison, working on smaller projects in Python or Ruby (and perhaps other more elegant languages that I don't have direct experience with) is pure joy. Fuck dependency injection, fuck corporate frameworks, write a program that people think will take weeks in one day, and smile. :)
 

Jasede

Arcane
Patron
Joined
Jan 4, 2005
Messages
24,793
Insert Title Here RPG Wokedex Codex Year of the Donut I'm very into cock and ball torture
I hate working with Java so much.

That's all I wanted to write...

At least it pays well. :dealwithit:
Same.

Literally every feature of Java is broken in some way. The only awesome thing about Java is probably their enum classes. I'm very lucky that I was able to get out of my Java job and into a C# Unity gamedev job. C# is basically just Java but better anyway.

It's why I chuckled at the guy above who was like "they put in interfaces for a reason". Yes. That reason was incompetence.

Also, controversial opinion time (or, even more controversial opinion time): "Add Getters/Setters" features in editors are a mistake and you shouldn't use them, as they encourage you to bloat your classes with needless functions that undermine your abstraction and essentially turn your classes into data objects with direct access to their fields. In general you (for the most part) shouldn't be using getters and setters (or any amalgamation of them, such as properties). Getters (and properties) I can understand, and I use them occasionally for getting data out of a class, but setters are almost always entirely useless. The interface to your class should be driven by its functionality, not its data. So you should tell a library to OrganizeBooks, not "SetOrganised" to true. Setters usually provide no opportunity for expansion, are horrible for encapsulation, and once you add a getter and a setter for a class member it may as well be a public variable, since it's the same thing.

The reason this is relevant is because during my Java job it seemed that every single class I worked with was basically a bag of data with a very thin veneer of functionality on top of it. 90% of the functions in any given class were for setting or reading a piece of data - effectively just getters and setters. All the real work was done in monumental and horrendously disorganised service classes. It was a nightmare to work with. The service model generally sucks if not done perfectly, but this was even worse than usual. They used Spring though, so the code was already going to be bad (Spring is based on Singletons - which makes it horrible by default because Singletons are nothing but an anti-pattern).

I don't know if that's a problem with Java coders in general, or if this place was just particularly shitty. I have very little respect for Java developers normally because I find that the "Java community" mostly embraces awful frameworks and terrible coding styles, and nobody ever stops to think "hey why does this framework suck so much?"
Hahaha at my current company they use Lombok which adds fucking getters and setters to everything. That you can't see because they're added during compile time...
 
Joined
Jan 5, 2021
Messages
413
Hahaha at my current company they use Lombok which adds fucking getters and setters to everything. That you can't see because they're added during compile time...
Compile time, or runtime?

I know Java LOVES its reflection. Another feature which is designed to fuck up entire codebases.

My old company used Lombok as well. Was super confusing how I could use GetThing when the Thing class only had private members.

Why even bother making members private if they get compile-time generated "free" getters and setters?

See what I mean about every feature of the language being broken? I know Lombok is not technically part of the language, but it's a core part of the community around the language. They have a framework designed to "make life easier" which undermines encapsulation entirely, because the Java community has adopted the horrible Getter/Setter antipattern thanks to Beans, and now everyone has to conform to it.

The world would be a better place if Java never existed.
 
Last edited:

Jasede

Arcane
Patron
Joined
Jan 4, 2005
Messages
24,793
Insert Title Here RPG Wokedex Codex Year of the Donut I'm very into cock and ball torture
Compile time. Lombok lets you add shit like @Data or @Log

And it'll add the boilerplate getters and setters, toString, equals, hashcode, constructor etc. or the logging boilerplate.
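To make the "it doesn't exist until it compiles" point concrete, here is roughly what a Lombok `@Data` class with two fields expands to, written out by hand (sketched from memory of Lombok's documented behavior; the exact generated code differs in details like null handling):

```java
import java.util.Objects;

// Hand-written equivalent of what @Data generates for a class with
// a 'name' and an 'age' field. None of this appears in the annotated
// source file, which is the complaint above.
class Person {
    private String name;
    private int age;

    public String getName() { return name; }
    public void setName(String name) { this.name = name; }
    public int getAge() { return age; }
    public void setAge(int age) { this.age = age; }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof Person)) return false;
        Person p = (Person) o;
        return age == p.age && Objects.equals(name, p.name);
    }

    @Override
    public int hashCode() { return Objects.hash(name, age); }

    @Override
    public String toString() { return "Person(name=" + name + ", age=" + age + ")"; }
}
```

Two fields, thirty-odd lines of invisible methods. Whether that's a convenience or a landmine is the disagreement in this thread.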

Many swear by it to reduce having to write boilerplate code but I'm of the opinion that if you're writing too much boilerplate you need to rethink your design.

It fucking sucks having that shit technically "not exist" until it compiles. I don't trust it and don't use it but many do. My own philosophy? Don't add shit you don't need, ever.
 
Joined
Jan 5, 2021
Messages
413
Yeah, usually boilerplate is a sign of bad design, I agree.

If you need getters for all the variables in a class, it's not a class. It's a bag of data, and you need to completely redo your design.
 

Tavar

Cipher
Patron
Joined
Jun 6, 2020
Messages
1,055
Location
Germany
RPG Wokedex Strap Yourselves In
Fortunately, Lombok should be obsolete now given that records exist. Still, it probably would've been better to just introduce a struct keyword which gets rid of the ridiculous convention of private members with public accessors completely. But as many things with Java this is mostly a convention issue: I rarely write getters for my classes and never provide setters. I'd get rid of the getters as well, but this would confuse my coworkers too much, so I stick to the convention even if I don't find it helpful.
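For anyone on Java 16 or later, the record version of such a carrier type really is one line; a quick sketch (the `Point` type is arbitrary):

```java
// A record replaces the whole private-fields-plus-accessors dance:
// the compiler derives the accessors, equals, hashCode and toString,
// and the components are final, so there are no setters to abuse.
record Point(int x, int y) {}
```

Note the accessors are `x()` and `y()`, not `getX()`/`getY()`, which quietly drops the Beans naming convention as well.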

That said, it is beyond ridiculous to claim that every Java language feature is broken like SomeGuyWithAnOpinion does. Explain to me, for example, how lambdas in Java are broken. It is easy to criticize Java, but I'm happy that I work with Java instead of, e.g., C++ simply because of how retarded many of my coworkers are. At least Java limits the amount of havoc they can cause.
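For reference, this is what plain Java lambdas look like in practice; the class and method names below are invented for the example:

```java
import java.util.List;
import java.util.stream.Collectors;

class LambdaDemo {
    // A lambda is just a concise implementation of a functional
    // interface; here two of them back a stream pipeline.
    static List<Integer> squaresOfEvens(List<Integer> xs) {
        return xs.stream()
                 .filter(x -> x % 2 == 0)
                 .map(x -> x * x)
                 .collect(Collectors.toList());
    }
}
```

Whatever one thinks of the rest of the language, this reads about the same as the equivalent C# or Kotlin.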

In any case I think that what programming language you choose doesn't really matter in the grand scheme of things. You can build great software in any language and vice versa.
 

Jasede

Arcane
Patron
Joined
Jan 4, 2005
Messages
24,793
Insert Title Here RPG Wokedex Codex Year of the Donut I'm very into cock and ball torture
Yeah but sometimes you're stuck with Java 1.8 or 11. Records are a big improvement and personally, I like the lambdas as they can be more readable at times.
 
Last edited:

Hag

Arbiter
Patron
Joined
Nov 25, 2020
Messages
1,687
Location
Breizh
Codex Year of the Donut Codex+ Now Streaming!
Also, controversial opinion time (or, even more controversial opinion time): "Add Getters/Setters" features in editors are a mistake and you shouldn't use them as they encourage you to bloat your classes with needless functions that undermine your abstraction and essentially make your classes into data objects with direct access to their fields. In general you (for the most part) shouldn't be using getters and setters (or any amalgomation of them such as properties). Getters (and properties) I can understand, and I use them occasionally for getting data out of a class sometimes, but setters are almost always entirely useless. The interface to your class should be driven by it's functionality, not it's data. So you should tell a library to OrganizeBooks, not "SetOrganised" to true. Setters usually provide no opportunity for expansion, are horrible for encapsulation, and once you add a getter and a setter for a class member it may as well be a public variable, since it's the same thing.
Could be linked with how IT is taught. If you don't have quality lessons on how to leverage OOP's strengths, then the intuitive way is to cut straight through the mandatory class boilerplate and add getters and setters everywhere so you can resume writing in pseudo-C.
 

J1M

Arcane
Joined
May 14, 2008
Messages
14,629
Good god man, what a wall of verbal diarrhea! Just kidding, I kinda agree with much of it. I've always hated design patterns for example, not because they are always a bad idea (sometimes they are a great solution for some problems), but because they tend to put programmers (especially novice ones) into this mindset of "I gotta use all of these design patterns for every little shit", and yeah, that leads to horrific ugly-as-fuck needlessly obfuscated code. If you also think about writing code like writing literature (yes, I know they are different things), design patterns would be the loose equivalent of filling your prose with other people's paragraphs, which takes all the fun out of it. Design your own shit (unless you are stuck), it's a lot more fun. The great thing about doing it this way is as you gain more experience, you will independently arrive at some of the design patterns, and that will feel a lot better than just mindlessly trying to insert them into your code everywhere.

I actually wouldn't disagree about it being verbal diarrhea. I just sort of threw everything out there in a rant style (since it was late) rather than taking the time to really structure everything, remove fluff sentences, and generally clean up my post. Which is ironic, given that it's about clean code.

Design patterns can indeed be extremely good solutions. As can established algorithms. But as you rightly point out, whichever design pattern is popular at the time will be the "correct" way to program for a lot of people, until a new one becomes popular, then they forget the old one, and consider everything made with the old one to be "unmaintainable and out of date legacy code", because they never wrote good code to begin with. Nowhere is this more true than in the web space (I am convinced that 100% of web developers are completely incompetent if not outright frauds, but that's a rant for another day), where there seems to be a new "revolutionary" javascript framework every other week, and it always comes with some "new" design paradigm that just ends up being more boilerplate for no real benefit.

Comparing code to literature isn't that different after all. There's a reason for the old adage "good code should read like good prose".

And yeah, everyone is expected to make mistakes and write shitty code from time to time. Coding is hard, inherently, and nobody (even the experts) is particularly good at it. Humans are just not good at coding. But you can look for smells. One of them (which is usually considered a feature, not a smell) is when you have an IDelegateClassInstancerFactory or some other meaningless class names littering your codebase. If it's not a real concept with a real responsibility, ditch it. Most design patterns (and especially anti-patterns) will encourage the opposite. It's why I hate service-locator and singleton so much (seriously, NEVER use singletons. Not even in rare exceptions. They are literally completely useless in 100% of cases. There are no niches or special cases, just avoid them at all costs. They are kryptonite to good code. Even if people only skim this thread and don't bother learning anything, if they at least get the gist and as a result decide, for one random project, not to use a singleton one time, I will consider all of this worth it.)
I like your rants and I generally agree, but I do have to call out that in game development there are times when singletons make sense. For example, if it's a preferred pattern in the game engine you are using to have global access to something like a Settings or Game State object that is auto-loaded.
 
