Quality programmers write well-structured code with descriptive variable names, so it's not that fucking hard to figure out a variable's type. It will take a little more time than in a strongly typed language.
Way to miss the point entirely. The problem is not that I can't figure out the types at declaration time; the problem is that variables can change type at runtime in mysterious, difficult-to-debug ways, which makes the code very error-prone.
But you will save a lot more time from Python's other positive features.
As you say, business is all about tradeoffs. Develop in Python and you will get rapid development. You'll also be stuck with Python. Want speed at scale? Too bad, you're stuck with Python. Need to interact with memory directly? Too bad, you're stuck with Python. Need to write testable code that works reliably with any input? Too bad, you're stuck with Python (or any other weakly typed language).
Python has its place. It's excellent for build scripts and little tools for doing things like automating keystrokes in games. But leave the real programming to the grown-ups.
This proves my point above. If you have shitty programmers, yes, this can become a problem. But in that case you will have a lot of other problems anyway. Otherwise, it's really not that difficult to work with the dynamic freedom Python gives you.
Have you never in your entire life written code containing a mistake?
All it takes is a one-second lapse of concentration and you have an impossible-to-debug error that the compiler won't warn you about, because it's a "feature".
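To make that concrete, here's a minimal made-up sketch of the kind of slip being described: a value that arrives as a string and never gets converted. Python doesn't complain at all; it quietly produces garbage, because `str * int` is a perfectly legal "feature" (sequence repetition).

```python
price = "19.99"             # read from a CSV and never converted - the one-second lapse
quantity = 3

total = price * quantity    # no error, no warning: str * int is sequence repetition
print(total)                # "19.9919.9919.99" - silently wrong, and nothing flagged it
```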
I'm very fond of strict typing because it makes it possible to perform large-scale refactors safely... Or so I thought. But then, even in a typed language, just because your stuff still compiles after the refactoring, there are actually zero guarantees that it still does the correct thing. So you must have sufficient automated test coverage, or at the very least test manually to make sure your code is not completely broken, even though it compiles just fine. Which brings the *drawbacks* of strict typing into focus: you often need to do a lot of busywork just to satisfy the compiler, and at the end of the day you still need to test your stuff; you can't just go "it compiles -- ship it! woohooo!". Compiler errors and warnings only prevent a small subset of possible problems; in no way can a compiler guarantee *correctness*. No language can do that, and I don't think even Ada can 100% of the time (though I have no actual working experience with it).
The point of strong typing is not to guarantee logical correctness; it's to guarantee consistency. If I have a function that takes two int values, adds them together, and returns the result, I know I can pass in any two values and am guaranteed that the function works the same way, will not throw a runtime error, and will generally behave consistently based on the inputs. Obviously there are caveats - we have to account for nulls and invalid states in many functions (although things like the null object pattern can help with this), and sometimes we have to catch possible exceptions - but generally functions can have consistent results, especially in languages like C++ that can provide a no-throw guarantee.
With a weakly typed language, I have no such guarantees. If I pass in two strings (or a string and an int) it will concatenate them; if I pass in two objects it will do whatever their + operator tells them to do, or throw a runtime error if the type has no + operator defined.
Weak typing gives me no control over what is passed to the function and no guarantee that what is passed will be of a valid type for the operation.
With strong typing, I can guarantee at COMPILE TIME that I am, at the very least, calling this function in a way that will produce a valid answer. I cannot guarantee that the answers I get back will be used correctly, or that I will pass in the correct values, but as a function, I can guarantee that it works. With tests, this gets even better, because I am free to refactor without fear of silently changing what the function does.
I have no such guarantee with weak typing. At any time someone can throw something at my function that generates a runtime error because it's an invalid type (which means everything needs to be wrapped in exception handling), or they can pass a valid type that isn't the intended type and turn my Sum function into a Concat function. Worse, we now have undefined behaviour: if Bob's code was fetching numbers from a table and passing them to my Sum function, everything will work correctly until some idiot adds "three" to the table - now when Bob calls the function we get undefined results (likely a concatenation), no error, and invalid data. All because we wanted to use a "nice and easy" weakly-typed language. Data can change the functionality of my code - that is INSANE! How can I guarantee ANYTHING when my fundamental program logic can be modified by the data passed in by external code?
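For what it's worth, here is roughly what that scenario looks like in Python (a hedged sketch - Python is dynamically but *strongly* typed, so mixing an int and a str raises a TypeError at runtime rather than concatenating; two strings, however, will happily concatenate):

```python
def sum_values(a, b):
    return a + b

print(sum_values(2, 3))          # 5 - works as intended
print(sum_values("2", "3"))      # "23" - the Sum function silently became Concat
print(sum_values(2, "three"))    # TypeError at runtime, discovered by Bob's users
```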
You are correct that the compiler won't save me if I write bad code. But it's critically important that the compiler is able to detect these sorts of typing errors so that they don't become runtime errors. Compiler errors will be found by me and fixed before I ship (obviously), but runtime errors are going to be found by my users. As a developer you should ALWAYS prefer compile errors over runtime errors. All that "busywork to appease the compiler" isn't just unnecessary, annoying boilerplate; it's fundamentally protecting your code from bad inputs and ensuring you're not doing something monumentally stupid.
This is especially important when using inheritance/polymorphism. Polymorphic code is extremely safe in strongly-typed OO languages because of how much the compiler takes care of. I have to know ahead of time which functions I can call and which variables I can query/set, but that's a benefit. In non-polymorphic strongly-typed languages (like C) and weakly typed languages (like all the garbage the zoomers use), polymorphism is an absolute crapshoot: the compiler has no idea whether a function call is valid and can give you no help. Weakly typed languages don't really have a concept of polymorphism anyway (and how could they - base types don't exist because declared types don't exist), so usually you have to resort to passing objects around, calling functions by name, and hoping everything works. And you can't test it, because just like with my Sum function there's no way to guarantee someone isn't passing something invalid into the function and causing a runtime error (usually some sort of invalid-method error). The worst you can do in a strongly typed language is pass a null reference (and in languages like C#, even that is slowly being fixed thanks to nullable reference types).
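As a rough illustration of the "pass objects around and hope" style (class and method names here are invented for the sketch), in Python the check only happens when the call actually executes:

```python
class Invoice:
    def total(self):
        return 100

class Receipt:
    def amount(self):            # similar class, but no common declared interface
        return 100

def print_total(document):
    print(document.total())      # duck typing: call it by name and hope it exists

print_total(Invoice())           # 100
print_total(Receipt())           # AttributeError at runtime - nothing caught it earlier
```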
Of course, this all goes out the window if you throw. My Sum function can return an int, but if it can also throw a RuntimeException, well, there goes my type safety. This is why, when using strongly-typed languages, I highly recommend not throwing in any of your core business code: only throw for wacky shenanigans (like database failures), and handle those exceptions close to where they are raised.
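A small sketch of that policy in Python (the sqlite3 usage, schema, and function names are assumed examples, not anyone's actual code): the database failure is caught right next to the query and turned into a plain value, while the core calculation never throws or catches anything.

```python
import sqlite3

def fetch_balance(conn, account_id):
    # The "wacky shenanigans" (I/O that can fail) are handled here,
    # right next to where the exception can be raised.
    try:
        row = conn.execute(
            "SELECT balance FROM accounts WHERE id = ?", (account_id,)
        ).fetchone()
    except sqlite3.Error:
        return None              # translate the failure into a value the caller checks
    return row[0] if row else None

def apply_interest(balance, rate):
    # Core business logic: no try/except, nothing thrown, just a computation.
    return balance + balance * rate
```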
Some weakly typed languages (e.g. PHP) let you use "type hints" for things like function arguments, but these barely work, provide no protection, and are a band-aid on top of the problem. PHP has no real way to handle polymorphism other than "run it and hope for the best", and neither does Python from what I recall.
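To illustrate the Python half of that claim with a minimal sketch: the annotations are there, but CPython ignores them when the function is actually called; any enforcement comes from an optional external checker such as mypy.

```python
def add(a: int, b: int) -> int:
    return a + b

print(add(2, 3))        # 5
print(add("2", "3"))    # "23" - the hints are not checked at runtime

# Running a static checker like mypy over this file would flag the second call,
# but nothing stops it from executing.
```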
Which language runs on 3 billion devices? Checkmate, atheists.
Python virgins DESTROYED by Java chads
Also, I hate `int` promotion in C/C++ with a passion. That inconsistent, arbitrary, compiler- and platform-dependent misfeature should be killed with fire. But it won't be; it's legacy cruft that maybe made some sense on old architectures where all you had was bytes and 16-bit words, or maybe 32-bit dwords, and you just can't break compatibility with the metric tonnes of legacy code out there that run the whole world...
Yes. The other misfeature of C/C++ that I hate is how "dynamic" the / operator is. Oh, you were expecting a proper divide? Joke's on you, have an integer divide. Enjoy your divide-by-zero error, bitch!
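For contrast, since this thread started out about Python: Python 3 splits the two meanings into separate operators, so at least the integer-divide surprise can't happen silently (a small aside, not a claim that this fixes anything in C/C++):

```python
print(7 / 2)     # 3.5 - "/" is always true division in Python 3
print(7 // 2)    # 3   - "//" is explicit floor division
print(7 / 0)     # raises ZeroDivisionError instead of being undefined behaviour
```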