Because you are playing them right.
In a zero-sum game success breeds success, and you can't avoid that without blatant cheating by the AI or frustrating restrictions ("punishment for success") that kill any joy of progressing in the game.
This is the crux of the issue. There's also what The Brazilian Slaughter said regarding easy-to-manage empires that ignore entropy. This raises a problem of definition: is the strategy game in question a simulation, or a sort of electronic "board game" in which you play quick matches against adversaries?
If you want to play a quick match, then you need clear victory conditions and a consensual, that is, 'narrow', set of rules and procedures. In a match of chess, a superior player will win every time, and his victory can be dissected by retracing his moves, while the losing player can similarly observe and learn from his own mistakes in an objective, impersonal way, as if they represented some universal chess player. This is permitted by the logic of the game, which is highly deductive, which is to say universal. Every "moment" of the game can be isolated and studied as a set piece by a detached observer. And where there is deduction, there is being absolutely right or wrong, which extends to the victory conditions, which are naturally zero-sum. This is intrinsic to the game. Any attempt to add granularity by means of simulationist, realistic or roleplaying additions would tarnish its elegance considerably.
However, the domain of strategy proper is much larger than solving logical puzzles or making genius moves five steps ahead based on assumptions. We don't need to get too involved in definitions. We can look at world politics in a conceptual framework and ask ourselves what traditional concepts such as "winning" and "losing" would mean in this context. No one ever thought that winning meant you had to eliminate all opposition, for example. In Rome TW you could paint the entire map red, while historically this was obviously impossible (some games try harder than others at hiding their "board game" nature). Why this was, and continues to be, impossible is an open question, but certainly the reasons are varied, and some of them may be inextricably linked to human nature.
Let's be clear, though. If you're going to simulate imperial expansion, it really wouldn't make sense to depict a single "entropic" variable that scales with your territory. Sometimes the opposite is true, and larger entities (even "too large" ones) are more stable: imagine the chaos that would erupt if big historical territories like China, Russia, etc. had to deal with breakaway "provinces". The reality of the matter is always layered and often paradoxical.
In other words, there's no way to simulate such complex realities in a consensual, easily definable way. It soon turns into a "project" where people use computers to paint abstractions and "fill in the blanks", much like our universally praised climate models. Every setting requires different rules and variables to depict whatever process it is imagining, and each game must possess its own metrics for success, if it needs those at all. I really liked playing Victoria 2 for this reason, because it allows you to
roleplay (that's the keyword here) basically any scenario that is historically plausible.
I mentioned Victoria 2 for its openness, but it's obviously a very limited game as far as strategy is concerned. How do you make such a game challenging, apart from generic improvements to the AI, which most games would benefit from? This is the entire point of the thread, which I'm only now coming to.
Obviously, you CAN add more granularity. What happens if you add a province to your empire? Does it make the game easier or harder? If the game becomes easier, is it plausible that it should be so? Apply the same reasoning to every aspect of the game. Is it plausible that you should have political stability for 50 years, that no one tries to kill the king, that no general rebels, that you never have a bad harvest, that people's loyalty doesn't change, that people will live in the same place and hold the same jobs, and so on?
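To make the "does a bigger empire get easier?" question concrete, here is a minimal sketch of one way to model it: each province is an independent per-turn chance for trouble (rebellion, bad harvest, disloyal general), so expansion raises the expected upkeep of order instead of simply making the game easier. Every name and number here is an invented illustration, not a mechanic from any actual game.

```python
# Illustrative sketch: trouble scales with territory because each province
# is another independent roll per turn. All probabilities are made up.
import random

def turn_events(num_provinces, p_trouble=0.02):
    """Return how many provinces develop trouble this turn."""
    return sum(1 for _ in range(num_provinces) if random.random() < p_trouble)

# Expected trouble grows roughly linearly with empire size:
for size in (10, 50, 200):
    avg = sum(turn_events(size) for _ in range(1000)) / 1000
    print(size, round(avg, 2))
```

The point of the sketch is only that a flat per-province risk already produces the "layered" outcome discussed above: small empires can coast for decades, while large ones face something going wrong almost every turn.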
The problem with granularity and complexity, in general, is that the AI needs to become exponentially more competent. You can program a computer to win at chess, but you can't make it understand the Middle East, even if it's Fantasy Middle East (I should patent this idea before some 4X company gets to it).
My other, more economical idea is to have the AI behave something like the aliens in the movie They Live. What I mean by this is that the AI should be very good at doing AI things and at recognizing AI behavior in others, while being extremely sensitive to anything that falls outside what it deems "normal" behavior. If the player suddenly starts behaving like a human being who is aware of their existence, they should become dispositionally hostile towards him. You could invert the analogy, to humans who find out about the aliens, whatever. The point is, the AI should be aggressively conformist and punish the player who is too smart for his own good. This is no different from any political system, where upstart powers (even very small ones) that grow too quickly arouse the suspicion and, eventually, the hostility of the status quo powers. What you have now, instead, is a pseudo-Hobbesian system of every AI for itself, which is highly exploitable by the player. The player should instead be encouraged to hide his own power, or to postpone his victories until he has secured wider support.
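The "They Live" idea above can be sketched as a simple anomaly-scoring loop: the AI keeps a rolling window of how often the player's recent moves fall outside its notion of "normal" play, and ramps hostility up once the deviation crosses a tolerance. Everything here (class name, thresholds, move labels) is a hypothetical illustration of the heuristic, not code from any existing game.

```python
# Hypothetical sketch of a conformist AI: it scores each player action as
# normal (0) or anomalous (1), and turns hostile when too many anomalies
# accumulate in its recent memory. All names and numbers are invented.
from collections import deque

class ConformistAI:
    def __init__(self, window=10, tolerance=2):
        self.recent = deque(maxlen=window)  # rolling window of anomaly scores
        self.tolerance = tolerance          # anomalies allowed before hostility rises
        self.hostility = 0.0                # 0.0 = indifferent, 1.0 = maximally hostile

    def observe(self, action, expected_actions):
        # Moves the AI itself would make score 0; anything else scores 1.
        self.recent.append(0.0 if action in expected_actions else 1.0)
        if sum(self.recent) > self.tolerance:
            # Player is behaving "too cleverly": escalate.
            self.hostility = min(1.0, self.hostility + 0.1)
        else:
            # Conforming play slowly restores the status quo.
            self.hostility = max(0.0, self.hostility - 0.02)

ai = ConformistAI()
normal = {"build_farm", "build_road", "trade"}
for move in ["build_farm", "exploit_diplo_loophole", "snipe_capital",
             "exploit_diplo_loophole"]:
    ai.observe(move, normal)
print(ai.hostility)
```

Note the asymmetry in the update: hostility rises faster than it decays, which is one way to capture the idea that the player who wants to keep exploiting the system must deliberately hide his power and play "normally" for long stretches.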