Tactics [with variations for different enemy types], what is the point?
Does spreading AI behaviors across specialized units somehow make tactics better? Not really. Chess has more kinds of pieces than Go, but that doesn't make it tactically deeper. I didn't mean to imply that enemy variety would weaken the tactical challenge, just that it's irrelevant to it. Variety and challenge aren't in opposition, but they aren't related either; you don't need one in order to have the other. So if you think the point of tactics is challenge, then enemy variety shouldn't be relevant.
So, what is the point of tactics?
If you think it is to make the player think, then deep AI seems like a good solution to me.
If you think it is something else, then we should be discussing what exactly.
Ok ok now I get where you're coming from. I agree with you.
I'm not going to debate which is deeper, Go or Chess. The other guys can argue about that all they want.
All I'm bringing into the discussion is how to improve a tactics game in theory; I'm not really interested in the semantics/definitions. My point is that breadth and depth both make for a better experience for the player in the end.
You *could* make a game where all units are the same (i.e. Go), but well, I'm just not into that.
----
Ok about the actual AI, I'll share a few things about how programmers do it.
One way to do this, and the one I have always preferred, is to vary the ability of the enemy AI to simulate the battle plan.
...
The enemy AI considers the current battle lineup, simulates all the moves it can make within that turn, and estimates the outcomes. Then it tries to simulate what the player will do and compares the results for the given "turn" that includes both player and enemy moves.
Anticipating what the player will do, that's typical of Chess AI actually.
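To make that concrete, here's a rough Python sketch of that per-turn simulation. The four callables (enemy_moves, player_moves, apply_move, evaluate) are hypothetical placeholders for whatever move generation and scoring your game actually has; nothing here is tied to a specific engine.

```python
def best_enemy_move(state, enemy_moves, player_moves, apply_move, evaluate):
    """For each enemy move, assume the player answers with the reply that
    hurts the enemy most, then keep the enemy move whose worst case still
    scores best. evaluate() scores a state from the enemy's point of view
    (higher = better for the enemy). This is a one-turn minimax."""
    best_move, best_score = None, float("-inf")
    for move in enemy_moves(state):
        after_enemy = apply_move(state, move)
        worst_case = min(
            (evaluate(apply_move(after_enemy, reply))
             for reply in player_moves(after_enemy)),
            default=evaluate(after_enemy),  # player has no possible reply
        )
        if worst_case > best_score:
            best_move, best_score = move, worst_case
    return best_move
```

Push that same simulation deeper (enemy turn, player turn, enemy turn again...) and you get the kind of lookahead chess engines do, at the usual explosive cost.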
But what I want to note here is that letting the AI create a series of steps to do, in the first turn alone, before it even moves any of its units, is what "planner" types of AI do. "Planner" is an umbrella term and there are many implementations of it.
The first significant one was G.O.A.P. (Goal-Oriented Action Planning). This was the AI used in the FPS F.E.A.R., Empire: Total War, and Fallout 3.
Then there's H.T.N. (Hierarchical Task Network), used in Killzone and those recent Transformers games.
And finally there are behaviour trees, used in Halo 2 (and presumably onwards in the series), all the Crysis games, Spore, and probably a bunch of others I don't know about.
The problem to note here is that even though the AI can make a plan, it also has to worry about being able to make contingency plans (a plan B if plan A doesn't work). Some implementations struggle with this more than others.
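To give a feel for what a planner actually does, here's a stripped-down GOAP-style sketch: the world state is a set of facts, actions have preconditions and effects, and the planner searches for a sequence of actions that reaches the goal. The actions and facts are invented for illustration; this is not how F.E.A.R. or any of the games above literally implemented it.

```python
from collections import deque

# name: (preconditions, effects) -- made-up example actions
ACTIONS = {
    "draw_weapon":   (set(),                 {"armed"}),
    "move_to_cover": (set(),                 {"in_cover"}),
    "attack_player": ({"armed", "in_cover"}, {"player_suppressed"}),
}

def plan(current_facts, goal):
    """Breadth-first search from the current facts to any state that
    satisfies the goal. Returns a list of action names, or None."""
    start = frozenset(current_facts)
    frontier = deque([(start, [])])
    visited = {start}
    while frontier:
        facts, steps = frontier.popleft()
        if goal <= facts:                      # every goal fact is achieved
            return steps
        for name, (pre, eff) in ACTIONS.items():
            if pre <= facts:                   # action is applicable here
                nxt = frozenset(facts | eff)
                if nxt not in visited:
                    visited.add(nxt)
                    frontier.append((nxt, steps + [name]))
    return None

print(plan(set(), {"player_suppressed"}))
# -> ['draw_weapon', 'move_to_cover', 'attack_player']
```

The contingency part is usually handled by replanning: if an action fails at runtime, you call plan() again from whatever the facts are now and that becomes your plan B. Cheap in a toy like this, expensive once the state and action sets are realistic.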
If you guys wanna know the nitty-gritty details on that, I gave a talk about it before (PDF slides):
https://drive.google.com/file/d/0B2Y7jOiWWy4MMDhVeE1GclBaLVU/edit?usp=sharing
Or you could just Google those things I mentioned.
----
If the AI is smart, it chooses the optimal solution, i.e. the option where it takes the least damage and the player takes the most. If it is dumb (suicidal AI), it chooses whatever deals the most damage, regardless of the damage it receives. If it is cowardly, it chooses the option with the least damage to itself, etc.
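In code, those personalities are really just different scoring functions run over the same list of candidate actions. A toy sketch (the damage estimates would come from your own combat simulation, not these hard-coded numbers):

```python
# Candidate actions with estimated outcomes (made-up numbers).
candidates = [
    {"name": "charge", "damage_dealt": 8, "damage_taken": 6},
    {"name": "snipe",  "damage_dealt": 5, "damage_taken": 1},
    {"name": "hide",   "damage_dealt": 0, "damage_taken": 0},
]

objectives = {
    "smart":    lambda a: a["damage_dealt"] - a["damage_taken"],  # best trade-off
    "suicidal": lambda a: a["damage_dealt"],                      # ignore own losses
    "cowardly": lambda a: -a["damage_taken"],                     # avoid harm above all
}

for personality, score in objectives.items():
    print(personality, "->", max(candidates, key=score)["name"])
# smart -> snipe, suicidal -> charge, cowardly -> hide
```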
About choosing the optimal solution: in practice, games so far (AFAIK) can only do this reasonably for the short term, meaning the AI can indeed choose an optimal action, but it can only reliably pick the best thing to do for the current turn. This is from what I hear; I haven't really tried it, to be honest. I guess it's too much strain on the computer to always think out an optimal solution for the long term (the problem, I think, is taking contingency plans into account: too many permutations).
They call these types of systems "utility" AI. There's another type of AI system that does something similar, called decision trees. Not sure if those two are really different in practice; depends on your particular implementation, I guess.
The basic idea here is to make the AI list out all the actions it can do at the moment, as in everything: charge, retreat, flank, snipe, use a buff ability, whatever. Then assign a "usefulness" score to each action based on "guesses" about the current situation. Then it simply chooses the one with the highest score.
How it actually comes up with that usefulness score pretty much depends on the formulae you put in (which should take the game's rules into account).
Some games also put a "preference" score in addition to the "usefulness" score, to account for various AI personalities, and doing that lets designers tweak the AI personality easily by just changing the preference values.
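Put together, a minimal utility-AI loop could look something like this. Every formula and preference value below is an invented example, not from any shipped game; the point is just the shape: score every available action, scale by personality, pick the top one.

```python
def score_actions(situation, preferences):
    """Crude, made-up 'usefulness' heuristics (all inputs in 0..1),
    scaled by designer-tuned per-personality preference weights."""
    usefulness = {
        "charge":  situation["enemy_hp_low"] * 0.8,
        "retreat": situation["own_hp_low"] * 0.9,
        "flank":   situation["enemy_exposed_side"] * 0.7,
        "buff":    (1.0 - situation["buff_active"]) * 0.5,
    }
    return {a: u * preferences.get(a, 1.0) for a, u in usefulness.items()}

situation = {"enemy_hp_low": 0.9, "own_hp_low": 0.2,
             "enemy_exposed_side": 0.6, "buff_active": 0.0}

berserker = {"charge": 1.5, "retreat": 0.3}   # personality = preference tweaks
scores = score_actions(situation, berserker)
print(max(scores, key=scores.get))            # -> charge
```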
----
This dichotomy between thinking on your feet (and choosing an immediate gain), VS making investments (losing a little bit at the start with the promise of much better gains later on) and planning for the long term, reminds me of Rommel VS Montgomery in the African theater of WW2.
Rommel was a fast thinker (and did well with the limited supplies given to him), while Montgomery was a methodical planner. Rommel lost, of course, but don't let that make you think that thinking on one's feet is inferior. There were more factors behind that campaign's outcome than the personalities of the generals on either side (morale, limited ammo and supplies, availability of air support, arguably even political meddling, etc.).
----
You do, of course, have your conventional "finite state machine" that you could use, but that has pretty much fallen out of favor for AI (it's still used in other parts of a game, though).
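For completeness, here's about the smallest possible finite state machine: the AI sits in one state and hops to another when an event fires. States and events are invented for illustration.

```python
# (state, event) -> next state
TRANSITIONS = {
    ("patrol", "saw_player"):  "attack",
    ("attack", "low_health"):  "flee",
    ("attack", "lost_player"): "patrol",
    ("flee",   "healed"):      "patrol",
}

state = "patrol"
for event in ["saw_player", "low_health", "healed"]:
    state = TRANSITIONS.get((state, event), state)  # unknown events change nothing
    print(event, "->", state)
# saw_player -> attack, low_health -> flee, healed -> patrol
```

Simple and easy to debug, which is why FSMs stuck around so long, but they get unwieldy as the number of states and special cases grows, which is partly why the planner and utility approaches above took over.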
----
For example, you could allow the really smart villains in the game to use past information against you.
If you want this to be automated, I think the keyword you need to Google is "machine learning". The idea is an AI that starts (relatively) dumb and automatically learns from its mistakes until it gets really good. I think the pet creature in Black & White uses that, but I'm not sure.
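As a toy illustration of the "learns from its mistakes" idea (and definitely not how Black & White actually did it), the AI could keep a score for each of its counters against each player tactic it has seen, nudge that score by the outcome, and favor whatever has worked before:

```python
import random
from collections import defaultdict

scores = defaultdict(float)            # (player_tactic, counter) -> learned value
COUNTERS = ["spread_out", "focus_fire", "fall_back"]

def choose_counter(player_tactic, explore=0.1):
    if random.random() < explore:      # occasionally try something unproven
        return random.choice(COUNTERS)
    return max(COUNTERS, key=lambda c: scores[(player_tactic, c)])

def learn(player_tactic, counter, won, rate=0.2):
    reward = 1.0 if won else -1.0
    key = (player_tactic, counter)
    scores[key] += rate * (reward - scores[key])   # drift toward recent outcomes
```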
Machine learning is also a bit related to evolutionary AI, meaning a survival-of-the-fittest kind of AI creation: generate a bunch of dumb AIs at random. The ones that perform at least a little well get to stay; the rest are deleted. Then, among the remaining ones, you create new combinations. Repeat the process (good performers stay, the rest get deleted) until you reach a point where the AI is pretty good.
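The evolutionary loop itself is short to write down. A compressed sketch where an "AI" is just a vector of behavior weights and fitness() is a placeholder for however you would actually score one (e.g. win rate over a batch of simulated matches):

```python
import random

def random_ai():
    return [random.uniform(0, 1) for _ in range(4)]   # e.g. aggression, caution, ...

def fitness(ai):
    return -sum((w - 0.7) ** 2 for w in ai)           # placeholder scoring function

def evolve(generations=50, pop_size=20, keep=5):
    population = [random_ai() for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        survivors = population[:keep]                         # good performers stay
        children = []
        while len(children) < pop_size - keep:
            a, b = random.sample(survivors, 2)
            child = [random.choice(pair) for pair in zip(a, b)]             # crossover
            i = random.randrange(len(child))
            child[i] = min(1.0, max(0.0, child[i] + random.gauss(0, 0.1)))  # mutation
            children.append(child)
        population = survivors + children
    return max(population, key=fitness)

print(evolve())
```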
For the most part, I think it's really hard to "guide" an AI like this so that it ends up being fun for the player. It could end up just being annoyingly smart (i.e. a troll) if not done properly.
However, I do think it's a good tool for fishing out overpowered tactics in your game (because the evolved AI will simply exploit any cheesy tactic), to help you tweak the rules. So if you don't have a legion of playtesters doing QA for your game, this will do in a pinch. Check out:
http://aigamedev.com/open/interview/evolution-in-cityconquest/