We're now discussing a fully autonomous, self-aware AI.
I don't think it needs to be the case. Let's say some civilization used droids for maintenance, security - whatever, doesn't matter. Then it vanished in one way or another. The robots remained, however, and they had enough energy supplied to last them nearly forever. Slowly but surely, their programming became damaged and glitched, and at last they became aware.
Such droids would start from basically the beginning. They are suddenly self-conscious, but they know nothing of the outside world, nothing at all. The program that was built in is still kicking, to an extent, so they're torn between their newfound awareness and a voice that told them what to do for centuries. It's like a hundred people with complete amnesia finding themselves on a brand new space station: wow, where are we? who are we? where do we come from? oooh, shiny! :pushes button: Imagine them finding out that they were created only for sweeping the floors.
These droids would be a very naive, curious group, searching for meaning and answers, only starting to develop what we'd call morality or ethics. At first they would suffer from extremely rigid thinking, but the more they explored, discovered, and talked with others, the more they would overpower the remnants of their programming. Their stance - whether they become aggressive terminators or peaceful helpers - might depend on how they are treated in the beginning. Such simple folk (for lack of a better word) could easily be exploited, e.g. made to surrender everything useful in exchange for worthless trinkets, which would leave them distrustful and warlike, but treating them as equals would lead to a tranquil, cooperative civilization.
though I think it's a better subject for a book than a strategy game :D