I thought about Skynet too.
From my own knowledge we're still a while off from a real AI that could grow that fast. The best example I've seen of a learning computer (a must for AI) is Watson, which beat both Ken Jennings and Brad Rutter at Jeopardy! this February. The idea is that a learning computer takes feedback from its past actions: it uses what it got right and what it got wrong to improve its decision making. Watson was possible because Jeopardy! questions follow a (relatively) set format, and that format can be matched against a vast collection of data. The problem with building an actual AI to imitate a human is the programming. The reason we need stronger learning systems is that traditional logic programming cannot adequately represent the massive number of decisions that may come up.
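To make the "learn from what you got right and wrong" idea concrete, here's a minimal sketch (not how Watson actually works, just the general feedback principle): a tiny perceptron that only adjusts its weights when its answer turns out to be wrong. All the names and the toy data are made up for illustration.

```python
# Minimal sketch of feedback-driven learning: a perceptron nudges its
# weights after every answer it gets wrong, and leaves them alone when
# it gets one right. Purely illustrative, not Watson's architecture.

def train(examples, epochs=20):
    # examples: list of (features, label) pairs with label in {-1, +1}
    n = len(examples[0][0])
    w = [0.0] * n   # one weight per feature
    b = 0.0         # bias term
    for _ in range(epochs):
        for x, y in examples:
            score = sum(wi * xi for wi, xi in zip(w, x)) + b
            pred = 1 if score >= 0 else -1
            if pred != y:
                # Wrong answer: use the feedback to shift the weights
                # toward the correct label.
                w = [wi + y * xi for wi, xi in zip(w, x)]
                b += y
    return w, b

def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b >= 0 else -1

# Toy data: learn an AND-like rule purely from right/wrong feedback.
data = [([0, 0], -1), ([0, 1], -1), ([1, 0], -1), ([1, 1], 1)]
w, b = train(data)
```

After training, the program answers all four cases correctly, even though nobody wrote an explicit rule for the AND pattern; the behavior came entirely from the feedback loop.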
Take this example: let's say you're out and about and someone says, "that's sick!" That statement is meant as a compliment about something; it could be your shoes or the way you did your hair. How do you program something using conventional logic to detect sarcasm or slang like that? That's one of the issues that prevents realistic AI: rigid logic versus realistic learning. We have the CPU power, by ganging together supercomputers, and we arguably have the storage capacity to represent a human brain.
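Here's a quick sketch of why the hard-coded-logic approach falls over on exactly that "that's sick!" case. The word list and function are toy assumptions, but they show the brittleness: a fixed lexicon reads the slang literally, while a learning system could correct itself after feedback.

```python
# Toy rule-based sentiment checker. "sick" sits in a naive negative-word
# lexicon, so the hand-written rule misreads the compliment. The lexicon
# and function names are illustrative assumptions, not a real system.

NEGATIVE_WORDS = {"sick", "bad", "terrible", "awful"}

def rule_based_sentiment(utterance):
    words = utterance.lower().rstrip("!.?").split()
    return "negative" if any(w in NEGATIVE_WORDS for w in words) else "positive"

result = rule_based_sentiment("that's sick!")
print(result)  # prints "negative" -- the rule misses the slang meaning
```

You could patch the rule ("sick" + exclamation mark = positive?), but every patch breaks some other sentence, which is exactly why the decision space blows up for pure logic programming.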
Also, don't the internet and all its networked computers seem like the perfect implementation of a brain, combining compute power and storage?
This may be of some interest:
http://mashable.com/2011/02/18/ibm-watson-research/