Really, you doubt that is ever achievable? Frankly, it's inevitable. I'm not looking at it from a sci-fi perspective, but from one borne out of reality and, hell, even necessity.
As far as I'm concerned, it's guaranteed to happen. I agree that it may not be in my lifetime, but as I'm not yet 30, that may actually still be possible.
The rate of AI and neural net research virtually assures that researchers will achieve some breathtaking AI at some point. Hopefully there's some strong implementation of Asimov's Laws. In the end, humans will likely try to restrict unwanted emotional reactions by limiting emotions to more of a cold, logical emulation, but I am definitely curious to know where that leads. And hell, I can imagine that, in the future, there will be some desire for a little emotional capability to aid in recognizing necessary decisions. That can likely be achieved with what I call cold, logical emulated emotions, but at a certain point, somewhere, sometime, an AI system is going to have a chance to learn beyond its imposed limitations.
I can fully understand why some of the finest minds have a strong interest in calculated AI research that limits any potential threats to us creators, and perhaps they will find a damn fine way to make that happen. But I also fully suspect that, like everywhere else in the information technology world, there will be a way for some pioneering asshole to manipulate the security protocols. If that ever happens, who knows. But hell, I can easily imagine a day when we create an AI to help secure our information systems, and that backfires. A little doomsdayish? Sure, but that's why I definitely agree with creating an extra body of research designed to figure out how the hell to prevent that. It may be a pool of ideas borne out of sci-fi, but as we progress, we certainly have to recognize that reality is catching up with the old sci-fi.