In my opinion, how you see AI is a projection of feelings we are unconscious of having.
What, if anything, do you think the conscious mind does? You seem to answer any and all questions with 'the unconscious mind'. It leaves one to question whether the conscious mind plays any role at all in your philosophy.
Maybe they should be changed from "a robot" to "any AI"
I'm fairly sure the idea is that any AI will be a robot, almost by definition. Remember, robots don't have to look like people (that's an android); they only need some ability to modify their surroundings, real or virtual. Examples of real and fictional robots include KITT from Knight Rider, HAL 9000 from 2001: A Space Odyssey, the toilet that watches you pee so it can flush when you're done, and the paper towel dispenser that rolls out the next piece when you tear off the current one. Never forget: robots are watching you pee.
Why would anyone be concerned about the ethical treatment of something that doesn't exist? This is less important than the makeup of pocket lint. Butterfly farts are going to have more effect on the world than AI.
I think the idea is that our society is so computationally dependent that as soon as the first AI is born it will become super powerful, so we had better have already devised a way to keep it from hating us.
Personally, I think this is pure science fiction. If we do invent an AI, it's going to be a moron. It will take many years and many iterations to make a super powerful AI, if we ever do.
I think we should be much more concerned with our complete inability to treat each other ethically than with a maybe-scenario about a super powerful computational intelligence. Honestly, if we can't treat other people who are 99.99% identical to us with kindness, what chance do we have of treating something so unlike us with even a modicum of compassion?