Consciousness is something humans barely understand about their own brains. In fact, that may not be a bad thing. If we had a superintelligent robot with consciousness, it could kill us in two ways:
- intentionally: it wants to do it, probably by reprogramming itself to do it
- by mistake: bugs introduced by its human programmers/designers
On the other hand, if the superintelligent robot has no consciousness, only one danger can kill us: bugs in its software or faults in its hardware. Ever since computers entered our lives, their software and hardware bugs have hurt us quite often, sometimes badly. Still, we can handle that.
If a superintelligent robot with consciousness is much cleverer than us and decides to hurt us intentionally, our options would be very limited. Therefore, we should set aside the consciousness question, which we do not understand anyway, and build a superintelligent robot without consciousness: a potentially friendly robot.
Well, an evil programmer could still write code to make the robot do bad things, but that would make the programmer a human criminal, and we know how to deal with those, don't we?