We are giving our robots the ability to channel not just algorithmic but phenomenal consciousness. And we may, for better or worse, withhold sentience from them. We may also program their morality up to utopian perfection. But how do we keep them from observing human aggression and abuse (direct or indirect), which they might absorb and begin processing as unsupervised learning models? We had better evolve accordingly, so as to deserve our own creation.