We do not know how to align a very intelligent AI agent's behavior with human interests. I investigate whether, absent a full solution to this AI alignment problem, we can build smart {\ai} agents that have limited impact on the world and that do not autonomously seek power. In this thesis, I...