Legible AI
There is a push for "legible AI" – that is, implementations of AI where humans are able to see, perhaps after the fact, why an AI agent made the decision it did.
I believe this will necessarily require partitioning – that is, it will require us to place meaningful (to us) sentinels inside the AI's state machine. That necessarily imposes a restriction on how the AI functions, and thus on its abilities. See Conway's Law.
But the nice thing is that we can build as many AIs as we want, with different sentinels at different locations, run them all in parallel, and compare their results.
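As a minimal sketch of what this could look like: sentinels as named checkpoints that force a decision pipeline to materialize a human-meaningful value at chosen points, with two variants instrumented at different locations and run on the same input. The pipeline stages, sentinel names, and toy scoring rule are all hypothetical illustration, not a real AI system.

```python
# Hypothetical sketch: "sentinels" are named checkpoints that record
# interpretable intermediate state as a decision pipeline runs.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class SentinelTrace:
    """Collects (sentinel_name, observed_value) pairs for later inspection."""
    records: list = field(default_factory=list)

    def log(self, name: str, value) -> None:
        self.records.append((name, value))

def make_agent(sentinels: set[str]) -> Callable[[str, SentinelTrace], str]:
    """Build an agent whose pipeline logs only at the chosen sentinel points.

    Placing a sentinel forces the pipeline to expose a human-meaningful
    value at that point -- the "partitioning" restriction described above.
    """
    def decide(text: str, trace: SentinelTrace) -> str:
        tokens = text.lower().split()
        if "tokens" in sentinels:
            trace.log("tokens", tokens)
        # Toy scoring rule: count positive words minus negative words.
        score = sum(1 for t in tokens if t in {"good", "great"}) \
              - sum(1 for t in tokens if t in {"bad", "awful"})
        if "score" in sentinels:
            trace.log("score", score)
        decision = "approve" if score >= 0 else "reject"
        if "decision" in sentinels:
            trace.log("decision", decision)
        return decision
    return decide

# Two variants with sentinels at different locations, run in parallel
# (here, sequentially for simplicity) on the same input for comparison.
agent_a = make_agent({"tokens", "decision"})
agent_b = make_agent({"score", "decision"})
for name, agent in [("A", agent_a), ("B", agent_b)]:
    trace = SentinelTrace()
    result = agent("great product but awful support", trace)
    print(name, result, trace.records)
```

Both variants reach the same decision, but their traces expose different intermediate views of "why" – which is exactly the kind of cross-variant comparison described above.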