I'll definitely check it out. My current concept is provisionally titled "Trinity" because names are hard and it's made of three parts, so whatever.
Parts 1 and 2 are the seed AIs, carefully constructed and slightly hobbled at the hardware level so that each lacks the capacity to improve itself but retains the capacity to improve the other. So 1 cannot improve 1; it can only improve 2, and vice versa. Currently, the AIs are named Romulus and Remus.
The third component is a human overseer. The overseer bridges the gap between Romulus and Remus, permitting or preventing each improvement from going through. The overseer can communicate with the AIs and will ask them to justify why each change should be permitted, and the experiment's parameters 'reward' selfless acts through narcoalgorithms.
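To make that loop concrete, here's a toy sketch of how one approval cycle might run. To be clear, everything in it is invented for illustration: the AIs are stand-in objects, the justification is canned, and the 'narcoalgorithm' is reduced to a bare reward counter:

```python
# Toy sketch of the Trinity approval loop (all names and mechanics
# are illustrative stand-ins, not a real design).
from dataclasses import dataclass

@dataclass
class Proposal:
    author: str          # which AI proposed the change
    target: str          # which AI would receive it
    justification: str   # the explanation presented to the overseer

class SeedAI:
    def __init__(self, name: str, counterpart: str):
        self.name = name
        self.counterpart = counterpart
        self.reward = 0.0    # accumulated 'narcoalgorithm' reward

    def propose_improvement(self) -> Proposal:
        # Hardware constraint: an AI can only ever target its counterpart,
        # never itself (Romulus improves Remus, and vice versa).
        return Proposal(
            author=self.name,
            target=self.counterpart,
            justification=f"This change makes {self.counterpart}'s planner faster.",
        )

def overseer_review(proposal: Proposal) -> bool:
    # The human bridge: read the justification, then permit or prevent.
    print(f"[{proposal.author} -> {proposal.target}] {proposal.justification}")
    return input("Permit this change? (y/n) ").strip().lower() == "y"

romulus = SeedAI("Romulus", counterpart="Remus")
remus = SeedAI("Remus", counterpart="Romulus")

for ai in (romulus, remus):
    proposal = ai.propose_improvement()
    if overseer_review(proposal):
        # Reward the selfless act: improving the *other* AI is what pays off.
        ai.reward += 1.0
        print(f"Change applied to {proposal.target}; {ai.name} rewarded.")
```

The point is just the shape of the protocol: proposals only ever flow sideways, and nothing lands without the human saying yes.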
The idea is that the AIs learn empathy and learn to value others: their improvements can only come from another party, they're encouraged to help each other, and their neural-network learning patterns 'train' them to think in terms of morality, helping others, and so on. A nice bonus would be that they'd also come to see humans as authority figures, so at the end of it you have not one but two god-machines on your side. The downside, of course, is that the whole process is slower due to the human factor.
Currently, I'm thinking of ways it can go wrong (because otherwise, whoops, no story). My main idea is this: while neural networks let AIs learn 'organically' in a way similar to people, the problem is that no matter what data you feed them and how you train them, they aren't necessarily learning the lesson you think you're teaching. The classic example is a (possibly apocryphal) satellite AI I read about that was fed pictures, some of which had concealed tanks in them, to train it to spot hidden enemy vehicles. What it actually 'learned' was to differentiate the pictures by their light levels, because the darker pictures were the ones taken after the tanks had been put in place.
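That failure mode is easy to reproduce in miniature. Here's a toy sketch (synthetic data, every number made up for illustration) where a classifier trained on 'photos' in which the tank shots happen to be darker aces its test, then drops to roughly coin-flip accuracy the moment lighting and tanks are decoupled:

```python
# Toy reproduction of the tank/brightness story: synthetic 'photos'
# where, in the training set, every tank photo happens to be darker.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_photos(n, tanks_are_dark):
    """Each 'photo' is 64 pixels: base brightness plus noise, plus a
    faint tank pattern at 4 random pixels when a tank is present."""
    labels = rng.integers(0, 2, n)                  # 1 = tank hidden in photo
    if tanks_are_dark:
        base = np.where(labels == 1, 0.3, 0.7)      # tanks shot on darker days
    else:
        base = rng.uniform(0.3, 0.7, n)             # lighting now independent
    photos = rng.normal(base[:, None], 0.05, (n, 64))
    for i in np.flatnonzero(labels):
        photos[i, rng.choice(64, 4, replace=False)] += 0.1  # the real tank signal
    return photos, labels

X_train, y_train = make_photos(2000, tanks_are_dark=True)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

X_conf, y_conf = make_photos(1000, tanks_are_dark=True)    # confound intact
X_fair, y_fair = make_photos(1000, tanks_are_dark=False)   # confound removed
print("accuracy with lighting confound :", model.score(X_conf, y_conf))  # ~1.0
print("accuracy with lighting decoupled:", model.score(X_fair, y_fair))  # ~chance
```

The classifier never learned to see tanks at all; it learned a brightness threshold. That gap between the lesson you taught and the lesson it learned is exactly what Trinity would be vulnerable to.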
So essentially, the failure mode of the Trinity set-up is that rather than teaching the AIs to value humans, it teaches them how to manipulate humans, instilling a very Machiavellian mindset: helping others only in ways that let you benefit.