For clarity: the bias is in the training data, not the algorithm per se. The training data reflects the culture that produced it. One might try to counter the cultural bias artificially in the algorithm’s implementation, but that is unlikely to work without unforeseen consequences. To fix the bias in the data, one has to fix the originating culture. And that takes generations.
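
To make the "countering in the implementation" point concrete, here is a minimal, hypothetical sketch of one common approach: reweighting training examples so an over-represented group no longer dominates the loss. The data, the group labels, and the attribute names are all invented for illustration; the point is only that this kind of correction lives in the training code, while the underlying data (and the culture behind it) stays skewed.

```python
# Hypothetical illustration: countering skew in the training data by
# reweighting examples, rather than by changing the data itself.
from collections import Counter

# Toy training set: (features, label, group) tuples. The "group" column
# stands in for whatever attribute the originating culture has skewed.
training_data = [
    ({"years_exp": 5}, 1, "group_a"),
    ({"years_exp": 3}, 1, "group_a"),
    ({"years_exp": 6}, 0, "group_a"),
    ({"years_exp": 4}, 1, "group_a"),
    ({"years_exp": 5}, 0, "group_b"),
    ({"years_exp": 7}, 1, "group_b"),
]

# Inverse-frequency weights: under-represented groups count for more, so
# the model is not dominated by the majority group's patterns.
group_counts = Counter(group for _, _, group in training_data)
n = len(training_data)
weights = [
    n / (len(group_counts) * group_counts[group])
    for _, _, group in training_data
]

for (features, label, group), w in zip(training_data, weights):
    print(f"{group}: label={label}, weight={w:.2f}")

# A reweighted training loss would then be sum(w_i * loss_i): the data is
# untouched, and the "fix" exists only inside this training script -- which
# is exactly why it can misfire in ways the data never warned you about.
```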