Scientists increasingly rely on models trained with machine learning to provide solutions to complex problems. But how do we know whether those solutions are trustworthy when the complex algorithms behind them cannot easily be interrogated or explain their decisions to humans?
Original story: Using 'counterfactuals' to verify predictions of drug safety
Source: Phys.org