Improving the fairness and reliability of AI solutions
AI is at the heart of technological innovation. However, it is vital that we build solutions that are fair, trustworthy, transparent, and less harmful. This affects not only our society, but also the credibility of the organizations that build or use AI. In this session, we will cover best practices for debugging models through error analysis, fairness assessment, model behavior explainability, and counterfactual/what-if analysis. In addition, we will illustrate how the Azure Machine Learning service simplifies how data scientists and developers improve AI models through its easy-to-use Responsible AI (RAI) dashboard, which is built on leading open-source tools such as Fairlearn, DiCE, InterpretML, and EconML.
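To give a flavor of the fairness assessment step, the sketch below computes a demographic parity difference, the largest gap in positive-prediction (selection) rates between sensitive groups, which is one of the disparity metrics Fairlearn surfaces. The predictions and group labels here are hypothetical, and this pure-Python version is only a simplified illustration of the metric, not Fairlearn's implementation.

```python
# Simplified illustration of demographic parity difference:
# the max gap in selection rates across sensitive groups.
# All data below is hypothetical.

def selection_rate(preds):
    """Fraction of positive (1) predictions."""
    return sum(preds) / len(preds)

def demographic_parity_difference(preds, groups):
    """Largest selection-rate gap between any two groups."""
    by_group = {}
    for p, g in zip(preds, groups):
        by_group.setdefault(g, []).append(p)
    rates = [selection_rate(ps) for ps in by_group.values()]
    return max(rates) - min(rates)

preds  = [1, 0, 1, 1, 0, 1, 0, 0]                  # model decisions
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]  # sensitive feature
print(demographic_parity_difference(preds, groups))  # 0.75 - 0.25 = 0.5
```

A value near 0 suggests the model selects members of each group at similar rates; larger values flag a disparity worth investigating in the dashboard's fairness view.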