Microsoft launches new tools for building fairer machine learning models

At its Build developer conference, Microsoft today put a strong emphasis on machine learning. But in addition to plenty of new tools and features, the company also highlighted its work on building more responsible and fairer AI systems — both in the Azure cloud and Microsoft’s open-source toolkits.

These include a new toolkit for differential privacy, a system for ensuring that models work well across different groups of people, and tools that let businesses make the most of their data while still meeting strict regulatory requirements.

As developers are increasingly tasked with building AI models, they regularly have to ask themselves whether their systems are “easy to explain” and whether they “comply with non-discrimination and privacy regulations,” Microsoft notes in today’s announcement. To do that, they need tools that help them better interpret their models’ results. One of those is InterpretML, which Microsoft launched a while ago. Another is the Fairlearn toolkit, which can be used to assess the fairness of ML models; it is currently available as an open-source tool and will be built into Azure Machine Learning next month.
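
To give a sense of what such a fairness check looks like in practice, here is a minimal sketch using Fairlearn’s MetricFrame API (assuming a recent Fairlearn release); the toy labels, predictions and group assignments are invented purely for illustration.

```python
# Minimal sketch: comparing a model's accuracy across demographic groups
# with Fairlearn. The data below is a made-up toy example.
from fairlearn.metrics import MetricFrame
from sklearn.metrics import accuracy_score

y_true = [1, 0, 1, 1, 0, 1, 0, 0]            # ground-truth labels
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]            # model predictions
groups = ["A", "A", "A", "B", "B", "B", "B", "A"]  # sensitive group per example

mf = MetricFrame(
    metrics=accuracy_score,
    y_true=y_true,
    y_pred=y_pred,
    sensitive_features=groups,
)

print(mf.overall)       # accuracy over the whole dataset
print(mf.by_group)      # accuracy broken out per group
print(mf.difference())  # largest accuracy gap between groups
```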

As for differential privacy, which makes it possible to derive insights from sensitive datasets while still protecting individuals’ private information, Microsoft today announced WhiteNoise, a new open-source toolkit that’s available both on GitHub and through Azure Machine Learning. WhiteNoise is the result of a partnership between Microsoft and Harvard’s Institute for Quantitative Social Science.
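
Differential privacy generally works by adding carefully calibrated noise to aggregate statistics so that no single individual’s record can be inferred from the result. The snippet below is a generic sketch of the Laplace mechanism that toolkits like WhiteNoise build on, not the WhiteNoise API itself; the salary figures and function name are made up for illustration.

```python
# Generic Laplace-mechanism sketch (illustrative only, not the WhiteNoise API).
import numpy as np

def noisy_count(records, epsilon, sensitivity=1.0):
    """Return a count with Laplace noise scaled to sensitivity/epsilon."""
    true_count = len(records)
    noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon)
    return true_count + noise

salaries = [52_000, 61_500, 48_250, 75_000, 58_900]  # toy private dataset
print(noisy_count(salaries, epsilon=0.5))  # privacy-preserving approximate count
```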
