Developers
May 20, 2020

Tackling Responsible AI With Azure Machine Learning

Microsoft’s Azure Machine Learning provides the tools needed for companies to develop responsible, trustworthy artificial intelligence.
Source: Pixabay

Artificial intelligence (AI) has captured the imagination of generations of scientists, dreamers, writers and philosophers, not to mention every bright-eyed boy and girl who grew up watching TV shows with androids, robots and sentient computers.

As the technology becomes an everyday reality, however, the magnitude of the challenges involved in creating a true AI becomes more apparent with each attempt. While AI excels at logic-based tasks, it still struggles with basic common sense, not to mention concepts like empathy, compassion, concern and fairness. These may seem like abstract concepts, but they are vital to creating responsible AI.

Microsoft is attempting to help companies build responsible AI with its Azure Machine Learning platform.

Microsoft’s History With AI

Few companies understand the importance of developing responsible AI as well as Microsoft does.

In 2016, the company unveiled “Tay,” an AI chatbot on Twitter. The goal was to better understand how an AI could develop conversational understanding. While the experiment started off innocently enough, within 24 hours Microsoft had to pull the plug.

Unfortunately, Tay quickly turned into a misogynistic, racist chatbot, spewing all sorts of inappropriate things at users who engaged it. While it’s true that some of the worst things it said were from people exploiting the “repeat after me” command it was programmed with, many of the things it said all on its own were still highly offensive and inappropriate.

After shutting it down, Microsoft was left apologizing and explaining that AI development is an iterative process that requires step-by-step learning. Even so, the experiment shone a light on the issues with developing a responsible AI.

The “Fairness” Challenge

While many AI efforts don’t end as badly as Microsoft’s Tay, there are still a myriad of problems that must be addressed for an AI to be trusted by the people who will depend on it. One of the biggest of these challenges is teaching an AI to show fairness—and trusting it to do so.

In human-to-human interaction, a person can usually relate to their fellow human on at least some level. Human beings have an innate sense of fairness that others can relate to and depend on.

Absent that “human connection,” however, AI can seem like a “black box” that is barely understood. To make matters worse, it can be very difficult to properly train an AI to be fair, absent the inherent moral compass that most humans have.

AI relies on machine learning (ML), a subset of AI focused on algorithms that improve over time and with experience. Because an algorithm is constantly changing, growing and adapting based on the data it encounters, there is always a risk that an AI will learn unfair behavior from that data.

Much like Tay became a misogynistic racist as a result of what it encountered, biased data can cause bias and unfairness in an AI. This, in turn, could lead it to unfairly discriminate against a person or group of people when it comes to hiring, lending, candidate selection and more.

To help deal with this challenge, Azure Machine Learning includes tools that help organizations better understand their data and models. These tools help identify potential bias in datasets before they are used for training, helping ensure fairness is built in from the ground up rather than grafted on later.

Azure Machine Learning also supports a variety of open-source tools, including those that help assess an AI’s level of fairness and mitigate any problems that may arise.
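One such tool is the open-source Fairlearn toolkit, which integrates with Azure Machine Learning. The snippet below is a minimal sketch, not an official Azure Machine Learning workflow, and uses a small hypothetical hiring dataset; it shows how a trained model's accuracy and selection rate can be broken down by a sensitive attribute to surface disparities.

```python
# Minimal sketch: assessing group fairness with the open-source Fairlearn
# toolkit. The dataset and column names are hypothetical.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from fairlearn.metrics import MetricFrame, selection_rate

# Hypothetical hiring data: two features, a binary "hired" label,
# and a sensitive attribute ("gender") the model should treat fairly.
data = pd.DataFrame({
    "years_experience": [1, 3, 5, 2, 7, 4, 6, 8],
    "test_score":       [60, 72, 88, 65, 90, 70, 85, 95],
    "gender":           ["F", "M", "F", "M", "F", "M", "F", "M"],
    "hired":            [0, 1, 1, 0, 1, 0, 1, 1],
})

X = data[["years_experience", "test_score"]]
y = data["hired"]

model = LogisticRegression().fit(X, y)
predictions = model.predict(X)

# MetricFrame breaks each metric down by group so disparities are visible.
frame = MetricFrame(
    metrics={"accuracy": accuracy_score, "selection_rate": selection_rate},
    y_true=y,
    y_pred=predictions,
    sensitive_features=data["gender"],
)
print(frame.by_group)      # per-group accuracy and selection rate
print(frame.difference())  # largest gap between groups for each metric
```

A large gap in selection rate between groups is a signal to revisit the data or apply one of Fairlearn's mitigation algorithms before the model is deployed.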

The Privacy Challenge

Another significant challenge of responsible AI development is protecting the data the AI has access to. Because AI excels at data management and analysis, it often has access to a gargantuan amount of sensitive and private data. As a result, data privacy is a feature that must be built into an AI from the ground up.

To help in this regard, Azure Machine Learning features differential privacy. Differential privacy adds “noise,” or randomness, to the data an AI stores and analyzes. This helps ensure that no individual can be identified from the results of an analysis, thereby protecting personal privacy.
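As an illustration of the underlying idea, the sketch below uses plain NumPy rather than Azure Machine Learning's own differential privacy tooling: it adds calibrated Laplace noise to an aggregate statistic so the released result reveals almost nothing about any single record. The function name, data and parameters are hypothetical.

```python
# Minimal, illustrative sketch of the Laplace mechanism, a basic building
# block of differential privacy. Not Azure Machine Learning's API.
import numpy as np

def private_mean(values, lower, upper, epsilon):
    """Release a differentially private estimate of the mean of `values`.

    `lower` and `upper` clamp each value so one record's influence on the
    mean is bounded; `epsilon` is the privacy budget (smaller = more noise).
    """
    values = np.clip(values, lower, upper)
    sensitivity = (upper - lower) / len(values)  # max change from one record
    noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon)
    return values.mean() + noise

# Hypothetical salary data; the released mean is "noised" for privacy.
salaries = np.array([52_000, 61_000, 58_500, 75_000, 49_000], dtype=float)
print(private_mean(salaries, lower=30_000, upper=100_000, epsilon=1.0))
```

The smaller the privacy budget epsilon, the more noise is added and the stronger the privacy guarantee, at the cost of less accurate results.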

Shedding Light on AI

As mentioned, for many companies and organizations, AI has the potential to become a black box: a scary, little-understood technology that neither reaches its full potential nor is ever truly trusted by the very people relying on it.

Azure Machine Learning aims to address this by shedding light on the entire process, giving organizations a way to plan, develop, test and audit AI solutions. The platform also provides the governance capabilities needed to meet security and regulatory requirements.

The Takeaway

AI is here to stay. While it has not yet lived up to the visions of science fiction, that day is fast approaching, and with it come both the promise and the challenges of a transformative technology.

An important part of that transformation is making sure that AI and ML systems are developed responsibly, with fairness and security in mind. Microsoft’s Azure Machine Learning is a valuable tool that can help organizations of all sizes do just that.

Tags: Microsoft, Azure Machine Learning, Artificial Intelligence
Impactio Team
Impactio is America's leading platform of academic impact analytics and reputation management designed for scientists and researchers. Impactio catalyzes global scientific and technological advancement by developing various innovative cloud-based software and services to make scientific communication more effective, ultimately helping scientists and researchers be more productive and successful.
