FAIR AND IMPARTIAL

As we deploy machines that can perform analysis and even make decisions on our behalf, we require confidence that the machine intelligence is fair and impartial in its outputs. The challenge, of course, is that AI cannot “think” or “reason.” The onus therefore falls on human stakeholders to probe how an AI functions relative to expectations for fairness and concerns over bias.

There are many scenarios in which an AI model could lead to unfair and biased outcomes. These might take the form of automated decisions, or they could influence human decision making in ways that cross expected lines of impartiality. For AI to be trustworthy, every stakeholder, from the data scientist to the consumer, needs confidence that the model outputs are equitable.

The core issue is addressing bias, a term that carries several meanings. In a statistical sense, bias can refer to the difference between a model’s predicted values and the actual values in the training data. It might also mean that the data reflects bias embedded in society, or it might result from datasets that are inaccurate reflections of the real world.

Whatever the root cause, enterprises need to understand the components that contribute to unfair AI and how fairness is assessed and addressed, so that they can mobilize the key stakeholders and decision makers who drive AI toward fair application. Indeed, achieving fairness is a shared responsibility that spans the AI lifecycle.
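As a purely illustrative sketch (not drawn from the book), one common way fairness is assessed in practice is to compare a model’s positive-outcome rates across groups, sometimes called the demographic parity difference. The Python below assumes hypothetical binary predictions and a binary group label; all names and data are placeholders.

```python
# Minimal, illustrative sketch: compare positive-prediction rates across two
# groups (demographic parity difference). Inputs are hypothetical placeholders.
import numpy as np

def demographic_parity_difference(y_pred, group):
    """Return the gap in positive-prediction rates between group 0 and group 1."""
    y_pred = np.asarray(y_pred)
    group = np.asarray(group)
    rate_a = y_pred[group == 0].mean()  # share of positive outcomes for group 0
    rate_b = y_pred[group == 1].mean()  # share of positive outcomes for group 1
    return abs(rate_a - rate_b)

# Hypothetical example: binary approval predictions for members of two groups.
predictions = [1, 0, 1, 1, 0, 1, 0, 0]
groups      = [0, 0, 0, 0, 1, 1, 1, 1]
print(demographic_parity_difference(predictions, groups))  # 0.75 - 0.25 = 0.5
```

In this toy example the gap is 0.5; a review team would weigh such a figure against its own fairness expectations and context before deciding how to respond.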

Is your organization’s AI fair and impartial?

DIMENSIONS OF TRUST IN AI

Trustworthy AI results from how people, processes and technologies function together across multiple dimensions of trust. Not every dimension is pertinent for every organization or cognitive tool. Rather, the dimensions of trust are lenses for interrogating AI design, function, and outcomes. With regular activities, decisions and documentation across the AI lifecycle, weighing and addressing the dimensions of trust is what permits effective AI governance and unleashes AI’s greatest potential value. These dimensions are explored in depth in Trustworthy AI, by Beena Ammanath.

  • Fair and Impartial
  • Robust and Reliable
  • AI Privacy
  • Safe and Secure
  • Transparent and Explainable AI
  • Accountable and Responsible

AUTHOR BEENA AMMANATH

Beena Ammanath is a global thought leader in AI ethics and an award-winning senior technology executive with extensive global experience in AI and digital transformation. Her work has spanned leadership roles in e-commerce, finance, marketing, telecom, retail, software products, service, and industrial domains. She is the Executive Director of the Global Deloitte AI Institute and leads Trustworthy AI and Ethical Technology at Deloitte. Prior to joining Deloitte, Beena served as the CTO for AI at Hewlett Packard Enterprise, where she created a new practice area focused on AI and emerging technologies. Before this, she was the head of Data Science and Innovation at General Electric, working across all GE businesses, including aviation, transportation, healthcare, power, energy and renewables, and oil and gas…

THE NEED FOR TRUSTWORTHY AI

There has arguably never been a more exciting time in AI. Alongside the arrival of so much promise and potential, however, more attention is due to the ethics and trustworthiness of this powerful technology. The question is not just what can be done with AI but how it should be done – or whether it should be done at all. Now that AI has been developed to this level of maturity, we must grapple with some of the more complex ethical considerations it raises.

What does it mean for AI use to be ethical? How do we know if we can trust the AI tools we use?

The trajectory of AI can be conceived along three streams: research, application, and trust and ethics. Research concerns data science…

ESSENTIAL READING FOR EXECUTIVES

Businesses today are rapidly scaling AI to gain powerful new capabilities and to improve how they operate. Humans and machines are increasingly working together. And this trend exposes businesses to heightened risk of AI behaving in ways that are unethical. Just like their human counterparts in the workforce, AI systems are expected to adhere to social norms and ethics and to make fair decisions in ways that are consistent, transparent, explainable, and unbiased. Of course, figuring out what is ethical and socially acceptable isn’t always easy – even for human workers.

Trustworthy AI offers readers a pragmatic and direct approach to ethics and trust in artificial intelligence. The book presents a straightforward and structured way to think about AI ethics and offers practical guidelines for organizations developing or using AI solutions.