
How Deloitte navigates ethics in the AI-driven workforce: Involve everyone

In our exclusive interview, Deloitte's Beena Ammanath stresses the need for an ethical framework to handle the complexities of this emerging technology in workforce development. C-suite and boardroom action is essential.
Written by David Gewirtz, Senior Contributing Editor

Beena Ammanath is the head of Deloitte's Technology Trust Ethics practice.


In 1845, William Welch Deloitte -- who would go on to audit the Great Western Railway -- founded the company that still bears his name. Deloitte Touche Tohmatsu Limited has grown from a one-man shop to a "Big Four" accounting firm employing hundreds of thousands of people today. By most measures, Deloitte is now the world's largest accounting firm.

As head of Deloitte's Technology Trust Ethics practice, Beena Ammanath deals daily with challenges that William Welch Deloitte could not have imagined 178 years ago. In addition to guiding Deloitte's perspective on AI ethics both internally and externally, Ammanath operationalizes AI ethics from an enterprise perspective and leads the Global Deloitte AI Institute. Her team recently completed its second State of Ethics and Trust in Technology study, which looks at ethics in technology across enterprises.

Also: Do companies have ethical guidelines for AI use? 56% of professionals are unsure, survey says

One big trend we've been following for some time -- and one that has accelerated in this generative AI era -- is the ever-growing role of chief ethics officer. Organizations adopting emerging technology face new challenges arising from the complexity of the software, the pace of change, and the need to respect the diversity of their employees, customers, and constituents.

But, as Ammanath explains below, responsibility for ethical performance now involves the entire executive suite, including boards of directors. While brand growth is important, the need to protect brand reputation and marketplace trust is driving the added attention that ethics is getting at the top levels of business.

I had the opportunity to interview Ammanath, and our fascinating and wide-ranging conversation touched on the company's efforts to define ethical AI in the context of workforce development, as well as how the study's results have informed Deloitte's understanding of the AI ethics issues and practices of companies around the world.

Let's dig in.

Also: Mind the trust gap: Data concerns prompt customer caution over generative AI

ZDNET: How does Deloitte define ethical AI in the context of workforce development?

Beena Ammanath: With advanced technologies like AI, there is immense potential for positive impact -- but there is simultaneously a risk of unintended outcomes.

Organizations should be confident the tools they use behave ethically, can be trusted to protect the privacy, safety, and equitable treatment of their users, and are aligned with their purpose and values. This is where AI ethics guidelines come into play, helping organizations responsibly leverage this technology while harnessing its benefits.

Also: Does your business need a chief AI officer?

Introducing these guidelines to the workforce is a critical step when adopting AI into an organization. We saw from our research that leaders are using a variety of strategies to educate their workforces on AI ethics and prepare them to use AI tools, from training workers on trustworthy best practices to upskilling and hiring for specific AI ethics roles.

ZDNET: What strategies do you recommend for incorporating ethical considerations into AI development?

BA: The approach to developing an ethical framework for AI development and application will be unique to each organization. Each organization will need to determine its use cases for AI, as well as the specific guardrails, policies, and practices needed to make sure it achieves its desired outcomes while also safeguarding trust and privacy.

Establishing these ethical guidelines -- and understanding the risks of operating without them -- can be very complex. The process requires knowledge and expertise across a wide range of disciplines. In our study, we saw a trend toward hiring for specialized roles, such as AI ethics researcher (53%), compliance specialist (53%), and technology policy analyst (51%).

Also: The ethics of generative AI: How we can harness this powerful technology

On a broader level, publishing clear ethics policies and guidelines and providing workshops and training on AI ethics were ranked in our survey as some of the most effective ways to communicate AI ethics to the workforce, and thereby ensure that AI projects are conducted with ethics in mind.

ZDNET: What role does leadership play in fostering an ethical AI culture?

BA: Leadership plays a crucial role in underscoring the importance of AI ethics, determining the resources and experience needed to establish the ethics policies for an organization, and ensuring that these principles are rolled out.

This was one reason we explored the topic of AI ethics from the C-suite perspective. We are seeing that leaders are taking a proactive approach towards preparing their workforce for this technology.

Chief ethics officers and trust officers have become a mainstay for organizations looking to ethically adopt emerging technology, but it is not their responsibility alone. The dialogue often involves the entire executive level, including the board of directors. Our study also revealed that executives see a strong connection between AI ethics and important business tenets like revenue growth, brand reputation, and marketplace trust.

It is also important to note that the effort to introduce AI ethics and policies is happening across organizations, at all levels – from the C-suite and the boardroom to specialists and individual employees.

ZDNET: How do you ensure that ethical AI principles are applied consistently across global teams?

BA: This is an example of why establishing clear, consistent ethics guidelines is critical – and likely why it was ranked as the most effective way to communicate AI ethics to employees, according to our survey.

We are also seeing conversations around AI ethics happening among leaders at the board level, which helps ensure consistent, organization-wide policies and actions. According to the study, over half of respondents said boards of directors (52%) and chief ethics officers (52%) are always involved in creating policies and guidelines for the ethical use of AI.

ZDNET: How do you see the future of work changing with the adoption of ethical AI?

BA: The increasing emphasis on AI ethics is a good sign that the integration of AI into the world of work is moving in a positive direction. The widespread focus on establishing and enforcing ethical principles means workforces will be able to harness the technology's capabilities more effectively and efficiently.

Also: Everyone wants responsible AI, but few people are doing anything about it

For workers, this also means many will be encouraged and empowered to increase their knowledge of the technology. Our survey showed approximately 45% of organizations are actively training and upskilling their workforce in AI.

ZDNET: What are the unique challenges of upskilling employees for ethical AI?

BA: Upskilling is an important part of AI adoption, but it can look very different for workers with different skills and experience. It involves understanding employees' current capabilities and skills, and seeing how those skills could be enhanced or built upon with AI.

Understanding how workers' unique abilities interact with and can be empowered by the technology can be challenging, especially for large, diverse organizations.

ZDNET: What measures can be taken to enhance transparency in AI decision-making processes?

BA: Communication is key here -- the increased focus on AI ethics has sparked conversations at the leadership level about not only establishing ethical principles, but also effectively communicating them. While a lot of the decision-making happens at the executive or board level, it is important that these decisions involve specialists across various disciplines and that they are effectively communicated across the organization.

ZDNET: What steps should companies take to ensure their AI applications are responsible and trustworthy?

BA: When developing or initiating new AI applications, it is important to have a framework to direct the responsible design and operation of these applications. Our survey revealed that many executives are aware of the need for these guidelines, and they are prioritizing creating and communicating AI ethics policies ahead of activating new AI pilot projects.

Also: 4 ways AI is contributing to bias in the workplace

ZDNET: How does Deloitte assess the ethical implications of new AI technologies? How does Deloitte address the ethical concerns surrounding AI and privacy? What ethical guidelines does Deloitte follow when developing AI solutions?

BA: Deloitte's Trustworthy AI framework outlines seven dimensions through which to evaluate AI behavior:

  1. Transparency and explainability
  2. Fairness and impartiality
  3. Robustness and reliability
  4. Safety and security
  5. Responsibility
  6. Accountability
  7. Respect for privacy

Deloitte's Technology Trust Ethics practice also produced a framework to guide responsible decision-making in the design, operation, and governance of all emerging technologies.

In addition to developing guiding frameworks, Deloitte's Technology Trust Ethics team released a first-of-its-kind foundational training in 2023 to help employees develop an ethical tech mindset, recognize practical actions anyone can take to spot and mitigate ethical risks in technology, and understand the growing need at Deloitte to develop and handle technology responsibly and in a trustworthy manner.

Also: AI is supercharging collaboration between developers and business users

ZDNET: How can AI ethics be integrated into the core values of a company? What is the significance of ethical AI in maintaining consumer trust?

BA: The use of AI tools across organizations is already near-ubiquitous – the majority (84%) of C-level executives surveyed report their organizations are currently using AI. An additional 12% of executives indicate they will explore AI use cases over the next year.

This means organizations need to focus on making sure that the use of these tools -- and any emerging technologies -- aligns with their core values, ethics, and expectations.

Having a clear understanding of the technology's intended use within the organization, and of the desired outcomes, is paramount to establishing the policies needed to ensure those results.

There is a strong tie between ethics policies around AI and brand trust and reliability. In our survey, 47% of executives believe that ethical guidelines for emerging technologies like generative AI are crucial for brand reputation and marketplace trust.

Also: AI is transforming organizations everywhere. How these 6 companies are leading the way

ZDNET: How can organizations ensure their AI systems are inclusive and equitable?

BA: Creating and utilizing AI tools that are equitable and inclusive is an ongoing process that has to evolve along with the technology. This means involving researchers and specialists who can develop processes and systems to address issues like bias.

It also requires the involvement of diverse perspectives in AI development, to make sure the technology is designed and operated inclusively. Inclusivity and equity should be folded into an organization's framework for AI ethics as a key facet to address when developing or leveraging AI.

Final thoughts

ZDNET's editors and I would like to give a huge shoutout to Beena Ammanath for taking the time to engage in this in-depth interview. There's a lot of food for thought here. Thank you, Beena!

Also: The demand for hybrid work is only growing, according to a new Deloitte report

What do you think? Did her recommendations give you any ideas about how to address bias and diversity challenges in your organization? Let us know in the comments below.


You can follow my day-to-day project updates on social media. Be sure to subscribe to my weekly update newsletter on Substack, and follow me on Twitter at @DavidGewirtz, on Facebook at Facebook.com/DavidGewirtz, on Instagram at Instagram.com/DavidGewirtz, and on YouTube at YouTube.com/DavidGewirtzTV.
