So, you’re using AI ‘responsibly’, but, asks Cherian Koshy, are you using it ethically? The two are not the same thing.

I vividly remember watching the space shuttle Challenger break apart just moments after launch. Even as a kid, I knew that this was an important moment, especially when President Reagan addressed the nation. In the years since, as I’ve built my career in the nonprofit sector, I’ve often found myself thinking about the hard lessons of history and that tragedy – the dangers of unchecked ambition, the importance of critical communication, and the need to balance innovation with our ethical obligations as a sector. 

I use that word – ‘ethical’ – intentionally and with priority. Far too often, we dispense with ethics because it seems too obscure or fluffy. However, as Dr Reid Blackman warns in an article in the Harvard Business Review (HBR):

“This behavior – of replacing the word ‘ethics’ with some other, less precise term – is widespread. Ethical challenges don’t disappear via semantic legerdemain. We need to name our problems accurately if we are to address them effectively.”

Scroll through LinkedIn and you’ll find countless articles touting the ‘responsible’ use of AI, each claiming to guide us toward better practices. At conferences, panels buzz with emerging experts advocating for responsibility in AI deployment. Yet, amid all these discussions, a critical topic often goes missing – the ‘ethical’ use of AI. The AFP Code of Ethical Standards and the sector’s global commitment to ethical practice seem disregarded in favour of more superficial, individual notions of responsibility. This focus leaves the deeper, more complex ethical considerations largely unspoken.

In his HBR article, Dr Blackman identifies three problems that arise when organisations focus on the ‘responsible’ use of AI but forget about its ethics.

First, it permits us, as nonprofit professionals, to focus on the things we are already experts in, such as fundraising or programme delivery, and deprioritise talk of ethics, an area where few of us consider ourselves experts.

Second, it enables us to remain on the surface of the intended effects of our tech stack and its uses, rather than dig further into unintended side effects of our actions or outputs. 

Finally, the vagueness inherent in responsibility allows us to commit to vapid principles, like ‘transparency’ or ‘privacy’, especially when there is no assessment or accountability when the technology fails. 

For the nonprofit sector – which, for better or worse, is held to a different and higher standard – compliance and regulation need to exceed the table stakes of ‘do no harm’, and boards should not assume that the absence of a regulatory violation means that everything is going well. The speed of AI deployment poses unique risks to nonprofits.

It took 68 years for airplanes to reach 50 million customers; ChatGPT reached 100 million users in less than 68 days. Artificial intelligence is certainly transforming the work we do, but it is doing so at a pace we have never seen before. Eric Schmidt, former Google CEO, said on This Week with George Stephanopoulos that, in his 50-year career, “I’ve never seen something happen as fast as this”. Historian Dr Margaret O’Mara (also writing in HBR) agrees, warning that a “deeply ingrained need for speed and growth may hinder efforts to put adequate guardrails in place”.

She continues: “However, in my view, the greatest danger isn’t the technology – it’s the ethos and business imperatives that have for so long defined the people building it. In the world of tech, speed is nothing new. But generative AI systems are rocketing ahead so quickly and so powerfully that even the most seasoned observers are taken aback. Only time will tell how much we’ll break if we move this fast.”

Just as the Challenger team faced immense pressure to launch despite clear warnings that the O-rings could fail at those temperatures, our sector is racing to experiment with tools to ensure that no nonprofit is left behind in the AI arms race.

With most technologies, the pace of change, adoption, and mass implementation gives practitioners and politicians alike a chance to establish self-regulation through best practices and compliance regimes that prevent misuse. AI is different, and the world wide web has become the wild, wild west. Dr Dan Wadhwani – a professor of clinical entrepreneurship at the University of Southern California – confirms that “the pace at which AI operates and the lack of real-time visibility into the processes by which AI outputs are generated pose an entirely new kind of challenge”.

Navigating the rapid pace of AI: Embracing ethical literacy and language

As the nonprofit sector strives to build ethical literacy and scale professional development, the rapid pace of AI technology poses significant challenges. Our focus on ‘responsible’ AI often overshadows the critical need for ethical considerations and the language of ethics. 

Despite some laws and regulations currently exempting nonprofit organisations, it seems only a matter of time before our sector is fully brought within regulatory regimes. Embracing and integrating the language of ethics in our AI practices is essential if we are to navigate this evolving landscape responsibly and ethically. From the EU’s recently adopted, sprawling AI Act to New York City’s bias-audit law for automated employment decision tools, the message is clear: we must approach AI not just as a technical challenge, but as a profoundly ethical one.

When any nonprofit organisation is the subject of an ethical violation, it adversely impacts generosity across the entire sector. Trust is earned in drops and lost in buckets and, uniquely for our sector, any of us can empty another’s bucket. Donors may paint all nonprofits with the same brush on the basis of ethical violations, perceived or real. The Edelman Trust Barometer indicates that nonprofits continue to fall behind business and government, and this declining trust may be amplified by AI use without governance policies.

In the months and years that followed the Challenger’s final flight, and the report of the commission set up to investigate the disaster, one powerful truth emerged: when incentives are misaligned, ethical principles and safeguards get short shrift. 

As we race to unlock the seemingly unlimited potential of this transformative technology, we must do so with our eyes wide open and our values held close. We must be willing to ask tough questions, to pause and reflect, to change course when necessary. We must prioritise transparency, accountability, and the active participation of the communities we serve.

Practically, organisations should anticipate what regulation is likely to require so that those requirements can be addressed proactively. As artificial intelligence becomes embedded in most of our day-to-day tools – CRMs, word processors, websites, and much more – it will be the responsibility of nonprofit organisations to remain compliant. To do that, I recommend they address the following five things.

AI governance policies – In the private sector, most staff admit to using AI, yet fewer than 21 per cent of companies have AI policies. A governance policy built through thoughtful discussion, stakeholder engagement, and clear accountability is the first step towards a structured approach to experimentation and implementation.

Data management and security – Transparency and accuracy have dominated the conversation so far. However, one of the primary issues that needs to be addressed, especially in the nonprofit sector, is that data may not be where we expect it to be – for example, personally identifiable information held in the wrong locations.

Training and awareness – Deep ethical literacy training for all staff, paid and unpaid, is an essential feature of any compliance programme. Internally, everyone needs to fully understand the ethical implications of the work they are doing and what failure entails. Externally, preventing malicious attacks that leverage AI is a novel concern that every nonprofit should prepare for today.

Audit preparation – While audits of artificial intelligence tools are not yet required, preserving an audit-ready trail from inputs to outputs to their uses in decision-making will benefit any organisation that adopts AI tools, required or not (see the sketch after this list).

External review – While accountability is essential, self-regulation often takes the form of internal procedures that may or may not be robust. With the adoption of AI, engaging stakeholder communities proactively can enhance donor, volunteer, and community trust in an organisation. Consider creating a technology advisory council to review and recommend tools, strategies, and evaluations of use.
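To make ‘audit-ready’ concrete, here is a minimal sketch of what preserving a trail from inputs to outputs to decisions might look like. It is illustrative only: the log_ai_interaction helper, the record fields, and the JSON-lines file are my own assumptions rather than any prescribed standard, and a real implementation should follow your organisation’s own data-retention and privacy policies.

```python
import hashlib
import json
from datetime import datetime, timezone

# Hypothetical audit log: one JSON object per line, appended for every AI interaction.
AUDIT_LOG_PATH = "ai_audit_trail.jsonl"

def log_ai_interaction(tool: str, prompt: str, output: str,
                       used_for: str, reviewed_by: str) -> dict:
    """Record a single AI interaction: what went in, what came out,
    how the output was used in a decision, and who reviewed it."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "tool": tool,  # which AI tool or model was used
        # Hash the prompt and output so raw text (and any PII it contains)
        # is not duplicated into the log itself.
        "prompt_sha256": hashlib.sha256(prompt.encode("utf-8")).hexdigest(),
        "output_sha256": hashlib.sha256(output.encode("utf-8")).hexdigest(),
        "used_for": used_for,        # the decision or task the output informed
        "reviewed_by": reviewed_by,  # the human accountable for this use
    }
    with open(AUDIT_LOG_PATH, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
    return record

# Example: logging an AI-drafted donor email before it is sent.
log_ai_interaction(
    tool="gpt-4o",
    prompt="Draft a renewal appeal for lapsed donors...",
    output="Dear friend, ...",
    used_for="donor renewal email campaign",
    reviewed_by="j.smith@example.org",
)
```

Hashing prompts and outputs rather than storing them verbatim is one way to keep the trail verifiable without turning the audit log itself into a new store of personal data.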

None of this will be easy. The pressures to move fast and break things, to chase the next shiny AI innovation, are immense. But as nonprofit leaders, we have a sacred obligation to rise above those pressures and stay true to our missions and moral compass.

I’m hopeful that we are up to the challenge. In my work with frontline staff, nonprofit partners, and the broader communities we serve, I see so much wisdom, creativity, and passion for harnessing technology to build a better world. If we can channel that energy – while staying grounded in the hard lessons of the past – and maintain a steadfast focus on ethics and ethical language, I believe we can chart a bold new course for AI in the nonprofit sector, one that truly puts people and purpose first.

  • Cherian Koshy is vice president of product strategy at Kindsight, and leader of Rogare’s Ethics of Using AI in Fundraising project.
  • This project has identified a research agenda to explore the ethical implications of using AI in fundraising. The next phase of this project will be to explore the second-order effects of the implementation of AI in fundraising practice, including knowledge loss, employment displacement, and the topic Cherian discusses in this blog – ethical versus responsible use of AI.


