• Generic concerns about the ethical use of AI can’t simply be overlaid on to fundraising, since its use in fundraising raises ethical dilemmas specific to this application
  • AI does not currently have access to sufficiently sophisticated thinking about ethics to be able to tackle ethical dilemmas in fundraising
  • Fundraisers across the profession must be upskilled in ethical and data literacy to ensure the most rigorous human oversight of the use of AI in fundraising.

Artificial intelligence (AI) offers exciting opportunities for charities and nonprofits, from automating administrative tasks to gaining insight from data. However, as a new report from the international fundraising think tank Rogare cautions, integrating AI requires navigating complex ethical considerations unique to the fundraising sector. 

The report – Artificial intelligence and fundraising ethics: A research agenda – has been put together by a multinational project team led by American fundraising consultant Cherian Koshy, a leading thinker in both the use of AI and fundraising ethics, and a member of Rogare’s Critical Fundraising Network. 

The report examines ethical issues arising both from applying AI in fundraising and from using AI to resolve fundraising dilemmas. It starts from the assumption that generic concerns and guidance about the use of AI in any context can’t simply be transferred to fundraising – because its use in fundraising will throw up unique ethical challenges.

To address these challenges, the Rogare report presents a 10-point research agenda tailored to the use of AI in fundraising. It calls for gathering stakeholder perspectives, auditing data and algorithms, developing ethical frameworks, and assessing oversight mechanisms.

Two overarching themes emerge from the project group’s deliberations.

The first is that AI does not currently have access to sufficiently sophisticated knowledge of the ethics of fundraising to be able to make ethical decisions. 

But it can be used to guide fundraisers through the process of making ethical decisions, for example by prompting them on which questions to ask, as might be the case in gift acceptance/refusal dilemmas. 

The second emergent theme is that because AI lacks sufficient knowledge of fundraising ethics, human oversight is needed to ensure any use of AI in fundraising practice is done ethically and in accordance with best practice and regulatory codes. 

Cherian Koshy says:

“Not only does this oversight require a high degree of ethical literacy on the part of human fundraisers, it also requires a high degree of data literacy.

“However, it is questionable whether both the ethics and data skills, knowledge and competencies exist to the required degree across the entirety of the fundraising workforce that will be tasked with oversight of the use of AI in fundraising.

“As AI enters and becomes widespread in fundraising practice, we must upskill the human overseers with this knowledge and these competencies. Skilled and knowledgeable human oversight of AI in fundraising is absolutely essential.”

Other issues considered in the report and addressed in the research agenda include:

  • The need to balance transparency around AI with potential negative impacts on donations. Supporters increasingly expect clarity on whether they are interacting with a bot or human. But revealing the use of AI could decrease engagement or giving. More research is required to navigate this tension.
  • Data ethics and potential biases are also examined. Relying on flawed data risks amplifying discrimination through automated decisions. Thorough analysis should scrutinise existing fundraising data sets and AI systems for embedded biases.
  • Maintaining an inclusive sector is a priority. Cost barriers could concentrate AI capabilities among larger nonprofits. The report team advocates for “shared data infrastructure and open standards” to ensure equal access for smaller organisations.  
  • The report cautions against over-reliance on AI that could cause a “loss of fundraising expertise through deskilling”.
  • Unique intellectual property issues require clarification as AI enters fundraising, such as who owns outputs like synthetic media. And determining accountability for potential harms from opaque AI systems presents challenges.
  • As charities increasingly adopt AI, maintaining public trust will require transparency, assessing workforce impacts, and developing governance aligned with supporter expectations and sector values. With careful oversight, AI presents opportunities to advance nonprofit missions. But as this report emphasises, we must proactively address the ethical dimensions.

The project team comprised:

  • Cherian Koshy (project leader) – iWave (USA)
  • Stuart Chell – Chell Perkins (UK)
  • Jess Crombie – University of the Arts London (UK)
  • Meena Das – NamasteData (Canada)
  • Scott Decksheimer – Avista Philanthropy (Canada)
  • Alice Ferris – GoalBusters (USA)
  • Lisette Gelinas – Impact and Main Inc/ST (Stephen Thomas Ltd) (Canada)
  • Ian MacQuillin – Rogare – The Fundraising Think Tank (UK)
  • Damian O’Broin – Ask Direct (Ireland).

As with all Rogare’s work, we are able to conduct this project and make it freely available to fundraisers thanks to the support of our Associate Members: Ask Direct, GoalBusters, Giving Architects, ST (Stephen Thomas Ltd), and Bluefrog.


