
The economic challenges faced by non-profit organizations in implementing artificial intelligence effectively


In the realm of artificial intelligence (AI), the non-profit sector is exploring new frontiers. The panel discussion held at The Langham in Melbourne's Southbank recently highlighted this shift, with key figures from the Good Shepherd, AutogenAI, and the University of Melbourne in attendance.

Good Shepherd, the largest provider of financial wellbeing programs in Australia, handles over 100,000 calls for financial assistance annually. The organisation, which manages more than 15 family violence refuges and operates a 24-hour crisis hotline, is now trialling Co-pilot to explore how AI can support its mission.

The University of Melbourne recently co-authored a global AI report titled "Trust, attitudes and use of artificial intelligence: A global study". The report recommends a 'risk-stratified approach' to AI, classifying applications according to their potential for harm. It also identifies four factors for implementing AI well: building trust, boosting AI literacy, strengthening governance, and transformational leadership.

According to Dr. Gillespie's research, building stakeholder trust requires a problem-led approach to AI, where AI is deployed to solve a clear, specific issue. Stella Avramopoulos, CEO of Good Shepherd, is putting this into practice, applying a risk-stratified approach at her organisation.

However, the inherent risk aversion of boards and leaders in the non-profit sector has been a barrier to the adoption of new technologies. Emma Crichton, the APAC CEO of AutogenAI, warned against using AI as a universal solution, stating that "AI for everyone is AI for no one". She emphasised the need for a human to be involved in high-stakes decisions made by AI, particularly in a sector dealing with vulnerable populations.

The complexity of human services work makes it difficult to automate processes in the non-profit sector. Low-risk AI applications can be trialled and adopted quickly, while high-risk uses require caution, additional governance, and a "human in the loop".
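The risk-stratified, human-in-the-loop process described above can be sketched in a few lines of code. This is a minimal illustration only: the risk tiers, example use cases, and function names are assumptions for the sketch, not taken from the report or the panel.

```python
# Illustrative sketch of a risk-stratified approach with a "human in the loop".
# The tier assignments below are hypothetical examples, not from the report.
HIGH_RISK = {"eligibility assessment", "crisis triage"}   # decisions affecting vulnerable people
LOW_RISK = {"meeting summaries", "grant-draft wording"}   # internal, easily reviewed outputs

def route_ai_output(use_case: str, ai_output: str, human_review) -> str:
    """Return the approved output, requiring human sign-off for high-risk uses."""
    if use_case in HIGH_RISK:
        # High-stakes decision: a human must confirm or amend the AI's suggestion.
        return human_review(ai_output)
    # Low-risk use: adopt the AI output directly, subject to normal spot checks.
    return ai_output

# Example: a reviewer signs off on a high-risk suggestion before it is acted on.
approved = route_ai_output("crisis triage", "suggested referral",
                           human_review=lambda s: s + " (reviewed)")
```

The point of the structure is that the governance burden scales with risk: low-risk paths stay fast, while high-risk paths cannot bypass a person.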

Public trust in AI remains low, driven by concerns about misinformation, job displacement, and data security. In a related effort, the German Red Cross last week tested a newly developed trust framework for AI in the non-profit sector. The discussion emphasised the need for transparency and accountability in how AI is developed and deployed.

The ultimate goal is a synthesis of human expertise and technological capability in the non-profit sector. The panellists advise that non-profits should proactively adopt AI, leveraging their balance sheets to invest in the future. They stress that the goal is not to replace human workers but to augment their capabilities and improve the delivery of services.

In conclusion, the panel discussion at The Langham in Melbourne's Southbank marked a significant step forward in the integration of AI in the non-profit sector. The panellists emphasised the need for a balanced approach, combining the power of AI with the compassion and expertise of human workers. As Good Shepherd continues to trial AI applications like Co-pilot, we can expect more non-profit organisations to follow suit, harnessing AI to improve the lives of those they serve.
