A recent survey of 500 US-based business leaders by Conversica reveals a worrying disconnect between leaders' recognition of the importance of ethical AI guidelines and the actions they actually take. At the same time, the World Economic Forum (WEF) is asking business leaders to do more to ensure they are using AI responsibly.
Research Findings: A Gap Between Awareness and Action
The Conversica survey highlights a concerning trend: despite a majority of respondents (73%) acknowledging the importance of ethical AI guidelines, a mere 6% have actually established them within their organisations. The discrepancy is particularly alarming given that 65% of companies are projected to have implemented AI-powered services within the next year.
Key concerns identified by business leaders regarding AI usage include:
- Accuracy of models: 20% of leaders at companies already using AI were concerned about a lack of accuracy.
- Lack of transparency: 22% of respondents from companies already using AI were concerned about the opacity of AI systems and their decision-making processes.
- Legal implications and intellectual property issues: These were the top concerns among companies with no plans to adopt AI in the coming year.
This disconnect between awareness and action suggests that many businesses are adopting a reactive approach to AI governance, prioritising implementation over planning. Such an approach is fraught with risks, potentially leading to security breaches, regulatory violations, negative user experiences and damage to brand reputation.
Too many leaders are unaware of the challenges that using AI from third-party suppliers can bring, such as security risks, limited transparency and questionable ethical decision-making. This knowledge gap can be attributed in part to AI providers not making such information easily accessible, hindering informed decision-making by customers.
The WEF Perspective: Corporate Integrity as the Cornerstone of Ethical AI
The WEF wants to see corporate integrity shaping the future of AI. They emphasise that ethical AI extends beyond mere compliance with regulations; it necessitates a proactive commitment to ensuring that AI technology serves the common good and aligns with shared human values.
In their recent article, the WEF urges business leaders to take proactive steps to ensure responsible AI development. They advocate for the creation of robust AI integrity ecosystems within companies, complete with ethical guidelines, due diligence processes and dedicated oversight bodies. Furthermore, the WEF emphasises the importance of empowering boards and investors to actively engage in discussions about AI ethics, integrating these considerations into strategic decision-making and risk assessments. To foster trust and encourage wider adoption of responsible AI practices, the WEF calls for transparency and accountability, suggesting companies should openly communicate their AI usage, disclose ethical guidelines, and establish clear mechanisms for accountability.
Can Business Leaders Bridge the Gap?
The insights from the Conversica survey and the WEF's recommendations offer a clear, if challenging, roadmap for leaders navigating the ethical challenges of AI integration. We know from our clients that it is far from easy to put good AI governance in place, so if you still have some way to go before your AI controls are comprehensive, consider the following:
- Prioritise proactive AI governance: Develop comprehensive ethical guidelines for AI usage before adopting any solutions. This proactive approach mitigates potential risks and ensures that ethical considerations are embedded from the outset.
- Conduct thorough due diligence on AI vendors: Evaluate the security measures, transparency guidelines and ethical parameters of potential AI providers. Demand clear explanations and documentation to ensure alignment with your organisation’s values and ethical standards.
- Foster a culture of AI awareness and responsibility: Educate employees about the ethical implications of AI and establish clear policies regarding its usage within the organisation. Encourage ongoing dialogue and feedback to ensure that AI integration is aligned with human values and societal well-being.
- Embrace transparency and stakeholder engagement: Communicate openly about your organisation’s AI principles, guidelines and practices. Actively engage with stakeholders, including employees, customers and regulators, to foster collaborative solutions.
- Champion the development of ethical AI standards: Support initiatives aimed at establishing industry-wide ethical guidelines and standards for AI development and deployment.
The journey toward ethical AI requires a collective effort, and we all need to "do better". By embracing these recommendations and actively engaging in the dialogue surrounding responsible AI, business leaders can play a pivotal role in shaping a future where AI technology serves humanity and contributes to a more equitable and sustainable world.