AI is moving faster than the rules that govern it, and the tech sector is under pressure to respond. Companies want the commercial gains that AI promises, yet every fault brings the risk of reputational damage, regulatory penalties and loss of trust. That tension is driving demand for formal training on how these tools should be used responsibly.
Compliance training provider Skillcast has reported a 250% increase in interest in its Responsible Use of Artificial Intelligence course over the last month. The company said employees want practical guidance on how to apply AI at work without crossing ethical or legal lines.
Corporate boards face pressure to unlock commercial gains from AI without breaching fast-changing ethical and regulatory rules. According to Skillcast, UK Google searches for "ethics of artificial intelligence" have risen 25% year on year, which could indicate a shift towards practical workplace policies rather than theory alone.
Recent operational issues have added urgency. In December, outages at Amazon Web Services were linked in at least two cases to its internal AI systems. The incidents fed into debate about the limits of autonomous technology and the need for closer human oversight.
Vivek Dodd, CEO of Skillcast, said businesses are rethinking their stance. “The honeymoon phase of AI is over, and we are seeing a fundamental shift in how businesses are viewing this latest development. The initial excitement around its commercial potential is now being dampened by the very real risks it poses when operating outside of clear and correct rules and regulations.”
He added that awareness alone does not work. “For HR and compliance leaders, simply having a general awareness of AI is no longer enough. The discourse surrounding Amazon’s internal use of AI agents serves as a significant case study, highlighting why companies must move toward a deeper understanding of oversight and accountability. Teams must be empowered to understand how it works fully, feel confident navigating and weeding through a landscape of misinformation, and be able to identify the correct regulations and the framework their business workforces’ AI applications should operate in.”
What Are Companies Teaching Their Staff?
Skillcast’s Responsible Use of Artificial Intelligence course runs for 35 minutes and targets all staff through its Compliance Essentials Library. The training sets out to help employees understand ethical considerations, follow guidelines and promote fairness in AI use. The company says misuse can lead to reputational damage, regulatory penalties and loss of trust.
The course covers basic definitions and practical risk management. Employees learn to identify different forms of AI, limit the use of certain systems to meet regulatory rules, and recognise concerns arising from the global AI regulatory landscape. It also sets out responsibilities and best practice under relevant AI regulations.
Practical scenarios are central to the programme. Modules ask learners to tell the difference between real and AI-generated voices and images, compare narrow and generative AI tools, and assess news examples. Risk sections cover AI crime, data privacy and security, copyright and plagiarism, misinformation, and discrimination and bias. Staff work through situations such as a chatbot failure, a fake urgent video message, and an AI tool used incorrectly under deadline pressure.
The company says learners will discover how to identify risks, address biases and use AI responsibly to maintain trust and integrity. The goal is to build regulatory resilience inside organisations that want to innovate without crossing legal or ethical lines.
How Is The Tech Sector Responding To All The Backlash?
Large tech companies continue to roll out AI systems across cloud platforms and internal operations. The incidents at Amazon Web Services show how closely watched these deployments have become. When systems go wrong, questions about oversight and accountability follow quickly.
Skillcast’s data indicates that professionals are not waiting for regulators alone to set the pace. The 250% surge in training interest and the 25% rise in UK searches point to demand for guidance that fits day-to-day decisions. Employees are expected to understand generative and agentic systems, keep up with regulatory changes and recognise industry risks as AI tools evolve.
Dodd said practical ethics should guide corporate thinking. “By focusing on practical workplace ethics, companies can build the regulatory resilience they need to innovate confidently, navigating a challenging and ever-changing landscape.”