Safer Internet Day takes place today, and public discussion often turns to stronger passwords and avoiding risky links. Enterprise leaders, however, are talking about the safety of the systems that keep services running around the clock.
Petre Agenbag, service delivery manager at Dariel, said online safety now ties directly to how businesses operate day to day. “The internet your customers experience is only as safe as the systems behind it. When those systems fail, the impact is immediate, visible, and very difficult to contain,” he said.
Many organisations run sales, payments, customer service and reporting through connected platforms. A fault does not need a cyber attack to cause harm. A failed integration or poor configuration can stop services and hit income. Agenbag said these events often stay out of headlines but harm trust and show how prepared a business really is.
What Does Resilience Mean In An Always-On Economy?
Online safety discussions often circle around hacking and data theft. Safer Internet Day also raises questions about recovery when technology breaks. Agenbag explained that security, availability and reliability now depend on each other. “An unstable platform is harder to secure. A compromised platform is instantly unstable,” he said.
Enterprises also carry responsibility beyond their own walls. When systems go down, customers cannot pay, partners cannot connect and staff cannot work. Agenbag said reliability affects whole networks of organisations. “Operational reliability is a responsibility to your entire ecosystem. When your systems fail, other businesses feel it too,” he said.
Preparation often proves more effective than fast reactions. Agenbag described safety as steady work done long before a problem appears. “True safety comes from boring, disciplined work. Monitoring before incidents, runbooks before outages, and clarity on ownership before something breaks,” he said.
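Agenbag’s “monitoring before incidents” point is easy to make concrete. The sketch below is a minimal uptime probe in Python that polls health endpoints and raises an alert on failure; the service names, URLs and five-second timeout are illustrative assumptions, not a description of any particular stack.

```python
# Minimal uptime probe: notice a failing service before customers do.
# Service names, endpoints and the timeout are placeholders.
import urllib.request
import urllib.error

SERVICES = {
    "payments": "https://payments.example.com/healthz",
    "reporting": "https://reporting.example.com/healthz",
}

def probe(url: str, timeout: float = 5.0) -> bool:
    """Return True if the endpoint answers HTTP 200 within the timeout."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except (urllib.error.URLError, TimeoutError):
        return False

for name, url in SERVICES.items():
    if not probe(url):
        # In practice this would page the owning team named in the runbook.
        print(f"ALERT: {name} failed its health check")
```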
How Is AI Being Used To Protect Vulnerable Users Online?
Mental health safety also shapes Safer Internet Day in 2026. Suicide prevention charity Ripple Suicide Prevention has unveiled an AI system that blocks harmful self-harm and suicide content before users see it.
The charity said the system scans websites, forums and social media across the internet. This strengthens Ripple’s BrowserShield tool, which works inside a user’s browser and does not collect personal data. Ripple said privacy stays intact because no browsing history or search terms leave the device.
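Ripple has not published how BrowserShield works internally. Purely as an illustration of the on-device pattern the charity describes, the Python sketch below keeps the term list and the matching logic local, so the query itself never leaves the machine; the terms and support URL are invented placeholders, not Ripple’s actual data.

```python
# Illustrative on-device filter: the term list ships with the client and
# matching happens locally, so no search terms or history are transmitted.
# Terms and the support URL are placeholders, not Ripple's actual data.
HARMFUL_TERMS = {"placeholder harmful phrase", "another placeholder"}
SUPPORT_URL = "https://example.org/get-help"  # hypothetical

def check_search(query: str) -> str | None:
    """Return a support-page redirect if the query matches, else None.

    Nothing is logged or sent anywhere: the decision is made entirely
    against the local copy of the term list.
    """
    normalised = query.lower().strip()
    if any(term in normalised for term in HARMFUL_TERMS):
        return SUPPORT_URL
    return None
```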
Ripple was founded in 2021 by Alice Hendy MBE after the death of her brother Josh. The technology now protects 1.9 million active users across 50 countries and has intercepted more than 100,000 harmful searches. Thirty-two people have confirmed that the intervention stopped them taking their own lives, according to the charity.
David Savage, chief technology officer at Ripple Suicide Prevention, said the system outperforms standard tools. “Even now, before full capacity is achieved, the Ripple BrowserShield identifies harmful searches 230% more effectively than the mainstream search engines, offering comprehensive protection to individuals searching for ways to self-harm or take their own lives,” he said.
Agenbag said Safer Internet Day should prompt honest questions inside organisations. “Being safer on the internet isn’t about hoping nothing goes wrong. It’s about building systems and organisations that are ready when it does.”
Experts Speak On Safer Internet Day
Our Experts:
- Bartosz Skwarczek, Founder, G2A.COM
- Kristel Kruustük, Founder, Testlio
- Marc Rubbinaccio, VP of Information Security, Secureframe
- Shrav Mehta, Founder and CEO, Secureframe
- Éireann Leverett, FIRST Liaison and Lead Member, FIRST’s Vulnerability Forecasting Team
- Chris Gibson, CEO, FIRST
- Ionut Mihai Chelalau, FIRST Transportation & Mobility SIG Chair and Cybersecurity Consultant, Diconium
- Trey Darley, Standards SIG and Time Security SIG Lead at FIRST and Founder, Proper Tools
- Hadyn Green, Principal Communications Advisor, FIRST
Bartosz Skwarczek, Founder, G2A.COM
“Trust is the currency of digital commerce. At G2A.COM, safety is not a feature we add on, it is the foundation we build on.
“In an environment where innovation accelerates daily, our responsibility is to stay ahead of risk, not limiting ourselves to just reacting to it. We treat security as a core product capability. That means layered defences, advanced threat modeling, rigorous seller verification, secure payment protections, strong account safeguards, and a dedicated Trust & Safety function overseeing high-risk activity. Secure-by-design is embedded into our platform architecture, so protection is proactive, continuous, and scalable.
“AI is reshaping both opportunity and threat. Deepfakes, synthetic identities, and AI-powered social engineering are raising the bar for everyone in our industry. Our approach is simple: assume deception is possible and verify at every step. We invest in AI-driven anomaly detection, stronger verification for sensitive actions, fast impersonation takedowns, and ongoing education for both our teams and users. We also actively collaborate across the ecosystem to strengthen standards around authenticity and accountability online.
“Security is a shared responsibility. Technology, policy, and user awareness must work together. Enabling multi-factor authentication, protecting credentials, staying within official channels, and reporting suspicious activity are small actions that collectively reinforce security.
“Safer Internet Day is a reminder, but for us, this commitment is constant. As threats evolve, so will our investment in security, privacy, and responsible technology solutions. Our goal is clear: to protect trust at scale and ensure every transaction on G2A.COM is backed by resilience, transparency, and leadership.”
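The multi-factor authentication Skwarczek recommends usually means a time-based one-time password alongside the regular credential. As a sketch of what that second factor actually is, here is a minimal RFC 6238 TOTP generator using only the Python standard library; the demo secret is a well-known example value, not a real credential.

```python
# Minimal RFC 6238 TOTP, stdlib only: the code is derived from a shared
# secret and the current 30-second time window.
import base64, hashlib, hmac, struct, time

def totp(secret_b32: str, digits: int = 6, step: int = 30) -> str:
    key = base64.b32decode(secret_b32, casefold=True)
    counter = struct.pack(">Q", int(time.time()) // step)
    mac = hmac.new(key, counter, hashlib.sha1).digest()
    offset = mac[-1] & 0x0F  # dynamic truncation per the RFC
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10**digits).zfill(digits)

# Demo secret only; real secrets come from the provisioning QR code.
print(totp("JBSWY3DPEHPK3PXP"))
```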
Kristel Kruustük, Founder, Testlio
Adopt a “double verification” mindset for everything AI tells you
“We’ve entered an era where verifying AI-generated outputs is table stakes now. I’ve reached a point where I fact-check nearly everything AI tells me: the sources, the quotes, the statistics. When I ask any AI chatbot like ChatGPT or Perplexity to give me sources, I’m checking if those sources are actually real. When I ask for quotes, I’m Googling to confirm they exist. Sometimes they don’t.
“This matters for personal safety because AI models are also known to be people-pleasers. That means if you feed them incorrect assumptions or leading questions, they’ll reinforce misinformation rather than correct it. Double verification protects you from acting on fabricated information, whether that’s a fake statistic you’re about to share at work or a “source” that doesn’t exist.
“Rule: if it affects money, reputation, health, or security, verify with a second, primary source.”
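Kruustük’s habit of checking whether AI-cited sources are real can be partly automated. A minimal sketch, assuming you only want to confirm a cited URL actually resolves; the example URL is a placeholder:

```python
# First pass of "double verification": does the cited URL even exist?
# Stdlib only; the example URL is a placeholder.
import urllib.request
import urllib.error

def source_exists(url: str, timeout: float = 5.0) -> bool:
    """True if the URL answers with a non-error HTTP status."""
    req = urllib.request.Request(url, method="HEAD")
    try:
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            return resp.status < 400
    except (urllib.error.URLError, ValueError):
        return False

print(source_exists("https://example.com/cited-paper"))  # check before sharing
```

A resolving URL is only the first check; the content still needs the human read Kruustük describes.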
Treat AI like a starting point, not a final answer
“The industry is moving so fast that it’s exciting and scary at the same time. We’ve already seen AI-generated material show up in legal cases and filings, with real consequences when no one verifies it. Courts, employers, and everyday users are all grappling with a question that didn’t exist a few years ago: is what I’m seeing actually true?
“If you’re using AI for anything involving your finances, health, career, or personal data, assume the output needs a human gut-check before you act on it. AI is a powerful tool, but it works best when curious, skeptical, and engaged humans stay in the driver’s seat. The moment you stop questioning what AI tells you is the moment you put yourself at risk.”
Marc Rubbinaccio, VP of Information Security, Secureframe
On Identity as the New Attack Surface:
“Attackers have figured out that compromising identity is easier than directly hacking the software itself. Stolen credentials, hijacked sessions, and abused API tokens are becoming a reliable way to gain access to systems and exfiltrate data. For companies built on cloud infrastructure and third-party integrations, a single compromised service account or API key can give attackers the same direct access to sensitive data as a compromised user account.
“The mindset organisations need in 2026 is treating every login, token, and OAuth grant as a potential attack vector. Short-lived credentials, least-privilege access, and continuous monitoring are required controls for protecting customer data in a modern application.”
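The “short-lived credentials” control Rubbinaccio names can be sketched in a few lines. The following is an illustrative HMAC-signed token with a 15-minute expiry, not any particular vendor’s format; in production the signing key would live in a KMS, not in source.

```python
# Sketch of short-lived credentials: tokens carry an expiry and a
# signature, so a stolen token ages out instead of working forever.
import base64, hashlib, hmac, json, time

SIGNING_KEY = b"demo-key-use-a-kms-in-production"  # placeholder

def issue(subject: str, ttl_seconds: int = 900) -> str:
    claims = {"sub": subject, "exp": int(time.time()) + ttl_seconds}
    body = base64.urlsafe_b64encode(json.dumps(claims).encode())
    sig = hmac.new(SIGNING_KEY, body, hashlib.sha256).hexdigest()
    return f"{body.decode()}.{sig}"

def verify(token: str) -> dict | None:
    body, sig = token.rsplit(".", 1)
    expected = hmac.new(SIGNING_KEY, body.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return None  # forged or tampered
    claims = json.loads(base64.urlsafe_b64decode(body))
    if claims["exp"] < time.time():
        return None  # expired: a leaked token dies on its own
    return claims
```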
On AI-Powered Social Engineering:
“Phishing is already becoming superpowered through the use of AI. In 2026, we’ll see AI-powered social engineering attacks that are nearly indistinguishable from legitimate communications. With social engineering linked to almost every successful cyberattack, threat actors are already using AI to clone voices, copy writing styles, and generate deepfake videos of people they are impersonating.
“The next wave of defense will require specific training on the new techniques attackers are using, as well as technology improvements such as behavior-based detection and real-time identity verification.”
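Behavior-based detection, in its simplest form, means comparing an action against an account’s own history. A toy sketch, with an invented feature (login hour of day) and an invented threshold:

```python
# Toy behaviour-based check: score a login against the account's history
# and require step-up verification for outliers. Feature and threshold
# are invented for illustration.
from statistics import mean, stdev

def anomaly_score(history: list[float], observed: float) -> float:
    """Z-score of the observed value against the account's history."""
    if len(history) < 2:
        return 0.0
    sigma = stdev(history)
    return abs(observed - mean(history)) / sigma if sigma else 0.0

# Login hour of day for one account over recent sessions.
past_login_hours = [9.0, 9.5, 10.0, 8.5, 9.0]
if anomaly_score(past_login_hours, observed=3.0) > 3.0:
    print("step-up verification required")  # challenge, don't just allow
```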
Shrav Mehta, Founder and CEO, Secureframe
On Lessons from 2025’s Biggest Breaches:
“The biggest breaches of 2025 came from preventable failures: reused passwords, unmonitored vendor access, and data that should never have been collected in the first place. When 16 billion credentials leak in a single event, it’s a wake-up call that the fundamentals still matter most.
“Organisations need to ask themselves a hard question: if you don’t need to store certain customer data, why are you collecting it? Data minimisation isn’t just good privacy hygiene, it’s risk reduction.”
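The minimisation Mehta describes can be enforced mechanically at the point of storage. A minimal sketch with hypothetical field names: keep an explicit allowlist and drop everything else, so data you never store can never leak.

```python
# Data minimisation in one move: an explicit allowlist of fields, and
# everything else is dropped before storage. Field names are hypothetical.
STORED_FIELDS = {"order_id", "sku", "amount", "currency"}

def minimise(record: dict) -> dict:
    """Persist only what the business function needs; the rest never
    reaches the database, so it can never leak from it."""
    return {k: v for k, v in record.items() if k in STORED_FIELDS}

checkout = {"order_id": "A1", "sku": "X9", "amount": 20, "currency": "EUR",
            "card_number": "4111...", "date_of_birth": "1990-01-01"}
print(minimise(checkout))  # card number and date of birth are discarded
```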
On the AI Security Paradox:
“93% of companies say security is a top priority, yet 68% leave one or fewer full-time employees to handle compliance while AI-powered attacks surge. Teams are spending eight-plus hours a week on paperwork instead of protecting customer data, and manual compliance models are breaking down when the stakes are highest.
“The gap between urgency and capacity is creating real business consequences, from lost deals to increased risk exposure. Organisations can no longer afford to treat security as a shared side responsibility.”
Éireann Leverett, FIRST Liaison and Lead Member of FIRST’s Vulnerability Forecasting Team
On Vulnerability Forecasting
“We’re forecasting nearly 60,000 new vulnerabilities in 2026, and it’s entirely possible we will hit 70,000 to 100,000. Every one of those is a potential doorway to your organisation’s sensitive data, and no single security team can patch them all. The question organisations need to ask right now is: are my people and processes ready to handle this volume, and am I prioritising the vulnerabilities that actually put my data at risk? Forecasting lets defenders stop reacting to every new CVE and start making strategic decisions about where to focus limited resources before attackers exploit the gaps.”
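The prioritisation Leverett describes is, at its core, a sort over limited patching capacity. A sketch with invented CVE IDs and scores, ranking findings by an EPSS-style exploit probability weighted by internet exposure:

```python
# Sketch of forecast-driven triage: rank findings by exploit likelihood
# weighted by asset exposure. IDs and scores are illustrative, not real data.
findings = [
    {"cve": "CVE-2026-0001", "exploit_prob": 0.02, "internet_facing": False},
    {"cve": "CVE-2026-0002", "exploit_prob": 0.81, "internet_facing": True},
    {"cve": "CVE-2026-0003", "exploit_prob": 0.40, "internet_facing": True},
]

def priority(f: dict) -> float:
    # Double the weight of anything reachable from the internet.
    return f["exploit_prob"] * (2.0 if f["internet_facing"] else 1.0)

for f in sorted(findings, key=priority, reverse=True):
    print(f["cve"], round(priority(f), 2))  # patch from the top down
```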
Chris Gibson, CEO, FIRST
On Organisational Resilience
“Too many organisations treat a breach as ‘resolved’ the moment systems come back online, but failing to fully cleanse systems and validate what data was stolen leaves attackers with persistent access for months or years. The fundamentals of protecting sensitive data still matter most: segmenting networks, enforcing multi-factor authentication, and ruthlessly retiring old credentials before they become backdoors. But here’s what most organisations miss: no company can solve data breaches and cybersecurity in isolation. The organisations that recover fastest are the ones with trusted networks already in place, sharing threat intelligence and coordinating response before a crisis hits.”
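Gibson’s “ruthlessly retiring old credentials” translates to a recurring sweep of the credential inventory. A sketch assuming a hypothetical export with last-used timestamps and a 90-day cutoff:

```python
# Stale-credential sweep: flag anything unused for 90+ days. The inventory
# format stands in for whatever your IdP or secrets manager exports.
from datetime import datetime, timedelta, timezone

MAX_AGE = timedelta(days=90)
now = datetime.now(timezone.utc)

credentials = [  # hypothetical export
    {"id": "svc-reporting", "last_used": now - timedelta(days=200)},
    {"id": "alice-api-key", "last_used": now - timedelta(days=3)},
]

for cred in credentials:
    idle = now - cred["last_used"]
    if idle > MAX_AGE:
        print(f"retire {cred['id']}: unused for {idle.days} days")
```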
Ionut Mihai Chelalau, FIRST Transportation and Mobility SIG Chair and Cybersecurity Consultant, Diconium
On the Privacy Trade-Off
“Privacy, as most people understand it, cannot truly exist in today’s connected ecosystem. Every time you use an AI assistant, some of your data will ‘leak’ into training datasets, and despite claims of anonymisation, device fingerprints and usage patterns leave identifiable traces. The uncomfortable truth is that customers worldwide are willingly trading privacy for convenience, and unless strong regulations force the issue, manufacturers won’t voluntarily cut into profit margins to protect data they can monetise.”
Trey Darley, Standards SIG and Time Security SIG Lead, FIRST and Founder, Proper Tools
On Designing for Human Limits
“AI in security has a fundamental thermodynamic problem: every tool we add increases system complexity faster than it increases our ability to coordinate that complexity. As foundation models scale past trillions of parameters, we’re hitting Gödelian limits — verifying alignment across all possible states becomes formally undecidable, not merely NP-hard.
“In 2026, organisations will realise they’ve crossed a Rubicon of complexity. The answer isn’t more training or more tools, it’s simpler systems that fail safely. Reduce complexity, reduce attack surface, and reduce cognitive load on the human. Security that depends on human perfection is security destined to fail.”
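“Simpler systems that fail safely” has a standard concrete form: fail closed. The sketch below wraps a hypothetical policy check so that any error means denial rather than access:

```python
# Fail-closed wrapper: any error in the authorisation path is a denial.
# The policy engine here is a placeholder for whatever is actually in use.
from typing import Callable

def fail_closed(check: Callable[[str, str], bool], user: str, action: str) -> bool:
    """Deny on any exception: an unreachable or broken policy engine
    must never default to 'allow'."""
    try:
        return check(user, action)
    except Exception:
        return False  # the safe failure mode is denial

def flaky_policy_engine(user: str, action: str) -> bool:
    raise ConnectionError("policy service unreachable")

print(fail_closed(flaky_policy_engine, "alice", "delete"))  # False, not a crash
```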
Hadyn Green, Principal Communications Advisor, FIRST
On Crisis Communications
“When a breach hits, silence about what happened to customer data creates a vacuum that speculation and misinformation fill fast. Organisations should establish backup communication channels across multiple networks and consider letting trusted authorities speak on their behalf, not to dodge accountability but to ensure accurate information reaches affected users while your team focuses on containment. The hardest problem in cybersecurity isn’t the technical response, it’s getting people to trust and act on what you’re telling them about their data.”