In a move to bolster cybersecurity and protect AI models against hacking and sabotage, the UK government has officially unveiled a host of new measures, which it also hopes will set a new global standard.
In the last year alone, 50 per cent of businesses and 32 per cent of charities have reported cyber breaches or attacks, with phishing remaining the most common type of attack.
The new government measures aim to show developers how to build software securely, to prevent attacks like last year's breach of the MOVEit software, which saw sensitive data compromised at thousands of organisations globally.
UK Technology Minister Saqib Bhatti announced two new codes of practice at CYBERUK, the government's cybersecurity conference, designed to help developers improve the cybersecurity of AI models and software.
The codes set out requirements for developers to make their products more resilient against tampering, hacking and sabotage. The aim is to boost confidence in the use of AI models across industries, helping businesses improve efficiency and drive growth and innovation.
In a speech, Bhatti discussed findings from the government's annual Cyber Sectoral Analysis Report, also published today, which shows that the UK's cybersecurity sector grew 13 per cent on the previous year and is now worth almost £12billion – putting it on a par with other large sectors such as the automotive industry.
The Technology Minister also revealed plans to foster cyber skills among young people, with the UK launching a campaign later this year to encourage entries to a brand-new national cyber skills competition for 18 to 25-year-olds.
‘Businesses need to stay ahead’
While the new measures look to ensure developers take the security of AI models seriously, UK firms still need to stay on top of new regulations, says Kevin Curran, IEEE senior member and professor of cybersecurity.
“Understanding how generative AI systems arrive at their outputs can be difficult. This lack of transparency means it can be hard to identify and address potential biases or security risks.
Kevin Curran, IEEE senior member and professor of cybersecurity
“Generative AI systems are particularly vulnerable to data poisoning and model theft. If companies cannot explain how their generative AI systems work or how they have reached their conclusions, it can raise concerns about accountability and make it difficult to identify and address other potential risks.
“To mitigate this, organisations should consult data protection experts, keep abreast of regulatory changes and develop a more robust security strategy. This approach helps not only in avoiding legal pitfalls but also in maintaining consumer trust by upholding ethical AI practices and ensuring data integrity.
“Other best practices include minimising and anonymising data use, establishing robust data governance policies, conducting regular audits and impact assessments, securing data environments, and reminding staff of current security protocols.
“Moving forward, businesses need to stay ahead of potential threats. The threat landscape is constantly evolving, so organisations need to keep pace and ensure they are regularly reviewing and upgrading their defences. Some approaches that worked just a few years ago are now obsolete, and given how rapidly artificial intelligence has been rolled out in recent months, enterprises must adopt more comprehensive data protection strategies and tools to secure their systems.”
Protecting personal information
The government’s announcement of the new measures coincided with John Edwards, information commissioner at data protection regulator the ICO, delivering a speech at the New Scientist Emerging Technologies summit. He discussed how the most recent chapter of the ICO’s consultation on generative AI, which opened this week, covers individual rights in the training and deployment of generative AI models.
“Protection for people’s personal information must be baked in from the very start. It shouldn’t just be a tick-box exercise, a time-consuming task or an afterthought once the work is done,” explained Edwards.
Sarah-Jayne Van Gruene, COO of digital payment solution provider Payen, also described the greater emphasis on data protection in AI development as a step in the right direction.
Sarah-Jayne Van Gruene, COO of Payen
“The watchdog’s focus on data protection in AI should be a welcome step for the entire fintech industry. While AI offers exciting possibilities to revolutionise our sector, we must be cautious of potential risks. This is not a regulatory hurdle, rather it will help build trust with consumers.
“AI shouldn’t be demonised, but we must educate ourselves on how to navigate it responsibly. By emphasising data protection by design and default, the watchdog encourages a proactive approach that mitigates these risks from the outset.
“Aside from that, businesses need to upskill the workforce so that employees use it responsibly. For example, so they know why they can’t input sensitive customer information into tools such as OpenAI, as this breaks GDPR and data regulations.
“Overall, AI requires a balanced approach. Businesses should invest in AI but equally invest in training that supports this. Employees using the technology need to know how to do so legally and ethically.”