Skip to content

AI's Function in Cybersecurity Defense

AI assistants serving as valuable allies for novice security specialists, easing their initial dread. Established security analysts stand to benefit as well, with the technology automating tedious tasks and allowing them to focus on honing their expertise.

AI's contribution to digital security measures
AI's contribution to digital security measures

AI's Function in Cybersecurity Defense

In the rapidly evolving world of technology, organizations are increasingly recognising the need to address the ethical standards of Artificial Intelligence (AI) as they delve deeper into its impact. This is a critical concern, especially as AI becomes more accessible to the public, both for beneficial and malicious purposes.

One such organisation at the forefront of this discussion is Google DeepMind. While the name of the Chief Information Security Officer (CISO) focusing on AI ethics and security impacts is not explicitly known, the company has made significant strides in this area. Notable initiatives include the establishment of the DeepMind Ethics and Society unit, led by philosopher Nick Bostrom.

Other tech giants like Microsoft, Facebook, and Google are also embracing AI to strengthen their cybersecurity measures. They are utilising AI red teams, a group of experts with a mix of cybersecurity and machine learning backgrounds, to investigate vulnerabilities in their AI systems. These teams are particularly useful for anyone working with large computational models or general purpose AI systems that have access to multiple applications.

The advent of generative AI, represented by models like OpenAI's ChatGPT and other chatbots, represents a significant shift in the realm of AI. However, this technology also brings new challenges. One of the dangers of generative AI is the lack of safeguards to prevent AI hallucinations, where the technology provides incorrect information. This underscores the importance of ensuring that these models are not only providing accurate information but also adhering to ethical standards, such as not disclosing sensitive or regulated data.

Security teams need to consider these ethical implications carefully. Disinformation, for instance, can be an example of an ethics problem rather than a security problem. Rumman Chowdhury, co-founder of Bias Buccaneers, emphasises the need to address both ethics and security problems in AI.

The distinction between ethics and security in AI is important. While cybersecurity focuses on combating malicious actors, ethics focuses on context and unintended consequences. Vijay Bolina, a CISO with Google DeepMind, agrees, stating that AI red teams are an important way to challenge safety and security controls in AI systems.

AI is poised to revolutionise the field of cybersecurity, tipping the scales in favour of defenders, according to Vasu Jakkal, corporate vice president with Microsoft Security Business. Jakkal believes that AI can help solve the talent shortage in AI cybersecurity, with generative AI acting as an ally for new security professionals, helping them learn about investigation, reverse engineering, and threat hunting.

Machine-driven tools have already improved cybersecurity systems by handling repetitive tasks, freeing up seasoned security analysts to develop their skills further. However, it is crucial to remember that AI is not infallible. It can display distributional bias and hallucinations, creating new security risks.

As corporate stakeholders become more interested in understanding the risk calculus of their technology stacks, the importance of addressing AI ethics and security cannot be overstated. It is a complex and evolving field, but one that is essential for ensuring the safe and ethical use of AI in the future.

Read also: