FEATURE: CYBERSECURITY

and, much earlier than that, there was an obsession with Big Data. Unfortunately, the industry too often grows attached to whichever latest and greatest technology is claiming the headlines that year.

Practitioners should not be focused on chasing down a panacea. It must be accepted that there is no "easy button" in cybersecurity, and organizations should instead focus their efforts on doing the basics well and leveraging technology, including AI, as part of an overall risk-based security program.
Proceed with caution
Organizations must be careful to avoid an overreliance on generative AI tools. LLMs and other generative models are clever, but they only produce the most statistically probable sequence of words in response to a question or prompt.

Effectively, the tool is just regurgitating variations on information that it has already been fed.
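To make "most statistically probable sequence of words" concrete, here is a toy sketch (nothing like a real LLM, purely illustrative): a bigram counter that, given a word, returns the word most frequently observed to follow it in a tiny corpus.

```python
from collections import Counter, defaultdict

# Toy illustration only: a real LLM uses a neural network over tokens,
# but the principle is the same -- predict the most probable next word.
corpus = "the cat sat on the mat the cat ran".split()

# Count how often each word follows each other word.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def most_probable_next(word):
    # Return the most frequently observed follower of `word`.
    return bigrams[word].most_common(1)[0][0]

print(most_probable_next("the"))  # "cat" follows "the" twice, "mat" once -> "cat"
```

The model has no notion of truth or intent; it simply echoes the statistics of its training data, which is exactly why its output needs human scrutiny.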
Generative AI can be helpful in writing code, as the industry has already seen with threat actors developing it for malicious purposes, but practitioners need to exercise more caution. It is not enough to rely on the code produced by a generative AI model unless an engineer can understand the code that is generated. If an engineer is not capable of writing the code on their own, they may not fully understand what the generated code will do when it is run, or where performance or security issues may exist.
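A hypothetical example of the kind of flaw an engineer must be able to spot: generated code that "works" in testing but concatenates user input into a SQL query. The schema and function names here are illustrative, not drawn from any real incident.

```python
import sqlite3

def find_user_unsafe(conn, username):
    # Looks plausible and passes a happy-path test, but concatenating
    # input into SQL permits injection (e.g. "x' OR '1'='1").
    return conn.execute(
        "SELECT name FROM users WHERE name = '" + username + "'"
    ).fetchall()

def find_user_safe(conn, username):
    # Parameterized query: the input is treated as data, never as SQL.
    return conn.execute(
        "SELECT name FROM users WHERE name = ?", (username,)
    ).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.executemany("INSERT INTO users VALUES (?)", [("alice",), ("bob",)])

payload = "x' OR '1'='1"
print(find_user_unsafe(conn, payload))  # leaks every row in the table
print(find_user_safe(conn, payload))    # correctly returns no rows
```

An engineer who could not have written this query themselves might never notice the difference between the two versions, which is precisely the risk described above.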
Lastly, despite some of the conversations that have started taking place, today's LLMs are nowhere near capable or sophisticated enough to replace security operations center (SOC) analysts. A human being is still required to make decisions and evaluate risk.
Navigating a future with generative AI
Organizations that want to unlock the benefits of generative AI should continue to experiment with the technology, but they must understand the risks associated with it.
One of the key concerns is the sensitive data risk attached to generative AI. Company intellectual property (IP) can be put at risk when fed to LLMs, with emerging techniques like prompt engineering being leveraged by threat actors to extract information that AI models have been exposed to.
Another emerging consideration involves copyright. Debates are unfolding in real time as the industry grapples with the question of who controls the outputs generated by AI models. With this conversation ongoing, companies must be cognizant that AI-generated results may not end up belonging to them.
The only way for organizations to step into the world of generative AI safely and securely is with a solid foundation of AI governance. Companies must make sure the appropriate policies are in place and communicated to their business users to help prevent misuse and mistakes.

With an understanding of the potential risks and the necessary safeguards implemented, security practitioners can harness the power of generative AI for good.