
FEATURE: INVESTMENTS
Here’s the thing – you can’t stop this train, so you better learn to steer it.
My approach is to let a thousand flowers bloom but build the right garden walls first.
Your marketing team is already using ChatGPT, your finance folks are building automation, your engineers are coding with AI assistants. The question isn’t whether people will use AI – it’s whether they’ll do it safely or go rogue with shadow IT.
The key is investing in the right foundational infrastructure.
First, you need an LLM proxy that routes all AI interactions through a centralized system with enterprise agreements – this ensures your data isn’t being used to train someone else’s models.
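As a rough sketch of what that routing can look like, assume a hypothetical internal gateway that speaks an OpenAI-compatible API; the URL, model alias and header below are placeholders, not any specific product’s interface:

```python
# Minimal sketch: every AI call goes through one internal gateway rather than a
# public endpoint. The gateway URL, key, model alias and header are hypothetical.
from openai import OpenAI

client = OpenAI(
    base_url="https://llm-gateway.internal.example/v1",  # central proxy covered by enterprise agreements
    api_key="key-issued-by-gateway",                     # per-team credential, not a vendor key
)

response = client.chat.completions.create(
    model="approved-chat-model",             # alias the gateway resolves to an approved backend
    messages=[{"role": "user", "content": "Summarize Q3 churn drivers."}],
    extra_headers={"X-Team": "marketing"},   # lets the gateway log usage and enforce policy per team
)
print(response.choices[0].message.content)
```

Because the gateway sits in the middle, usage logging, data-protection terms and model approval all happen in one place instead of in every team’s tooling.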
Second, provide secure app hosting where people can build and deploy agents without jumping through security hoops, with both sandbox environments for experimentation and production-grade deployments.
You also need comprehensive security tooling in place – SAST, DAST, NHI (non-human identity) security, firewalls and role-based access control – but implemented in lightweight ways that don’t stop people from innovating. This is especially critical because AI tools are increasingly used for coding by people who aren’t technical, and those users are prone to leaking secrets and hardcoding credentials.
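As a simplified illustration of the kind of lightweight check that can run in a CI pipeline, here is a toy secret scan; real SAST and secret-detection tools cover far more patterns than this:

```python
# Toy secret scan for CI: flags a few obvious credential patterns before merge.
# Real scanners detect hundreds of formats; this only illustrates the idea.
import re
import sys
import pathlib

PATTERNS = {
    "AWS access key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "private key block": re.compile(r"-----BEGIN (RSA |EC )?PRIVATE KEY-----"),
    "hardcoded password": re.compile(r"password\s*=\s*['\"][^'\"]{8,}['\"]", re.IGNORECASE),
}

findings = []
for path in pathlib.Path(".").rglob("*.py"):
    text = path.read_text(errors="ignore")
    for name, pattern in PATTERNS.items():
        if pattern.search(text):
            findings.append(f"{path}: possible {name}")

if findings:
    print("\n".join(findings))
    sys.exit(1)  # fail the pipeline so the credential never reaches production
```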
Don’t forget automation platforms like Zapier, n8n or Make.com that let people plug systems together without coding – this is where non-technical users can create powerful workflows safely.
You also need smart data access management that connects your corporate systems to AI agents with role-based access control. People should only see the data they’re supposed to see – but when AI can access real company data safely, that’s where the magic happens.
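A minimal sketch of that idea, with hypothetical roles and a stand-in CRM query, might look like this:

```python
# Sketch of role-based filtering in front of an AI agent: the agent only ever
# receives fields the requesting user is entitled to see. Roles, field names
# and the fetch function are placeholders for real corporate systems.
ROLE_VISIBLE_FIELDS = {
    "sales":   {"account_name", "deal_stage", "owner"},
    "finance": {"account_name", "deal_stage", "owner", "contract_value"},
}

def fetch_crm_records():
    # Stand-in for a real CRM query.
    return [{"account_name": "Acme", "deal_stage": "Negotiation",
             "owner": "j.doe", "contract_value": 120000}]

def records_for_agent(user_role: str):
    """Return only the fields this role may expose to the agent's context."""
    allowed = ROLE_VISIBLE_FIELDS.get(user_role, set())
    return [{k: v for k, v in record.items() if k in allowed}
            for record in fetch_crm_records()]

print(records_for_agent("sales"))  # contract_value is stripped for sales users
```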
And here’s what nobody talks about – you’re about to have an explosion of service accounts, API keys and automated systems. You need tools to manage these non-human identities before they become a security nightmare.
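A rough sketch of one such check, using illustrative records in place of a real IAM inventory:

```python
# Sketch of a non-human identity hygiene check: flag service-account keys that
# are stale or unowned before they pile up. The records are illustrative; in
# practice they would come from your cloud provider's IAM or secrets APIs.
from datetime import date

service_keys = [
    {"name": "ci-deployer",    "owner": "platform-team", "created": date(2025, 1, 10)},
    {"name": "report-bot-key", "owner": None,            "created": date(2023, 6, 2)},
]

MAX_AGE_DAYS = 90

for key in service_keys:
    age_days = (date.today() - key["created"]).days
    if key["owner"] is None:
        print(f"{key['name']}: no owner on record, assign one or revoke")
    if age_days > MAX_AGE_DAYS:
        print(f"{key['name']}: {age_days} days old, rotate")
```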
At GitGuardian, we practice what we preach. We use AWS Bedrock for LLM access with enterprise data protection agreements, and our team uses an LLM proxy that lets them build agents while maintaining strict role-based access controls over our CRM, data lake and corporate wiki.
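For readers unfamiliar with Bedrock, a minimal call through its Converse API looks roughly like this; the region and model ID are examples, and the models enabled in a given account will vary:

```python
# Minimal Bedrock Converse call via boto3. Region and model ID are examples;
# in practice the call would go through the team's proxy with RBAC applied first.
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

response = bedrock.converse(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",
    messages=[{"role": "user", "content": [{"text": "Draft a summary of open support tickets."}]}],
)
print(response["output"]["message"]["content"][0]["text"])
```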
The result? People are building agents that talk to other agents. Our automation is getting sophisticated, but it’s all happening within the guardrails we’ve set up.
Your job as CIO isn’t to say no to AI – it’s to make saying yes as safe as possible. Build the infrastructure, set the guardrails, then get out of the way and let your people innovate.
The companies that figure this out will have people experimenting safely, learning fast and sharing knowledge across teams.
Eamonn O’Neill, Co-founder and CTO, Lemongrass
If your organization is like most enterprises today, its Generative AI adoption strategy hinges on using cloud-based AI services delivered through hyperscale platforms – such as Microsoft Copilot and Google Gemini.
Part of the appeal of these AI solutions is that they are managed by hyperscale vendors with excellent track records of providing high availability and performance. You might think, then, that there is little your organization needs to do to ensure the resilience of its AI services.
In reality, though, there is a litany of risks surrounding Generative AI that extend beyond the hosting of AI services themselves.
As this article explains, managing those risks is crucial to developing an effective AI resilience strategy.
AI resilience is the ability of AI infrastructure and services to operate reliably, even in the face of unexpected challenges or disruptions. When AI systems are resilient, they rarely crash or experience significant performance degradations.
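One common pattern behind that reliability is retrying transient failures and falling back to a secondary model so a business service degrades gracefully instead of failing outright. A rough sketch, with a placeholder standing in for a real model client:

```python
# Sketch of a resilience pattern: retry a flaky AI call with backoff, then fall
# back to a secondary model. call_model is a placeholder for a real client.
import time

def call_model(model_name: str, prompt: str) -> str:
    raise TimeoutError("upstream AI service unavailable")  # placeholder implementation

def resilient_completion(prompt: str) -> str:
    for model in ("primary-model", "fallback-model"):
        for attempt in range(3):
            try:
                return call_model(model, prompt)
            except (TimeoutError, ConnectionError):
                time.sleep(2 ** attempt)  # exponential backoff before retrying
    return "Service temporarily unavailable."  # last-resort degraded answer

print(resilient_completion("Classify this support ticket."))
```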
As AI becomes a more central part of enterprise IT strategies, ensuring the resilience of AI systems is also growing more critical.
For companies that depend on AI to power mission-critical business services, keeping AI solutions resilient is just as important as maximizing the uptime of traditional IT resources like servers and storage infrastructure.
It’s the only way to retain the ability to innovate using AI while still keeping AI-related risks in check.