It is founded on two critical components: low-latency vector search (leveraging NVIDIA RAPIDS RAFT technology) and the ability to perform real-time, complex data queries.
This combination enables enterprises to enrich their generative AI applications with domain-specific analytical insights derived directly from the latest operational data.
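As a rough illustration of the underlying idea (not Kinetica's actual API), the sketch below combines vector similarity search with a freshness filter over operational records to select grounding context for an LLM. The record layout, function names and data are hypothetical; in a production system both steps would be pushed down into the database's native vector search and SQL engine.

```python
from datetime import datetime, timedelta, timezone

import numpy as np

# Hypothetical operational records: each has an embedding and a timestamp.
records = [
    {"text": "Cell tower A reporting packet loss",
     "ts": datetime.now(timezone.utc),
     "vec": np.array([0.9, 0.1, 0.0])},
    {"text": "Scheduled maintenance completed on router B",
     "ts": datetime.now(timezone.utc) - timedelta(hours=30),
     "vec": np.array([0.1, 0.8, 0.1])},
]

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def retrieve(query_vec, max_age_hours=24, k=3):
    """Return the k most similar records no older than max_age_hours."""
    cutoff = datetime.now(timezone.utc) - timedelta(hours=max_age_hours)
    fresh = [r for r in records if r["ts"] >= cutoff]  # real-time filter
    ranked = sorted(fresh, key=lambda r: cosine(query_vec, r["vec"]), reverse=True)
    return ranked[:k]

# The retrieved snippets would be appended to the LLM prompt as grounding context.
print([r["text"] for r in retrieve(np.array([1.0, 0.0, 0.0]))])
```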
But Kinetica says it can go further .
To truly understand data, AI needs context about the structure, relationships and meaning of tables and columns in an enterprise's data.
Kinetica has built native database objects that allow users to define this semantic context for enterprise data. An LLM can use these objects to grasp the referential context it needs to interact with a database in a context-aware manner.
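The snippet below is a generic sketch of the kind of semantic context being described: table and column meanings plus example question-to-SQL pairs, rendered into text an LLM can use when generating context-aware SQL. The structure, class and table names are hypothetical and do not reflect Kinetica's actual context objects.

```python
from dataclasses import dataclass, field

@dataclass
class TableContext:
    table: str
    description: str
    columns: dict[str, str]                    # column name -> meaning
    samples: list[tuple[str, str]] = field(default_factory=list)  # (question, SQL)

    def to_prompt(self) -> str:
        """Render the semantic context as text for the LLM prompt."""
        cols = "\n".join(f"  - {c}: {m}" for c, m in self.columns.items())
        shots = "\n".join(f"  Q: {q}\n  SQL: {s}" for q, s in self.samples)
        return (f"Table {self.table}: {self.description}\n"
                f"Columns:\n{cols}\nExamples:\n{shots}")

# Hypothetical telco table used purely for illustration.
ctx = TableContext(
    table="network_events",
    description="Streaming telemetry from telco network elements.",
    columns={"tower_id": "identifier of the cell tower",
             "packet_loss": "fraction of packets dropped in the interval",
             "event_ts": "event timestamp (UTC)"},
    samples=[("Which towers had packet loss above 5% today?",
              "SELECT tower_id FROM network_events "
              "WHERE packet_loss > 0.05 AND event_ts >= CURRENT_DATE")],
)

# The rendered context is prepended to the user's natural-language question,
# giving the model the referential grounding it needs.
print(ctx.to_prompt())
```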
“Kinetica’s real-time RAG solution, powered by NVIDIA NeMo Retriever microservices, seamlessly integrates LLMs with real-time streaming data insights, overcoming the limitations of traditional approaches,” said Nima Negahban, Cofounder and CEO, Kinetica.
“This innovation helps enterprise clients and analysts gain business insights from operational data, like network data in telcos, using just plain English. All they have to do is ask questions and we handle the rest.”
All the features in Kinetica’s generative AI solution are exposed to developers via a relational SQL API and LangChain plugins.
This means that developers building applications can harness all the enterprise-grade features that come with a relational database.
This includes control over who can access the data (Role-Based Access Control), reduce data
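As a stand-alone illustration of the role-based access control mentioned above, the sketch below checks a role's table grants before a query would be allowed. In a real deployment the database itself enforces this through roles and grants; the roles and table names here are hypothetical.

```python
# Hypothetical mapping of roles to the tables they have been granted.
ROLE_GRANTS = {
    "analyst": {"network_events", "customer_usage"},
    "support": {"network_events"},
}

def can_query(role: str, table: str) -> bool:
    """Return True if the given role has been granted access to the table."""
    return table in ROLE_GRANTS.get(role, set())

assert can_query("analyst", "customer_usage")
assert not can_query("support", "customer_usage")
```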