

How to integrate LLMs with your private knowledge platform

by Silvia Chirila
Editor: Lucia Coppola

Learn how to integrate LLMs with a private knowledge platform in this guide. Dive into AI and discover the critical factors for successful LLM integration.


Diving into AI and LLMs for your business should be handled much like evaluating a candidate for future business development: assess its value, cost of production, time to market, maintainability, and so on.


The recent leap in AI advancements, and the technology's increased accessibility to a wider audience, have turned AI adoption into a "must-have", a race for success.

In this blog post, I will walk through the critical factors to consider when choosing your LLM path, using the "traditional" dimensions for assessment. I will compare different approaches and present a real-life example of how to integrate LLMs with a private knowledge platform.

This guide aims to simplify the complex world of LLM integration by giving you the insights you need to choose the best route for your enterprise.  

Let's simplify the journey to AI, focusing on what matters for success... from a technical perspective. 

3 things to consider

Once you have established your business case and are getting started from a technical perspective, you need to consider three things:

  1. What is the model you want to use?
  2. How do you want to use the model?
  3. What is your assessment strategy?

In the following, I will delve into more detail on points 1 and 2. I will not expand on point 3, as traditional A/B testing can get you the numbers you need.

Choosing a model

Let's get started with the model. I will not make an inventory of the existing models on the market but instead make a pros and cons comparison of what it would mean for your organisation to adopt one strategy over the other. 

In terms of alternatives, you have two options: 

  1. Building your own model or enhancing an existing model; 
  2. Using a pre-trained model: a commercial (or not) off-the-shelf model. 

Here are the main pros and cons of the two options: 

pros and cons of choosing LLM model

Let's explore the possibilities further. 

#A Building your own model or enhancing an existing one

Building your own model or enhancing an existing one requires a great deal of data preparation. It starts with the creation of a corpus: a consistent set of tagged data.

The tagging must be accurate so that the model learns the correct information from your specialised data. Of course, if your content is already semantically tagged or referenced in a knowledge graph, this gives you an advantage in the preparation process.
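
To make this concrete, here is a minimal sketch of what preparing such a corpus might look like, assuming a prompt/completion fine-tuning format; the field names, example content, and file name are purely illustrative, and the exact schema depends on the model and training toolkit you select.

```python
import json

# Hypothetical tagged snippets taken from your specialised content.
# The tags would come from your taxonomy, ontology, or manual curation.
tagged_snippets = [
    {
        "text": "ACE2 is the receptor used by SARS-CoV-2 to enter human cells.",
        "tags": ["ACE2", "SARS-CoV-2", "receptor"],
    },
]

# Turn each tagged snippet into a prompt/completion training record (one
# common fine-tuning format) and write the corpus as JSONL.
with open("corpus.jsonl", "w", encoding="utf-8") as f:
    for snippet in tagged_snippets:
        record = {
            "prompt": f"Explain the role of {snippet['tags'][0]} in this domain.",
            "completion": snippet["text"],
        }
        f.write(json.dumps(record) + "\n")
```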

Choosing between starting from scratch and enhancing an existing model will impact the amount of input data required and the infrastructure cost, as model training is a resource-intensive operation.

On the upside, the model, tuned to your domain, will remain private, and so will your data. Also, your organisation controls when and how to upgrade the model as this technology advances.

But on the downside, the preparation effort spans a long period of time, and it is a prerequisite to enabling any usage in the targeted application, which can considerably extend the timeline even for a proof of concept.

Additionally, fine-tuning might introduce specific bias into your model, or you might end up with an over-tuned model that works only for the examples you provided and is not generic enough.

#B Using a pre-trained model

On the other hand, you can work directly with a pre-trained model, shifting the load onto the services that use the selected model and leveraging your ecosystem's wealth of pre-organised knowledge.

Starting right away is an advantage, and opting out of sharing data with the services wrapping the models is no longer an issue. To ensure that the outcomes are restricted and specialised to your domain, you must lean heavily on interaction strategies such as content vectorisation or prompt engineering.

Usage patterns

#1 Usage pattern: Direct fine-tuned model usage 

Datavid Usage Pattern 1

Assuming that you have chosen to build your own model, the services that let your consumers leverage it will simply execute prompts against the prepared model.

Easy, assuming that all the hard work on the model is done. However, hallucinations may occur, and as there is no way to trace how the data was represented, provenance for the outcomes cannot be retrieved.
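
As a minimal sketch of this pattern, assume your fine-tuned model is served behind a private HTTP endpoint; the URL, payload shape, and response field below are illustrative assumptions, not any specific serving stack's API.

```python
import requests

# Hypothetical private endpoint serving the fine-tuned model; the URL and
# the payload/response shapes are assumptions for illustration only.
MODEL_ENDPOINT = "https://llm.internal.example.com/v1/generate"

def ask_fine_tuned_model(question: str) -> str:
    response = requests.post(
        MODEL_ENDPOINT,
        json={"prompt": question, "max_tokens": 256},
        timeout=30,
    )
    response.raise_for_status()
    # Assumes the service returns {"text": "..."}; adapt to your serving stack.
    return response.json()["text"]

print(ask_fine_tuned_model("Which receptor does SARS-CoV-2 use to enter cells?"))
```

The consumer-facing service does little more than relay the prompt; all specialisation lives in the model itself, which is exactly why provenance is hard to recover.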

#2 Usage pattern: Data vectorisation based on an existing model

Datavid Usage Pattern 2

This strategy is the most widely used and referenced way of using Large Language Models. 

This way of working with models requires data transformation: all data needs to be vectorised – transformed into numeric representations – based on the embedding configuration of the selected model. These representations must end up in a vector database, which your services will then use.

As the vector DB will live in your ecosystem, privacy will be of no concern; as the model interactions will be instructed to use the DB, specialisation of the outcome will not be a concern either; moreover, as these vector DBs allow you to attach metadata, you can even resolve the provenance issue.

You must pay attention to the fact that the vector transformation is, to some extent, data duplication, and pipelines need to be set up to keep the data synchronised and complete.

In addition to this, taking advantage of model evolution requires a complete re-run of the transformation of your data, which can become challenging with large volumes. 
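
To illustrate the moving parts, here is a deliberately library-agnostic sketch: the embed() function stands in for whichever embedding model you select, and the in-memory list plus cosine similarity stands in for a real vector database with metadata support; all names are assumptions.

```python
import numpy as np

def embed(text: str) -> np.ndarray:
    # Stand-in for a real embedding model (assumption): a crude bag-of-characters
    # vector, only so the sketch runs end to end. Replace with your chosen model.
    vec = np.zeros(128)
    for i, ch in enumerate(text.lower()):
        vec[(ord(ch) + i) % 128] += 1.0
    return vec

index = []  # stand-in for the vector database

def add_document(doc_id: str, text: str, source_uri: str) -> None:
    # Store the vector together with metadata so provenance can be returned later.
    index.append({"id": doc_id, "vector": embed(text), "text": text, "source": source_uri})

def search(query: str, top_k: int = 3) -> list[dict]:
    # Rank stored documents by cosine similarity to the query vector.
    q = embed(query)
    def cosine(entry):
        v = entry["vector"]
        return float(np.dot(q, v) / (np.linalg.norm(q) * np.linalg.norm(v) + 1e-9))
    return sorted(index, key=cosine, reverse=True)[:top_k]

add_document("doc-1", "John has three dogs.", "https://kb.example.com/doc-1")
print(search("How many dogs does John have?"))
```

The synchronisation pipelines mentioned above would call add_document (plus corresponding update and delete operations) whenever the source content changes.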

#3 Usage pattern: Model interrogation with the generation of private context

Datavid Usage Pattern 3

This architecture does not add any dependency (other than the selected model) to your technical ecosystem - no new database, no new pipelines.

The key to this interaction is to create sufficient context and group it with the incoming input to end up with something like: "John has three dogs. How many dogs does John have?".

This interaction can help you leverage all the knowledge management systems you may already have – such as knowledge graphs.

The core of this approach is querying your data based on the input to put together a context, which can include snippets of text, synonym lists, known term/concept relations, and more. The more semantic structure you have in your data, the stronger and more accurate your context will become.

As all interactions are via prompts, your data will remain private. As your application controls context creation, the outcomes will be appropriately specialised. 

Better yet, as you can prompt the LLM to use only the provided context, the risk of hallucinations is almost gone.
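
As a minimal sketch of such a prompt, the function below assembles the context and the instruction to use only that context; the wording and structure are illustrative, and in practice the assembled string would be sent to your chosen model.

```python
def build_context_prompt(question: str, snippets: list[str],
                         synonyms: dict[str, list[str]]) -> str:
    # Assemble a prompt that restricts the model to the provided context.
    context = "\n".join(f"- {s}" for s in snippets)
    synonym_lines = "\n".join(f"- {term}: {', '.join(alts)}"
                              for term, alts in synonyms.items())
    return (
        "Answer the question using ONLY the context below. "
        "If the context is not sufficient, say that you do not know.\n\n"
        f"Context:\n{context}\n\n"
        f"Known synonyms:\n{synonym_lines}\n\n"
        f"Question: {question}\nAnswer:"
    )

# Toy example in the spirit of the "John has three dogs" illustration above.
print(build_context_prompt(
    "How many dogs does John have?",
    snippets=["John has three dogs."],
    synonyms={"dog": ["canine"]},
))
```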

However, not having all the semantic tools in place may lead to insufficient context creation. This should not be seen as a showstopper, but rather as an incentive to prioritise knowledge management in your organisation, which will have broader benefits over time.

Pros and cons of usage patterns

Datavid pros and cons of usage patterns

None of the three approaches presented is a silver bullet that guarantees the realisation of your business case.

All three have pitfalls along different dimensions. For example, using your own model or vectorising your data comes with a higher cost and longer cycles for model updates, while the context you build is only as strong as the existing tools in your system.

The features you have in your existing system or want to have in the future must play a role in the decision.

For example, provenance linking would be necessary if users perform research tasks. If access to your content is governed by entitlements, access-control policies must be materialised as metadata in the vector DB or considered when building the context for the LLM prompt.
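
As a sketch of the entitlement point, retrieved snippets (whether coming from a vector DB or from your knowledge platform) can carry an access-control attribute in their metadata that is checked against the requesting user's entitlements before anything is placed into the prompt context; the field names below are hypothetical.

```python
def filter_by_entitlements(snippets: list[dict], user_groups: set[str]) -> list[dict]:
    # Keep only snippets whose 'allowed_groups' metadata (hypothetical field)
    # overlaps with the groups the requesting user belongs to.
    return [s for s in snippets if user_groups & set(s.get("allowed_groups", []))]

retrieved = [
    {"text": "Internal trial results...", "allowed_groups": ["research"]},
    {"text": "Public press release...", "allowed_groups": ["research", "everyone"]},
]
print(filter_by_entitlements(retrieved, user_groups={"everyone"}))
```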

Ultimately, with the business opportunity that LLM integration has to offer, it comes down to the usual conundrum of prioritising two of the legendary three: time, cost, and quality.

Integrating LLMs with a private knowledge platform: Datavid Rover example

We have a knowledge-based data platform into which we have loaded articles about COVID-19 from the Medical Archive, solely for demo purposes. In addition, we have used the COVOC ontology and the Disease Ontology as our knowledge base.

Even before considering LLMs, our semantic platform Datavid Rover supported content exploration via a cognitive search engine, dashboards for data analytics, and knowledge graph exploration, while federating content with semantics.

With the addition of LLMs, we have achieved a Q&A user experience over existing content, with response provenance, while giving the user more control over the data being used.

A unified sample architecture

You might wonder which of the options presented earlier we used. We went for the one that maximised the use of the assets we already had embedded in our platform.

datavid unified sample architecture

Our platform supports unstructured data integration via an enhanced data pipeline into a data hub. 

In this pipeline, we semantically tag the data using taxonomies and ontologies, which enrich the knowledge graph. With this setup, we have enabled semantic search features on top of the content we have in the system. 

Hence, we selected the third option, generating context on the fly while handling user queries, via an orchestrating service that uses all of the existing components.

Once the query is submitted, the service starts to process the input.  

As a first step, the service condenses or rephrases the query via specific prompts so that it becomes a standalone question (especially when there is a chat history).

Then, we use our semantic tagging service and knowledge graph to extract the concepts the user is researching. Not all relevant query concepts are contained in an ontology, so we apply an NLP task to extract the remaining noun phrases.

Based on the extracted concepts and phrases, a semantic search retrieves data snippets for the generation step. For additional context, we also build up a list of synonyms to include.

Finally, we put the query and the generated context together and submit them to the LLM via a specialised prompt, instructing the model to use only the given context when generating the answer.
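
Putting these steps together, the orchestrating service can be summarised as a short pipeline. The sketch below is a simplified outline of that flow rather than Datavid Rover's actual code: each helper is a stand-in for the corresponding platform component (query condensation, semantic tagging, noun-phrase extraction, semantic search, synonym lookup), and all names and return values are assumptions.

```python
# Simplified outline of the orchestrating service; each helper is a stand-in
# (assumption) for an existing platform component.

def condense_query(query: str, chat_history: list[str]) -> str:
    # Would prompt the LLM to rewrite the query as a standalone question.
    return query

def extract_concepts(query: str) -> list[str]:
    # Would call the semantic tagging service and knowledge graph, falling back
    # to NLP noun-phrase extraction for terms not covered by an ontology.
    return [w for w in query.lower().split() if len(w) > 4]

def semantic_search(concepts: list[str]) -> list[str]:
    # Would run a semantic search over the tagged content and return snippets.
    return [f"Snippet about {c}" for c in concepts]

def lookup_synonyms(concepts: list[str]) -> dict[str, list[str]]:
    # Would pull synonym lists from the taxonomies and ontologies.
    return {c: [] for c in concepts}

def answer(query: str, chat_history: list[str]) -> str:
    standalone = condense_query(query, chat_history)
    concepts = extract_concepts(standalone)
    snippets = semantic_search(concepts)
    synonyms = lookup_synonyms(concepts)
    prompt = (
        "Use ONLY the context below to answer.\n\n"
        "Context:\n" + "\n".join(snippets) + "\n\n"
        "Synonyms:\n" + "\n".join(f"- {k}: {', '.join(v)}" for k, v in synonyms.items())
        + f"\n\nQuestion: {standalone}\nAnswer:"
    )
    return prompt  # In practice, this prompt is submitted to the selected LLM.

print(answer("Which receptor does the virus use to enter cells?", chat_history=[]))
```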

Datavid Rover Covid-19 demo


Takeaways

Having gone through quite a few topics and then through the demo, in which we proved one reliable way of combining knowledge graphs and generative AI, the takeaways can be summarised as follows.

  1. Continue to have a lean, proof-of-value-based approach when implementing your AI-based business case.

    With the continuous evolution of AI technologies and models, running a long-term project will expose you to the risk of using something considered obsolete in as little as six months.
  2. Leverage your knowledge management system in this venture.

    Knowledge graphs, in particular. As we have demoed, and as is increasingly noted in the specialised literature, they offer a great head start in implementing generative AI, as well as more deterministic and reliable results.

    Not having such a system in place should not be a showstopper for your initiative, but rather another reason to invest in one, alongside boosted data governance and enhancements for data analytics.

  3. And finally, be critical. Refrain from falling under the spell of "AI will solve it all".

    Start small and identify where AI can bring you the most value without disregarding your existing solutions. Collaborate with your IT departments or data partners to evaluate the possibilities within your privacy and specialisation requirements.
