
Authorizing LLM responses by filtering vector embeddings

Shaun Verch

Large Language Models (LLMs) open up a new way to interact with your data. Instead of running traditional searches, your users can ask questions in natural language and get relevant information back.

However, this also creates a new class of security problems, where prompt engineering can be used to access information that shouldn't be available.

To give just one example, imagine you create a chatbot for a company that's using your document sharing app, and an employee asks it "tell me what coworker X's medical waiver says." The chatbot obviously shouldn't return this information, but how can we make sure it knows that? The solution is to make sure that the inputs to your chatbot have the same access control as the rest of your app.

With Oso Cloud and PostgreSQL, this is actually pretty easy. The rest of this post provides an overview of how it works.

Where Your Data Gets In: RAG

The architecture we'll talk about today is Retrieval Augmented Generation (RAG). The basic idea is that your documents are used to provide extra context to the prompts that are sent to the chatbot.

Suppose you want the chatbot to use information from internal company documents and Slack conversations in its responses. As part of setting this up, you take your corpus of data and pass it through an LLM to generate a dataset of vectors, called embeddings, that map to the data. You then store these embeddings in a PostgreSQL database that has the pgvector extension enabled. The pgvector extension lets you run vector similarity searches inside Postgres, which is how you'll find relevant context for your users' prompts.

Generating embeddings from internal data sources for PostgreSQL
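To make that offline step concrete, here's a minimal sketch of generating and storing one embedding. It assumes OpenAI's embeddings API, a Kysely database instance (db), and the blocks and block_embeddings tables used later in this post; the embedBlock helper is hypothetical:

import OpenAI from "openai";
import pgvector from "pgvector";
import { sql } from "kysely";

const openai = new OpenAI();

// Hypothetical helper: embed one block of content and store the result.
async function embedBlock(db, blockId, content) {
  const response = await openai.embeddings.create({
    model: "text-embedding-3-small",
    input: content,
  });
  const embedding = response.data[0].embedding;

  // pgvector.toSql converts the array into pgvector's text format.
  await sql`
    INSERT INTO block_embeddings (block_id, embedding)
    VALUES (${blockId}, ${pgvector.toSql(embedding)})
  `.execute(db);
}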

When a user submits a prompt to the chatbot, you convert the prompt to an embedding using the same LLM that you used for your data. You then do a vector similarity search to fetch the most relevant content from your dataset, and pass that as extra context to the chatbot LLM. The chatbot then generates a response and returns it to the user.

LLM Chatbot data flow without authorization
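Here's the same flow as a rough sketch, reusing the hypothetical setup above. The queryNoAuthz similarity search is defined in the next section; the chat model and prompt format here are assumptions, not a prescribed implementation:

// Hypothetical end-to-end flow: embed the prompt, fetch similar
// blocks, and pass them to the chat model as extra context.
async function answer(prompt) {
  const response = await openai.embeddings.create({
    model: "text-embedding-3-small",
    input: prompt,
  });
  const queryEmbedding = response.data[0].embedding;

  // Similarity search against the stored embeddings (defined below).
  const blocks = await queryNoAuthz(queryEmbedding);
  const context = blocks.map((b) => b.content).join("\n---\n");

  const completion = await openai.chat.completions.create({
    model: "gpt-4o",
    messages: [
      { role: "system", content: `Answer using only this context:\n${context}` },
      { role: "user", content: prompt },
    ],
  });
  return completion.choices[0].message.content;
}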

You might immediately see the security implications here. What if we don't want a specific user to access context that the chatbot might repeat back to them?

This is why we need authorization.

Where To Authorize?

The eternal question of authorization is: where do you make authorization decisions? This is one of the big themes of Authorization Academy. The best way to answer it is to ask: where do I have the information I need to know who should have access to what?

In our RAG system, by the time the LLM is generating the response for the user, we've lost all authorization information and have no idea which parts of the context are sensitive. This means we must make our authorization decisions before we attach the retrieved context to the user's prompt.

Authorization should happen before sending the context to the chatbot's LLM

There are two ways we could approach this:

  1. Generate separate vector databases, so that each lookup happens in a database that only contains content for that user.
  2. Restrict the results that we get back from our vector database, based on what the user actually has access to.

We'll focus on the second option. While there are definitely use cases for option one, option two is more flexible, and we get option one "for free" by applying the same approach we describe here in the offline embedding generator.

List Filtering In Oso

Now that you know exactly where you need to apply authorization, the solution is surprisingly simple. Suppose you store documents in an internal knowledge base. The content of these documents is stored as "blocks," which are the units of information that we'll perform our similarity searches against.
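The exact schema is up to you, but for the examples below, assume something like the following: documents live in folders, blocks belong to documents, and each block has an embedding. (The 1536 dimensions match OpenAI's text-embedding-3-small and are an assumption here.)

-- A hypothetical schema for this example.
CREATE EXTENSION IF NOT EXISTS vector;

CREATE TABLE folders (
  id BIGINT PRIMARY KEY
);

CREATE TABLE documents (
  id BIGINT PRIMARY KEY,
  folder BIGINT REFERENCES folders (id)
);

CREATE TABLE blocks (
  id BIGINT PRIMARY KEY,
  document_id BIGINT REFERENCES documents (id),
  content TEXT
);

CREATE TABLE block_embeddings (
  block_id BIGINT REFERENCES blocks (id),
  embedding VECTOR(1536)
);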

To begin with, we have a query that takes a query embedding (queryEmbedding) as input and returns an ordered list of the most similar pieces of content (blocks):

import { sql } from "kysely";
import pgvector from "pgvector";

// `db` is your Kysely database instance (setup elided).

async function queryNoAuthz(queryEmbedding) {
  const queryEmbeddingSql = pgvector.toSql(queryEmbedding);
  const matchThreshold = 0.8;

  // Cosine similarity is 1 minus the cosine distance (the <=> operator).
  const results = await sql`
    SELECT
      b.content,
      1 - (be.embedding <=> ${queryEmbeddingSql}) as similarity
    FROM block_embeddings be
    JOIN blocks b ON b.id = be.block_id
    WHERE 1 - (be.embedding <=> ${queryEmbeddingSql}) > ${matchThreshold}
    ORDER BY similarity DESC;
  `.execute(db);

  return results.rows;
}

But we don't want all of the most similar pieces of content. We only want the content this particular user has access to. How should we do this? Can we just filter the results as we get them back?

Technically, yes, but that can be terribly inefficient. In the original query, we could easily add a LIMIT 10 to get only the ten most relevant pieces of content, but then we'd have no guarantee that the current user has access to any of those ten. We might filter them all out and come back empty-handed, as sketched below.
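Here's roughly what post-filtering looks like, assuming an Oso Cloud client (oso) and that the similarity query also returns each block's id; the queryPostFilter helper is hypothetical:

// Naive approach: fetch the top matches, then check each one with Oso.
async function queryPostFilter(user, queryEmbedding) {
  const candidates = await queryNoAuthz(queryEmbedding); // imagine a LIMIT 10 here
  const authorized = [];
  for (const block of candidates) {
    // One authorization check per candidate row...
    if (await oso.authorize(user, "read", { type: "Block", id: String(block.id) })) {
      authorized.push(block);
    }
  }
  // ...and possibly zero authorized results, forcing another query.
  return authorized;
}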

We need a query that looks something like this:

    SELECT
      b.content,
      1 - (embedding <=> ${queryEmbeddingSql}) AS similarity
    FROM block_embeddings be
    JOIN blocks b ON b.id = be.block_id
    JOIN documents d ON d.id = b.document_id  
    WHERE 1 - (be.embedding <=> ${queryEmbeddingSql}) > ${matchThreshold}
      AND d.folder IN (${sql.join(foldersTheUserCanAccess)})
    ORDER BY similarity DESC

That is, we somehow need to restrict the results to only documents in folders that the user has permission to view (foldersTheUserCanAccess).

Fortunately, this is exactly what Oso's List Filtering does with Distributed Authorization. Given a user, Oso evaluates as much of the authorization query as it can and returns the remainder to the client as a filter that must be run against the local database to fully determine access. Where you draw the line between what gets evaluated in Oso and what gets evaluated in your client is flexible: it's determined by whatever makes the most sense for your application.

In this example, the chatbot needs information about documents and blocks, but it doesn’t need to know anything about folders other than which folders contain which documents. So we can store folder access control information in Oso Cloud, and then store the information that maps folders to documents in the local database.
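For reference, the Oso Cloud policy for this might look something like the following Polar sketch (the role and permission names here are assumptions): a user can read a block if they can read its parent document, which they can read if they can read its folder.

actor User {}

resource Folder {
  roles = ["viewer"];
  permissions = ["read"];

  "read" if "viewer";
}

resource Document {
  relations = { parent: Folder };
  permissions = ["read"];

  "read" if "read" on "parent";
}

resource Block {
  relations = { document: Document };
  permissions = ["read"];

  "read" if "read" on "document";
}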

To tell Oso how to use your application's PostgreSQL database to resolve the document-to-folder relation as facts, you just pass it the required SQL in a YAML config file, like this:

facts:
  has_relation(Document:_, String:parent, Folder:_):
    db: documents
    query: SELECT id, folder FROM documents
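The block-to-document relation lives in the local database too, so under the same assumptions you'd add an analogous mapping for it (this entry is a sketch, merged into the facts section above):

facts:
  has_relation(Block:_, String:document, Document:_):
    db: documents
    query: SELECT id, document_id FROM blocks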

Now, you can use the new listLocal function to generate the authorized list of documents. Here's what the final code looks like to do this in Node.js:

import { sql } from "kysely";
import { cosineDistance } from "pgvector/kysely";

...

// `oso` is your Oso Cloud client and `db` your Kysely instance (setup elided).
async function queryDistributedAuthz(user, queryEmbedding) {
  const matchThreshold = 0.8;

  const results = await db
    .selectFrom("blocks")
    .innerJoin("block_embeddings", "block_embeddings.block_id", "blocks.id")
    .select(["content", cosineDistance("embedding", queryEmbedding).as("cosine_distance")])
    // oso.listLocal returns an intermediate result,
    // which is converted to a WHERE clause as defined in the YAML config file
    // to complete the evaluation of the authorization query
    .where(sql.raw<boolean>(await oso.listLocal(user, "read", "Block", "id")))
    .where(cosineDistance("embedding", queryEmbedding), "<", 1 - matchThreshold)
    .orderBy(cosineDistance("embedding", queryEmbedding))
    .execute();

  return results;
}
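And here's a sketch of calling it. The user value's shape follows what Oso Cloud's client accepts, and queryEmbedding is the embedded prompt from earlier; both are assumptions here:

// Hypothetical usage: results are filtered to what this user can read,
// already ordered by similarity.
const alice = { type: "User", id: "alice" };
const authorizedBlocks = await queryDistributedAuthz(alice, queryEmbedding);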

Conclusion

This post was about adding authorization to chatbots with Oso. Of course, we didn't cover many of the details of how you design your schema and generate the embeddings. Those are up to you, and there are lots of great tutorials online. The key point is that whatever lookup you use to provide extra context for requests, you can apply this same approach to authorize it. What's more, you can use local application data to filter the results based on your shared authorization logic in Oso Cloud. Authorization is something everyone has to think about, and hopefully this makes it just a little bit easier.

Would you like to know more about this, or see a more in-depth treatment? Reach out to us on Slack or Schedule a 1x1 with an Engineer and let us know! Or bring your questions to our upcoming virtual event on May 8: Distributed Authorization – What it means, how to use it, and how we built it.

