Amazon Bedrock
Amazon Bedrock is a fully managed service that offers a choice of high-performing foundation models (FMs) from leading AI companies like AI21 Labs, Anthropic, Cohere, Meta, Mistral AI, Stability AI, and Amazon. Each model is accessible through a common API which implements a broad set of features to help build generative AI applications with security, privacy, and responsible AI in mind.
This guide will walk you through an example using the Amazon Bedrock SDK with vecs. We will create embeddings using the Amazon Titan Embeddings G1 – Text v1.2 (amazon.titan-embed-text-v1) model, insert these embeddings into a PostgreSQL database using vecs, and then query the collection to find the sentences most similar to a given query sentence.
Create an Environment
First, you need to set up your environment. You will need Python 3.7+ with the vecs and boto3 libraries installed.
You can install the necessary Python libraries using pip:
```bash
pip install vecs boto3
```
You'll also need:

- A Postgres database (vecs connects with a standard Postgres connection string)
- AWS credentials with access to Amazon Bedrock
Create Embeddings
Next, we will use Amazon's Titan Embeddings G1 – Text v1.2 model to create embeddings for a set of sentences.
```python
import boto3
import vecs
import json

client = boto3.client(
    'bedrock-runtime',
    region_name='us-east-1',
    # Credentials from your AWS account
    aws_access_key_id='<replace_your_own_credentials>',
    aws_secret_access_key='<replace_your_own_credentials>',
    aws_session_token='<replace_your_own_credentials>',
)

dataset = [
    "The cat sat on the mat.",
    "The quick brown fox jumps over the lazy dog.",
    "Friends, Romans, countrymen, lend me your ears",
    "To be or not to be, that is the question.",
]

embeddings = []

for sentence in dataset:
    # invoke the embeddings model for each sentence
    response = client.invoke_model(
        body=json.dumps({"inputText": sentence}),
        modelId="amazon.titan-embed-text-v1",
        accept="application/json",
        contentType="application/json"
    )
    # collect the embedding from the response
    response_body = json.loads(response["body"].read())
    # add the embedding to the embedding list
    embeddings.append((sentence, response_body.get("embedding"), {}))
```
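For reference, the model's response body is a streaming JSON document. Below is a minimal sketch of the parsing step in isolation, using an in-memory stream to stand in for `response["body"]` — the embedding values and token count are made up for illustration (the real model returns a 1536-dimensional vector):

```python
import io
import json

# Stand-in for response["body"]: invoke_model returns a streaming body whose
# JSON payload carries the vector under the "embedding" key.
fake_body = io.BytesIO(json.dumps({
    "embedding": [0.12, -0.03, 0.57],  # truncated for illustration
    "inputTextTokenCount": 6,
}).encode("utf-8"))

# Same parsing as in the loop above: read the stream, decode the JSON,
# and pull out the embedding list.
response_body = json.loads(fake_body.read())
embedding = response_body.get("embedding")

print(len(embedding))  # 3 in this sketch; 1536 with the real model
```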
Store the Embeddings with vecs
Now that we have our embeddings, we can insert them into a PostgreSQL database using vecs.
```python
import vecs

DB_CONNECTION = "postgresql://<user>:<password>@<host>:<port>/<db_name>"

# create vector store client
vx = vecs.Client(DB_CONNECTION)

# create a collection named 'sentences' with 1536 dimensional vectors
# to match the default dimension of the Titan Embeddings G1 - Text model
sentences = vx.get_or_create_collection(name="sentences", dimension=1536)

# upsert the embeddings into the 'sentences' collection
sentences.upsert(records=embeddings)

# create an index for the 'sentences' collection
sentences.create_index()
```
Querying for Most Similar Sentences
Now, we query the sentences collection to find the sentences most similar to a sample query sentence. First, we create an embedding for the query sentence. Then we query the collection we created earlier for the most similar records.
```python
query_sentence = "A quick animal jumps over a lazy one."

# create vector store client
vx = vecs.Client(DB_CONNECTION)

# create an embedding for the query sentence
response = client.invoke_model(
    body=json.dumps({"inputText": query_sentence}),
    modelId="amazon.titan-embed-text-v1",
    accept="application/json",
    contentType="application/json"
)

response_body = json.loads(response["body"].read())

query_embedding = response_body.get("embedding")

# query the 'sentences' collection for the most similar sentences
results = sentences.query(
    data=query_embedding,
    limit=3,
    include_value=True
)

# print the results
for result in results:
    print(result)
```
This returns the 3 most similar records along with their distance to the query vector.
```
('The quick brown fox jumps over the lazy dog.', 0.27600620558852)
('The cat sat on the mat.', 0.609986272479202)
('To be or not to be, that is the question.', 0.744849503688346)
```
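To interpret those numbers: vecs collections use cosine distance as the default similarity measure, which is 1 minus the cosine of the angle between two vectors — 0 means the vectors point in the same direction, and larger values mean less similar. A rough self-contained sketch of the computation, using made-up 3-dimensional vectors for illustration:

```python
import math

def cosine_distance(a, b):
    # cosine distance = 1 - (a . b) / (|a| * |b|)
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return 1.0 - dot / (norm_a * norm_b)

query = [1.0, 0.0, 0.0]
close = [0.9, 0.1, 0.0]  # nearly the same direction -> small distance
far = [0.0, 1.0, 0.0]    # orthogonal -> distance 1.0

print(cosine_distance(query, close))  # small value, near 0
print(cosine_distance(query, far))    # 1.0
```

This is why "The quick brown fox..." scores the lowest distance above: its embedding points in nearly the same direction as the query's.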