Getting Started with AWS Bedrock

Originally published at alexypulivelil.medium.com · 2 min read

Amazon Bedrock is a fully managed service that lets developers build and scale generative AI applications using foundation models from Amazon and other providers. In this guide, I’ll walk you through getting started with AWS Bedrock and invoking the Amazon Titan Text Lite v1 model for text generation.

Prerequisites

Before you begin, ensure that you have the following:

  • AWS account with access to Amazon Bedrock (for testing, this guide uses the AmazonBedrockFullAccess managed policy)
  • AWS CLI installed and configured with appropriate permissions
  • Boto3 (AWS SDK for Python) installed on your machine
    You can install Boto3 using:

    pip install boto3

Step 1: Set Up AWS Credentials
If you haven’t already configured your AWS credentials, run:

aws configure

Enter your AWS Access Key ID, AWS Secret Access Key, and your preferred region where Amazon Bedrock is available (e.g., us-east-1).

Step 2: Initialize the Bedrock Client
To interact with Amazon Bedrock, we need to initialise the AWS Bedrock runtime client using Boto3:

import boto3
import json

# Initialize Bedrock client
bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

Step 3: Invoke Amazon Titan Text Lite v1
Let’s create a simple script to invoke Amazon Titan Text Lite v1 for generating a text response.

# Define the input text
question = "What is the capital of India?"

# Prepare the payload
payload = {
    "inputText": question,
    "textGenerationConfig": {
        "maxTokenCount": 100,
        "temperature": 0.5,
        "topP": 0.9
    }
}

# Invoke Titan Text Lite v1
response = bedrock.invoke_model(
    modelId="amazon.titan-text-lite-v1",
    contentType="application/json",
    accept="application/json",
    body=json.dumps(payload)
)

# Parse response
result = json.loads(response["body"].read().decode("utf-8"))

# Extract and print the output text
if "results" in result and isinstance(result["results"], list):
    print("Answer:", result["results"][0]["outputText"].strip())
else:
    print("Unexpected response format:", result)

Step 4: Running the Script
Save the script as invoke_bedrock.py and run it using:

python invoke_bedrock.py

Expected output (model responses are non-deterministic, so yours may differ slightly):
Answer: New Delhi is the capital of India. It is situated in country's federal district, which is known as the National Capital Territory of Delhi (NCT), and is located in the Indian subcontinent.

Step 5: Fine-tuning Model Parameters

Amazon Titan models allow temperature and topP tuning for response variation:

  • temperature: Controls randomness (lower = more deterministic, higher = more creative)
  • topP: Controls nucleus-sampling probability mass (higher = more diverse responses)

Adjust these values in the textGenerationConfig section for different results.
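To experiment with these settings, a small helper makes it easy to generate variants of the same request. This is a minimal sketch; the function name `build_payload` is my own, not part of the Bedrock API:

```python
import json

def build_payload(prompt, temperature=0.5, top_p=0.9, max_tokens=100):
    """Build a Titan text-generation payload with tunable sampling settings."""
    return {
        "inputText": prompt,
        "textGenerationConfig": {
            "maxTokenCount": max_tokens,
            "temperature": temperature,
            "topP": top_p,
        },
    }

# Low temperature: favour the most likely (deterministic) answer.
factual = build_payload("What is the capital of India?", temperature=0.1)

# Higher temperature and topP: allow more varied, creative output.
creative = build_payload("Write a short poem about Delhi.", temperature=0.9, top_p=0.95)

print(json.dumps(factual["textGenerationConfig"]))
```

Pass the resulting dict to `json.dumps` as the `body` of `invoke_model`, exactly as in Step 3.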

Conclusion
You have successfully invoked the Amazon Titan Text Lite v1 model using AWS Bedrock! You can now integrate this into your applications for chatbots, summarisation, and content generation.

Happy coding!
