AWS late on Thursday added a new prompt optimization tool to Amazon Bedrock, its fully managed service for building, deploying, and scaling generative AI applications.
The tool, Amazon Bedrock Advanced Prompt Optimization, is accessible through the Bedrock console and is designed to automatically refine prompts for better accuracy, consistency, and efficiency across multiple large language models, the hyperscaler wrote in a blog post.
The tool works by first evaluating prompts against user-defined datasets and metrics, then rewriting them for up to five inference models. It then benchmarks the optimized versions against the originals across those models to help developers identify the best-performing configuration for specific workloads, AWS said.
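The evaluate-rewrite-benchmark loop described above can be sketched in plain Python. Everything in the sketch is a hypothetical stand-in: AWS has not published the tool's internals, so `score`, `optimize_for_model`, and the model IDs below are illustrative placeholders, not real Bedrock APIs or identifiers.

```python
# Hypothetical sketch of the optimize-then-benchmark workflow described above.
# score() and optimize_for_model() stand in for Bedrock's internal evaluation
# and rewriting steps; the model IDs used with it are illustrative only.

def score(prompt: str, model_id: str, dataset: list[dict]) -> float:
    """Stand-in metric: fraction of dataset rows whose expected keyword
    appears in the prompt (a real metric would invoke the model)."""
    hits = sum(1 for row in dataset if row["keyword"] in prompt)
    return hits / len(dataset)

def optimize_for_model(prompt: str, model_id: str) -> str:
    """Stand-in rewrite step: a real optimizer would restructure the prompt
    per target model; here we just tag it with the model's name."""
    return f"{prompt}\n\n[optimized for {model_id}]"

def best_configuration(prompt: str, model_ids: list[str], dataset: list[dict]):
    """Benchmark the optimized prompt against the original on each model
    and return the best (model_id, prompt, score) configuration."""
    results = []
    for model_id in model_ids[:5]:  # the tool targets up to five models
        for candidate in (prompt, optimize_for_model(prompt, model_id)):
            results.append((model_id, candidate,
                            score(candidate, model_id, dataset)))
    return max(results, key=lambda r: r[2])
```

The point of the sketch is the shape of the loop: every (model, prompt-variant) pair is scored against the same user-defined dataset, and the top scorer is the configuration a developer would ship.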
The tool is currently generally available in multiple AWS regions, including US East, US West, Mumbai, Seoul, Singapore, Sydney, Tokyo, Canada (Central), Frankfurt, Ireland, London, Zurich, and São Paulo.
Enterprise customers will be billed based on the Bedrock model inference tokens consumed during the optimization process, at the same per-token rates applied to standard Bedrock inference workloads, the company said.
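Because billing is per token consumed during optimization, the cost of a run scales with how many tokens the rewrite-and-benchmark passes burn. A back-of-envelope estimate, using made-up rates (actual Bedrock pricing varies by model and region):

```python
# Hypothetical cost estimate for one optimization run. The per-token rates
# below are illustrative placeholders, not actual Bedrock prices.
INPUT_RATE = 0.003 / 1000    # assumed $ per input token
OUTPUT_RATE = 0.015 / 1000   # assumed $ per output token

def optimization_cost(input_tokens: int, output_tokens: int) -> float:
    """Cost of the inference tokens consumed while optimizing and
    benchmarking, billed at standard inference rates."""
    return input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE

# e.g. 200k input and 50k output tokens spread across five models' runs
cost = optimization_cost(200_000, 50_000)  # ≈ $1.35 at the assumed rates
```

Benchmarking against up to five models multiplies the token volume accordingly, so the same prompt optimized for fewer targets would cost proportionally less.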