October 26, 2022

How Can I Optimize Amazon RDS Performance?

Amazon Relational Database Service (Amazon RDS) makes it simple to set up, operate, and scale databases in the cloud. It is one of Amazon’s most popular services, but did you know that there are some simple ways to optimize Amazon RDS performance that can take your implementation to the next level? 

Let’s take a deeper look at Amazon RDS, and how you can make the most of your implementation without overspending. 

What Is Amazon RDS? 

Amazon RDS is a collection of managed services designed to make it easy to set up, operate, and scale databases in the cloud. Amazon RDS does not function as a database but rather is a service designed to manage relational databases. It facilitates an array of management tasks, such as deployment and maintenance of relational databases in the cloud, data migration, backup, recovery, and patching. 

Users can select from seven supported database engines: 

  • Amazon Aurora with MySQL compatibility
  • Amazon Aurora with PostgreSQL compatibility
  • MySQL
  • MariaDB
  • PostgreSQL
  • Oracle
  • SQL Server

How Does Amazon RDS Work? 

Amazon RDS is controlled by administrators through the AWS Management Console, Amazon RDS API calls, or the AWS Command Line Interface. The administrator simply selects the appropriate instance type for their use case and the level of customization that they need for their Amazon RDS implementation. 
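
For illustration, here is a minimal sketch of the API route using boto3, the AWS SDK for Python. It assumes your AWS credentials and region are already configured, and simply lists the DB instances in the account:

    import boto3

    rds = boto3.client("rds")

    # Print each DB instance with its engine and current status.
    for db in rds.describe_db_instances()["DBInstances"]:
        print(db["DBInstanceIdentifier"], db["Engine"], db["DBInstanceStatus"])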

Amazon RDS is useful because it is a managed service. This reduces your burden under the AWS shared responsibility model, and enables you to focus on your core line of business.

How Can I Optimize My Amazon RDS Performance?

As with any other AWS service, the key to making the most of Amazon RDS is optimization. This is partly about improving the performance of your RDS implementation, but more importantly about reducing the cost of running it. AWS generally uses a “pay for what you use” model, so even small improvements in efficiency can have a big impact on your bill at the end of the month.


Metrics & Performance Monitoring 

CloudWatch – helping you understand your usage

The first step to optimizing Amazon RDS is understanding how your implementation is actually performing. Amazon CloudWatch enables you to track metrics in near real time, and to review historical data, so you can see exactly how your Amazon RDS implementation is performing. 

Amazon RDS publishes a wide range of metrics to CloudWatch, including CPU utilization, database connections, freeable memory, free storage space, and read/write IOPS.

As useful as this data is, one of CloudWatch’s most powerful features is the ability to set up alarms for your CloudWatch metrics. For example, you might want to be warned if your CPU utilization for an instance exceeds a certain percentage. This enables you to get automated alerts about potential problems before they can become a crisis. 
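
As a sketch, an alarm like the one described above could be created with boto3 as follows; the instance identifier and SNS topic ARN are placeholders:

    import boto3

    cloudwatch = boto3.client("cloudwatch")

    # Alarm when average CPU utilization stays above 80% for two
    # consecutive 5-minute periods; notifications go to an SNS topic.
    cloudwatch.put_metric_alarm(
        AlarmName="rds-high-cpu",
        Namespace="AWS/RDS",
        MetricName="CPUUtilization",
        Dimensions=[{"Name": "DBInstanceIdentifier", "Value": "my-db-instance"}],
        Statistic="Average",
        Period=300,
        EvaluationPeriods=2,
        Threshold=80.0,
        ComparisonOperator="GreaterThanThreshold",
        AlarmActions=["arn:aws:sns:us-east-1:123456789012:ops-alerts"],
    )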

CloudWatch is an essential step towards understanding what does and doesn’t work in your implementation, but you’ll need more tools to truly optimize your Amazon RDS setup. 

Performance Insights – understand what to improve

CloudWatch does a great job of warning you that there’s a problem, but it doesn’t always make that problem easy to troubleshoot. This is where Amazon RDS Performance Insights comes in. Performance Insights uses lightweight data collection methods that enable you to see which SQL statements are causing heavy load, and why. 

The solution is great because it requires no maintenance or configuration, and provides seven days of free performance history retention. It’s currently available for Amazon Aurora (PostgreSQL- and MySQL-compatible editions), Amazon RDS for PostgreSQL, MySQL, MariaDB, SQL Server, and Oracle.
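
Performance Insights is enabled per instance. One way to turn it on with the free retention tier, sketched with boto3 (the instance identifier is a placeholder):

    import boto3

    rds = boto3.client("rds")

    # Enable Performance Insights with the free 7-day retention period.
    rds.modify_db_instance(
        DBInstanceIdentifier="my-db-instance",
        EnablePerformanceInsights=True,
        PerformanceInsightsRetentionPeriod=7,
        ApplyImmediately=True,
    )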

Enhanced Monitoring Metrics

If you need a deeper overview of your RDS implementation, then Enhanced Monitoring is the tool for you. It delivers real-time metrics for the operating system that your DB instance runs on, directly to your Amazon CloudWatch Logs account. 

This is particularly useful if you need to troubleshoot performance issues that stem from the actual server that your RDS instance is running on. However, it should be noted that Enhanced Monitoring can increase the overall cost of your implementation if you exceed the free tier provided by Amazon CloudWatch Logs. You can find more details about the exact cost calculations in the AWS documentation.
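
Enhanced Monitoring is also switched on per instance. A minimal sketch with boto3, assuming an IAM role that permits RDS to publish to CloudWatch Logs (both identifiers below are placeholders):

    import boto3

    rds = boto3.client("rds")

    # Turn on Enhanced Monitoring at 60-second granularity. The role must
    # allow RDS to publish OS metrics to CloudWatch Logs.
    rds.modify_db_instance(
        DBInstanceIdentifier="my-db-instance",
        MonitoringInterval=60,
        MonitoringRoleArn="arn:aws:iam::123456789012:role/rds-monitoring-role",
        ApplyImmediately=True,
    )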

Performance Boosting Tools

Storage Reduction – fighting storage over-provisioning

A common challenge for Amazon RDS users is storage over-provisioning. This typically happens because a database has been expanded and then reduced in size, leaving a large amount of allocated storage unused. There is no straightforward way to shrink the storage allocated to a smaller database, which can mean that you are left with a lot of unused space that you’re still paying for. 

The best way to get around this is to back up and restore your database. This enables you to launch a new instance that contains the same data with less storage allocated to it. It’s worth extending your CloudWatch alerts to cover unused storage (the FreeStorageSpace metric), so you can spot over-provisioned instances early. 
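
Note that a native RDS snapshot cannot be restored into less allocated storage than the source had, so the usual route is a logical dump and restore into a freshly created, smaller instance. A sketch of creating that target instance with boto3 (all names and sizes are placeholders):

    import boto3

    rds = boto3.client("rds")

    # Create a smaller target instance; once it is available, migrate the
    # data with the engine's native tools (e.g. pg_dump/pg_restore or
    # mysqldump), repoint the application, and retire the old instance.
    rds.create_db_instance(
        DBInstanceIdentifier="my-db-small",
        DBInstanceClass="db.t3.medium",
        Engine="postgres",
        AllocatedStorage=100,  # new, smaller allocation in GiB
        MasterUsername="dbadmin",
        MasterUserPassword="change-me",  # placeholder; use a secret in practice
    )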

Read replicas – split the load 

One of the most powerful features of RDS is read replicas. This feature enables you to split the load of your database across multiple DB instances. It even allows you to scale read and write DB operations independently of each other, giving you granular control over your DB load.

This has many practical uses. For example, you could create several replicas of a single DB instance to deal with high-volume application read traffic. This enables you to spread these requests across multiple copies of your data, improving reliability and increasing aggregate read throughput. 
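
Creating a replica is a single API call; a minimal boto3 sketch (both identifiers are placeholders):

    import boto3

    rds = boto3.client("rds")

    # Create a read replica of an existing source instance; read-heavy
    # traffic can then be pointed at the replica's endpoint.
    rds.create_db_instance_read_replica(
        DBInstanceIdentifier="my-db-replica",
        SourceDBInstanceIdentifier="my-db-instance",
    )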

If you anticipate needing to lean heavily on RDS read replicas, it may be worth looking into Amazon Aurora. This database engine uses an SSD-backed storage layer purpose-built for database workloads, meaning that Amazon Aurora replicas share the same underlying storage as the source instance. This reduces costs and eliminates the need to copy data to replica nodes.

Multi-AZ – is it necessary for low-priority databases? 

For the vast majority of production use cases, we would recommend that you enable Multi-AZ for your Amazon RDS implementation. It is a high-availability deployment mode, which ensures that there is at least one instance on standby in case the primary DB instance fails. The recently introduced Multi-AZ DB cluster deployment goes further, keeping two standby DB instances ready that also serve as read replicas at the same time. For any mission-critical, or even merely important, database, this provides a layer of protection against downtime that is invaluable. 

However, turning it on for low-priority databases or backups could add up to an extra cost that you don’t need to bear. Determining whether to use Multi-AZ will require case-by-case evaluation to understand the importance of a particular database. 
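
Converting an existing single-AZ instance is one modify call; a minimal boto3 sketch (the identifier is a placeholder):

    import boto3

    rds = boto3.client("rds")

    # Convert an existing instance to Multi-AZ; without ApplyImmediately,
    # the change is applied during the next maintenance window.
    rds.modify_db_instance(
        DBInstanceIdentifier="my-db-instance",
        MultiAZ=True,
    )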

Allocate the correct amount of RAM to your Amazon RDS DB instances

As a general rule, you should allocate enough RAM so that your working set resides almost entirely in memory. The working set refers to the data and indexes that are most frequently used on your instance, so it can be a bit of a moving target. 

To determine whether your working set is almost all in memory, use Amazon CloudWatch to check the ReadIOPS metric while the DB instance is under load. The value of ReadIOPS should be small and stable. In some cases, upgrading your instance to one with more RAM causes a significant drop in ReadIOPS, which indicates that your working set was not entirely in memory before. Aim for consistently low ReadIOPS with your working set held in memory; that way, your DB instance will perform at its best.
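
One way to pull that metric for a quick check, sketched with boto3 (the instance identifier is a placeholder):

    from datetime import datetime, timedelta, timezone

    import boto3

    cloudwatch = boto3.client("cloudwatch")

    # Average ReadIOPS over the last hour, in 5-minute buckets. Small,
    # stable values under load suggest the working set fits in memory.
    now = datetime.now(timezone.utc)
    stats = cloudwatch.get_metric_statistics(
        Namespace="AWS/RDS",
        MetricName="ReadIOPS",
        Dimensions=[{"Name": "DBInstanceIdentifier", "Value": "my-db-instance"}],
        StartTime=now - timedelta(hours=1),
        EndTime=now,
        Period=300,
        Statistics=["Average"],
    )
    for point in sorted(stats["Datapoints"], key=lambda p: p["Timestamp"]):
        print(point["Timestamp"], round(point["Average"], 1))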

Amazon RDS Basic Operational Guidelines

As a starting point, you need to ensure that you are following the Amazon RDS basic operational guidelines. The Amazon RDS Service Level Agreement requires that you follow these guidelines:

  • Ensure that you are using metrics to monitor your RDS usage. 
  • Scale up your DB instances as you approach storage capacity limits; you need some buffer in storage to accommodate unexpected spikes in demand. 
  • Enable automatic backups and schedule them to occur when a backup is least disruptive to your database usage.
  • Ensure that you have sufficient I/O capacity in your database instance.
  • If your application caches the Domain Name System (DNS) data of your DB instances, set a time-to-live (TTL) value of less than 30 seconds. Caching DNS data for too long can lead to unexpected failures as your application tries to connect to an IP address that is no longer in use.
  • Test failover to understand how long the process takes for your particular use case and to ensure that any application that needs your database can automatically connect to a new DB instance after failover occurs (see the sketch below).
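
For the failover test in the last point, a forced failover on a Multi-AZ instance can be triggered as follows, sketched with boto3 (the identifier is a placeholder); time how long the switch takes and confirm your application reconnects:

    import boto3

    rds = boto3.client("rds")

    # Force a failover to the standby; only valid for Multi-AZ instances.
    rds.reboot_db_instance(
        DBInstanceIdentifier="my-db-instance",
        ForceFailover=True,
    )
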
Want to make the most of Amazon RDS?

To determine the best way to optimize your Amazon RDS implementation, reach out to Cloudvisor today, and we’ll do the rest.
