AWS Startup

Introducing AWS Auto Scaling

Today we are making it easier for you to use the Auto Scaling features of multiple AWS services from a single user interface with the introduction of AWS Auto Scaling. This new service unifies and builds on our existing, service-specific, scaling features. It operates on any desired EC2 Auto Scaling groups, EC2 Spot Fleets, ECS tasks, DynamoDB tables, DynamoDB Global Secondary Indexes, and Aurora Replicas that are part of your application, as described by an AWS CloudFormation stack or in AWS Elastic Beanstalk (we’re also exploring some other ways to flag a set of resources as an application for use with AWS Auto Scaling).

You no longer need to set up alarms and scaling actions for each resource and each service. Instead, you simply point AWS Auto Scaling at your application and select the services and resources of interest. Then you select the desired scaling option for each one, and AWS Auto Scaling will do the rest, helping you to discover the scalable resources and then creating a scaling plan that addresses the resources of interest.

If you have tried to use any of our Auto Scaling options in the past, you undoubtedly understand the trade-offs involved in choosing scaling thresholds. AWS Auto Scaling gives you a variety of scaling options: You can optimize for availability, keeping plenty of resources in reserve in order to meet sudden spikes in demand. You can optimize for costs, running close to the line and accepting the possibility that you will tax your resources if that spike arrives. Alternatively, you can aim for the middle, with a generous but not excessive level of spare capacity. In addition to optimizing for availability, cost, or a blend of both, you can also set a custom scaling threshold. In each case, AWS Auto Scaling will create scaling policies on your behalf, including appropriate upper and lower bounds for each resource.
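To make the scaling-plan idea concrete, here is a minimal sketch of the kind of request AWS Auto Scaling builds on your behalf. The field names follow the `autoscaling-plans` CreateScalingPlan API as documented at launch; the stack ARN, Auto Scaling group name, and target value are placeholder assumptions, not values from the article.

```python
# Sketch of a CreateScalingPlan-style payload for the AWS Auto Scaling
# (autoscaling-plans) API. All resource names below are hypothetical.

def build_scaling_plan(stack_arn, target_cpu_percent, min_capacity, max_capacity):
    """Return a scaling-plan payload that target-tracks average CPU
    utilization for an EC2 Auto Scaling group in the given stack."""
    return {
        "ScalingPlanName": "my-app-plan",              # hypothetical name
        "ApplicationSource": {"CloudFormationStackARN": stack_arn},
        "ScalingInstructions": [{
            "ServiceNamespace": "autoscaling",
            "ResourceId": "autoScalingGroup/my-asg",   # hypothetical group
            "ScalableDimension": "autoscaling:autoScalingGroup:DesiredCapacity",
            "MinCapacity": min_capacity,
            "MaxCapacity": max_capacity,
            "TargetTrackingConfigurations": [{
                "PredefinedScalingMetricSpecification": {
                    "PredefinedScalingMetricType": "ASGAverageCPUUtilization"
                },
                "TargetValue": target_cpu_percent,
            }],
        }],
    }

plan = build_scaling_plan("arn:aws:cloudformation:...", 50.0, 2, 10)
```

The trade-off described above maps onto `TargetValue`: an availability-leaning plan tracks a lower utilization target (more headroom), while a cost-leaning plan tracks a higher one (running closer to the line).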


Amazon WorkSpaces Now Supports Configurable Storage and Switching Between Hardware Bundles

Today, Amazon WorkSpaces is making two new features available. First, you can now configure how much storage your WorkSpaces get when you launch them, and increase the storage for a running WorkSpace at any time. Second, you can now change the hardware bundle that your WorkSpace is running on with a simple reboot. All of your applications, data, and storage stay the same, but now you can move to a more powerful bundle to support resource-intensive applications, or move to a less powerful bundle to save costs. With these features, Amazon WorkSpaces now provides additional flexibility to support the diverse needs of end users, while still helping you optimize your costs.

With configurable storage, you can select the starting sizes of your root and user volumes when you launch a new WorkSpace, and then increase them as needed, up to a limit of 1,000 GB. All data is preserved, and you can continue to use your WorkSpace while its volumes are being resized. To ensure that your data is preserved, neither volume can be reduced in size after a WorkSpace is launched.
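The increase-only rule above can be sketched as a small validation helper. The property names mirror the WorkSpaces ModifyWorkspaceProperties parameters (`RootVolumeSizeGib` / `UserVolumeSizeGib`); the workspace ID is a placeholder, and the 1,000 GB cap and no-shrink rule come from the text.

```python
# Sketch of an increase-only WorkSpaces volume resize. Builds the
# parameters you would pass to ModifyWorkspaceProperties; does not call AWS.

MAX_VOLUME_GIB = 1000  # stated upper limit for a volume

def resize_request(workspace_id, current_gib, new_gib, volume="UserVolumeSizeGib"):
    """Return a resize payload, rejecting shrinks and over-limit sizes."""
    if new_gib < current_gib:
        raise ValueError("volume sizes can only be increased, never reduced")
    if new_gib > MAX_VOLUME_GIB:
        raise ValueError(f"volume size is capped at {MAX_VOLUME_GIB} GB")
    return {"WorkspaceId": workspace_id, "WorkspaceProperties": {volume: new_gib}}

req = resize_request("ws-placeholder", current_gib=100, new_gib=250)
```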

With hardware bundle switching, you can switch between the Value, Standard, Performance, and Power hardware bundles as needed. You don't need to delete your WorkSpace and create a new one, and your storage configuration is preserved, including any changes made with the configurable storage feature.
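A bundle switch is the same kind of properties update, using the compute-type field rather than a volume size. This is a sketch: the property name follows the WorkSpaces API's `ComputeTypeName`, the uppercase bundle identifiers are an assumption based on that API's style, and the workspace ID is a placeholder.

```python
# Sketch of a WorkSpaces hardware bundle switch via a
# ModifyWorkspaceProperties-style payload; does not call AWS.

BUNDLES = ["VALUE", "STANDARD", "PERFORMANCE", "POWER"]  # assumed API names

def switch_bundle(workspace_id, target_bundle):
    """Return a payload that changes only the compute type.

    Storage is untouched: the service applies the new bundle with a
    reboot while applications, data, and volumes stay the same."""
    if target_bundle not in BUNDLES:
        raise ValueError(f"unknown bundle {target_bundle!r}; expected one of {BUNDLES}")
    return {"WorkspaceId": workspace_id,
            "WorkspaceProperties": {"ComputeTypeName": target_bundle}}

req = switch_bundle("ws-placeholder", "PERFORMANCE")
```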


Amazon ECS Adds ELB Health Check Grace Period

The Amazon Elastic Container Service (Amazon ECS) service scheduler now allows you to define a grace period to prevent premature shutdown of newly instantiated tasks.

Previously, if Amazon ECS tasks took a long time to start, Elastic Load Balancing (ELB) health checks could mark the task as unhealthy and the service scheduler would shut the task down prematurely.

Now, you can specify a health check grace period as an Amazon ECS service definition parameter. This instructs the service scheduler to ignore ELB health checks for a pre-defined time period after a task has been instantiated.
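As a sketch of where the parameter sits, here is a service definition fragment in the shape of the ECS CreateService request, with the grace period set. The parameter name `healthCheckGracePeriodSeconds` follows the ECS API; the cluster, service, task definition, and target group values are placeholders.

```python
# Sketch of an ECS service definition with the new health check grace
# period. All names and ARNs below are hypothetical placeholders.

def service_definition(grace_seconds):
    return {
        "cluster": "my-cluster",                 # hypothetical
        "serviceName": "slow-starting-service",  # hypothetical
        "taskDefinition": "my-task:1",           # hypothetical
        "desiredCount": 2,
        "loadBalancers": [{
            "targetGroupArn": "arn:aws:elasticloadbalancing:...",  # placeholder
            "containerName": "app",
            "containerPort": 8080,
        }],
        # ELB health check results are ignored for this many seconds after
        # a task is instantiated, preventing premature shutdown.
        "healthCheckGracePeriodSeconds": grace_seconds,
    }

svc = service_definition(120)
```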


Throttling Logic for the Amazon ECS Service Scheduler

The Amazon Elastic Container Service (Amazon ECS) service scheduler now includes logic that throttles how often tasks are restarted if they repeatedly fail to launch.

Previously, if tasks had issues with the container image, task definition, or network configuration, they could fail to become healthy when started by the Amazon ECS service scheduler. Continuous restart attempts by the service scheduler could degrade overall application performance and incur unnecessary cost.

Now, the Amazon ECS service scheduler includes logic that detects tasks that repeatedly fail to become healthy after instantiation, progressively increases the interval between restart attempts (up to a maximum of 15 minutes), and adds an event to the Amazon ECS service event messages so you can take corrective action. This logic works for tasks using both the AWS Fargate and EC2 launch types.
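The announcement doesn't publish the exact backoff schedule, only the 15-minute ceiling, so the following is an illustrative sketch of one plausible schedule: exponential growth in the restart delay, capped at the stated maximum.

```python
# Illustrative backoff sketch only. The 15-minute cap comes from the
# announcement; the base delay and doubling schedule are assumptions.

MAX_DELAY_S = 15 * 60  # 15-minute ceiling stated by the announcement

def restart_delay(consecutive_failures, base_s=10):
    """Delay before the next restart attempt: doubles with each
    consecutive failure and is capped at 15 minutes."""
    return min(base_s * 2 ** consecutive_failures, MAX_DELAY_S)

delays = [restart_delay(n) for n in range(8)]
# Delays grow geometrically until they hit the cap.
```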
