Click an article title to share it with others on social media or leave us a comment. If you have any suggestions for articles you would like us to cover in the future, please get in touch and drop us a line.

17 Nov 2017

You can now receive Amazon CloudWatch Events when files written to AWS Storage Gateway’s file interface (File Gateway) are uploaded to Amazon S3. You can use these notifications to trigger in-cloud or on-premises automated workflows. For instance, a file upload notification could initiate an API call to start a data processing job using AWS analytics services, such as Amazon Athena, or invoke an AWS Lambda function for transcoding. Alternatively, you can use a file upload notification to make a RefreshCache API call to File Gateways in remote offices, so that on-premises applications can then read the uploaded data, such as reports, files for print presses, code updates, or database backup files.
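
As a rough sketch of the RefreshCache pattern described above, the following Lambda handler (using boto3) refreshes the cache of a hypothetical File Gateway file share in a remote office whenever an upload notification arrives. The file share ARN and the shape of the incoming event are placeholders, not values from the announcement.

```python
import boto3

# Hypothetical ARN of a file share on a File Gateway in a remote office.
REMOTE_FILE_SHARE_ARN = "arn:aws:storagegateway:us-east-1:123456789012:share/share-EXAMPLE"

storagegateway = boto3.client("storagegateway")


def handler(event, context):
    # 'event' is the CloudWatch Events notification for the completed upload;
    # its exact structure is not spelled out here, so it is only logged.
    print("Upload notification received:", event)

    # Ask the remote File Gateway to refresh its cached listing so
    # on-premises applications can see the newly uploaded objects.
    storagegateway.refresh_cache(FileShareARN=REMOTE_FILE_SHARE_ARN)
```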

17 Nov 2017

Starting today, you can easily restore a new Amazon RDS for MySQL database instance from a backup of your existing MySQL database, including MySQL databases running on Amazon EC2 or outside of AWS. This is done by creating a backup using the Percona XtraBackup tool and uploading the resulting files to an Amazon S3 bucket. You then create a new Amazon RDS DB instance from the backup files in Amazon S3, directly through the RDS Console or the AWS Command Line Interface.
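
Once the XtraBackup files are in S3, the restore can also be driven programmatically. The sketch below uses boto3’s restore_db_instance_from_s3 with placeholder identifiers, sizes, version, and an assumed IAM ingestion role; treat the specific parameter values as illustrative only.

```python
import boto3

rds = boto3.client("rds")

# All identifiers, sizes, versions, and ARNs below are placeholders.
rds.restore_db_instance_from_s3(
    DBInstanceIdentifier="restored-mysql-db",
    AllocatedStorage=100,                     # GiB
    DBInstanceClass="db.m4.large",
    Engine="mysql",
    MasterUsername="admin",
    MasterUserPassword="choose-a-strong-password",
    SourceEngine="mysql",
    SourceEngineVersion="5.6.27",             # version of the source database
    S3BucketName="my-xtrabackup-bucket",
    S3Prefix="backups/mydb",
    S3IngestionRoleArn="arn:aws:iam::123456789012:role/rds-s3-ingestion-role",
)
```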

17 Nov 2017

Spot Fleet now supports a new type of scaling policy, target tracking scaling policies, which you can use to set up dynamic scaling for your application in just a few simple steps. Adding Auto Scaling to your Spot Fleet is one way to maximize the benefits of AWS. Auto Scaling helps you build systems that respond to changes in demand by automatically launching or terminating Amazon EC2 instances based on conditions that you define. This dynamic scaling helps to improve application availability and reduce costs. For example, you can use Auto Scaling to automatically launch EC2 instances for your Spot Fleet when demand increases to help maintain performance, and terminate instances when demand drops to save money.
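
For illustration, a target tracking policy for a Spot Fleet is configured through the Application Auto Scaling API. The sketch below registers a hypothetical Spot Fleet request as a scalable target and attaches a policy that tracks average CPU utilization at 50%; the request ID, capacity bounds, and target value are placeholders.

```python
import boto3

autoscaling = boto3.client("application-autoscaling")

# Placeholder Spot Fleet request ID.
RESOURCE_ID = "spot-fleet-request/sfr-11111111-2222-3333-4444-555555555555"

# Make the fleet's target capacity scalable between 2 and 20 instances.
autoscaling.register_scalable_target(
    ServiceNamespace="ec2",
    ResourceId=RESOURCE_ID,
    ScalableDimension="ec2:spot-fleet-request:TargetCapacity",
    MinCapacity=2,
    MaxCapacity=20,
)

# Track average CPU utilization across the fleet at 50%.
autoscaling.put_scaling_policy(
    PolicyName="cpu-target-tracking",
    ServiceNamespace="ec2",
    ResourceId=RESOURCE_ID,
    ScalableDimension="ec2:spot-fleet-request:TargetCapacity",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 50.0,
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "EC2SpotFleetRequestAverageCPUUtilization"
        },
    },
)
```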

17 Nov 2017

Today, AWS Identity and Access Management (IAM) made it easier for you to create and modify your IAM policies by using a point-and-click visual editor. You can now use the new visual editor to create and modify your AWS IAM policies in the IAM console. The visual editor guides you through granting permissions using IAM policies without requiring you to author policies in JSON (although you can still author and edit policies in JSON, if you prefer). This update to the IAM console makes it easier to grant least privilege by listing all the supported resource types and request conditions for the AWS service actions you select. Policy summaries identify unrecognized services, actions, and permission errors when you import existing policies, and you can now use the visual editor to correct them. To start using the point-and-click visual editor, navigate to the IAM console.
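
If you do prefer JSON, the same kind of least-privilege grant the visual editor produces can still be created programmatically. The sketch below defines a minimal policy allowing read-only access to a single, hypothetical S3 bucket and creates it with boto3; the bucket and policy names are made up for illustration.

```python
import json

import boto3

iam = boto3.client("iam")

# A minimal least-privilege policy: read-only access to one bucket.
policy_document = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                "arn:aws:s3:::example-reports-bucket",
                "arn:aws:s3:::example-reports-bucket/*",
            ],
        }
    ],
}

iam.create_policy(
    PolicyName="ExampleReportsReadOnly",
    PolicyDocument=json.dumps(policy_document),
)
```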

16 Nov 2017

Starting today, you can use Aurora Auto Scaling to automatically add or remove Aurora Replicas in response to changes in performance metrics that you specify. Aurora Replicas share the same underlying volume as the primary instance and are well suited for read scaling. With Aurora Auto Scaling, you can specify a target value for predefined metrics of your Aurora Replicas, such as average CPU utilization or average active connections. You can also create a custom metric for Aurora Replicas and use it with Aurora Auto Scaling. Aurora Auto Scaling adjusts the number of Aurora Replicas to keep the selected metric as close as possible to the value you specify. For example, an increase in traffic could push the average CPU utilization of your Aurora Replicas above your specified value; Aurora Auto Scaling then automatically adds new Aurora Replicas to support the increased traffic. Similarly, when CPU utilization drops below your set value, Aurora Replicas are terminated so that you don’t pay for unused DB instances.
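
Aurora Auto Scaling is also driven by the Application Auto Scaling API. As a sketch, the following registers the replica count of a hypothetical Aurora cluster as a scalable target and attaches a target tracking policy on average reader CPU utilization; the cluster name, capacity bounds, and 60% target are placeholders.

```python
import boto3

autoscaling = boto3.client("application-autoscaling")

# Placeholder Aurora cluster identifier.
RESOURCE_ID = "cluster:my-aurora-cluster"

# Allow between 1 and 15 Aurora Replicas.
autoscaling.register_scalable_target(
    ServiceNamespace="rds",
    ResourceId=RESOURCE_ID,
    ScalableDimension="rds:cluster:ReadReplicaCount",
    MinCapacity=1,
    MaxCapacity=15,
)

# Add or remove replicas to keep average reader CPU utilization near 60%.
autoscaling.put_scaling_policy(
    PolicyName="reader-cpu-target",
    ServiceNamespace="rds",
    ResourceId=RESOURCE_ID,
    ScalableDimension="rds:cluster:ReadReplicaCount",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 60.0,
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "RDSReaderAverageCPUUtilization"
        },
    },
)
```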

16 Nov 2017

Customers can now query Amazon S3 Inventory using standard SQL with Amazon Athena, Amazon Redshift Spectrum, and other tools such as Presto, Hive, and Spark. You can easily get started by pointing Amazon Athena to the S3 Inventory report in ORC or CSV format with a few clicks, run ad hoc queries, and get results in seconds. This is available in all AWS Regions where Athena is available. Learn more by visiting our developer guide.
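
As a sketch of what such a query might look like, the snippet below submits a SQL statement to Athena with boto3. The table name, columns, and result bucket are assumptions: they correspond to an external table you would first define over your S3 Inventory report, not to anything created automatically.

```python
import boto3

athena = boto3.client("athena")

# Assumes an external table named s3_inventory has already been defined
# over the ORC or CSV inventory report; the columns are illustrative.
query = """
SELECT storage_class, COUNT(*) AS object_count, SUM(size) AS total_bytes
FROM s3_inventory
GROUP BY storage_class
"""

athena.start_query_execution(
    QueryString=query,
    QueryExecutionContext={"Database": "default"},
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
)
```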

16 Nov 2017

You can now monitor and report on agent activity in your Amazon Connect contact center in real time, with the data provided by Amazon Connect agent event streams. The data can be used to create dashboards in Amazon Connect that display agent information and activities, integrate the event streams into workforce management (WFM) solutions, and configure alerting tools to notify you about specific agent activity.
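
Agent event streams are delivered through an Amazon Kinesis stream, so one way to consume them is a Lambda function subscribed to that stream. The sketch below decodes each record and prints agent state changes; the field names shown (EventType, AgentARN, EventTimestamp) are illustrative of the event schema rather than an exhaustive description of it.

```python
import base64
import json


def handler(event, context):
    # Each Kinesis record carries one JSON-encoded agent event.
    for record in event["Records"]:
        payload = json.loads(base64.b64decode(record["kinesis"]["data"]))

        # Field names here are illustrative; adjust to the actual schema.
        if payload.get("EventType") == "STATE_CHANGE":
            print(payload.get("AgentARN"), payload.get("EventTimestamp"))
```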

16 Nov 2017

Amazon Connect contact flow import/export (beta) enables you to import contact flows into, and export contact flows from, your Amazon Connect instance. Contact flows define the path a customer takes to resolve their issue. Now you can easily move contact flows from a test environment to a production environment, copy them from one region to another as you expand your customer service organization, or share them with others. Exported contact flows can also serve as backup copies and as a simple form of version control.

16 Nov 2017

Open Neural Network Exchange (ONNX) is an open-source format for encoding deep learning models. The ONNX-MXNet open source Python package is now available, allowing developers to build and train models with other frameworks such as PyTorch, CNTK, or Caffe2, and import those models into Apache MXNet to run them for inference using MXNet’s highly optimized engine.
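
A minimal import-and-predict sketch with the onnx-mxnet package might look like the following; the model file, input name, and input shape are placeholders that depend on how the original model was exported.

```python
import mxnet as mx
import numpy as np
import onnx_mxnet

# Import an ONNX model (exported from PyTorch, CNTK, Caffe2, etc.) into an
# MXNet symbol plus parameters. The file name is a placeholder.
sym, params = onnx_mxnet.import_model("model.onnx")

# Bind a module for inference; input name and shape depend on the model.
mod = mx.mod.Module(symbol=sym, data_names=["input_0"], label_names=None)
mod.bind(for_training=False, data_shapes=[("input_0", (1, 3, 224, 224))])
mod.set_params(arg_params=params, aux_params=params,
               allow_missing=True, allow_extra=True)

# Run a single forward pass on dummy data and read the output.
batch = mx.io.DataBatch([mx.nd.array(np.zeros((1, 3, 224, 224)))])
mod.forward(batch)
output = mod.get_outputs()[0].asnumpy()
```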

16 Nov 2017

Beginning today, you can use the Amazon Route 53 API to view your current limits on Route 53 resources such as hosted zones and health checks. The same APIs also return how many of each resource you’re currently using. This lets you see how close you are to reaching a service limit at any time.  
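
As a sketch, the account-level limits and current usage can be read with boto3’s get_account_limit; the two limit types shown are the hosted zone and health check limits mentioned above.

```python
import boto3

route53 = boto3.client("route53")

for limit_type in ("MAX_HOSTED_ZONES_BY_OWNER", "MAX_HEALTH_CHECKS_BY_OWNER"):
    resp = route53.get_account_limit(Type=limit_type)
    # 'Count' is current usage; 'Limit.Value' is the account's limit.
    print(limit_type, resp["Count"], "of", resp["Limit"]["Value"])
```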

16 Nov 2017

In September 2017, Amazon Web Services announced the new Amazon EC2 X1e instance family with the launch of the x1e.32xlarge instance. This instance size offers 3,904 GiB of DRAM and is available in four AWS Regions, enabling customers to run larger in-memory databases such as SAP HANA. Today, five additional sizes (x1e.xlarge, x1e.2xlarge, x1e.4xlarge, x1e.8xlarge, x1e.16xlarge) of the X1e Memory Optimized instance family are being made available. Offering the highest memory per vCPU and one of the lowest prices per GiB of memory among Amazon EC2 instance types, the new X1e instance sizes are ideally suited for high-performance databases, in-memory databases, and other memory-intensive enterprise applications.

16 Nov 2017

Amazon EC2 is announcing an increase to the monthly service commitment in the EC2 Service Level Agreement (“SLA”), for both EC2 and EBS, to 99.99%. This increased commitment is the result of continuous investment in our infrastructure and quality of service. This change is effective immediately in all regions, and is available to all EC2 customers.

16 Nov 2017

You can now test and debug your deployments locally through the updated AWS CodeDeploy agent. The updated agent is a software package that, when installed on an instance, enables the instance to be used in CodeDeploy deployments and provides a command line interface for troubleshooting.
