Showing posts with label AWS.

Friday 14 September 2012

AWS Expands in Japan

Amazon Web Services (AWS) is expanding in Japan with the addition of a third Availability Zone.
The move means that AWS will most likely be adding more data centers to keep up with the steady demand the service has seen since it first launched in Tokyo 18 months ago.

For readers who are not familiar with AWS Regions and Availability Zones:

Amazon Web Services serves hundreds of thousands of customers in more than 190 countries.
AWS currently spans eight regions around the globe.
Each region has multiple availability zones.
Each availability zone can encompass multiple data centers.

See a detailed list of offerings at all AWS locations

Excerpted below is a nice blog post by Jeff:

We announced an AWS Region in Tokyo about 18 months ago. In the time since the launch, our customers have launched all sorts of interesting applications and businesses there. Here are a few examples:
    • Cookpad.com is the top recipe site in Japan. They are hosted entirely on AWS, and handle more than 15 million users per month.
    • KAO is one of Japan's largest manufacturers of cosmetics and toiletries. They recently migrated their corporate site to the AWS cloud.
    • Fukuoka City launched the Kawaii Ward project to promote tourism to the virtual city. After a member of the popular Japanese idol group AKB48 raised awareness of this site, virtual residents flocked to the site to sign up for an email newsletter. They expected 10,000 registrations in the first week and were pleasantly surprised to receive over 20,000.
Demand for AWS resources in Japan has been strong and steady, and we've been expanding the region accordingly. You might find it interesting to know that an AWS region can be expanded in two different ways. First, we can add additional capacity to an existing Availability Zone, spanning multiple datacenters if necessary. Second, we can create an entirely new Availability Zone.
Over time, as we combine both of these approaches, a single AWS region can grow to encompass many datacenters. For example, the US East (Northern Virginia) region currently occupies more than ten datacenters structured as multiple Availability Zones.
 
AWS Tokyo Region and Availability Zones
 
Today, we are expanding the Tokyo region with the addition of a third Availability Zone. 
This will add capacity and will also provide you with additional flexibility. As is always the case with AWS, untargeted launches of EC2 instances will now make use of this zone with no changes to existing applications or configurations. If you are currently targeting specific Availability Zones, please make sure that your code can handle this new option.
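If your deployment does hard-code zone names, one way to stay ready for additions like this is to discover the zones at runtime. Here is a minimal sketch using the boto Python library (the region name is illustrative, and credentials are assumed to be configured in the environment):

import boto.ec2

# Connect to the Tokyo region.
conn = boto.ec2.connect_to_region('ap-northeast-1')

# Discover the Availability Zones that are currently usable, rather than
# hard-coding a fixed list such as ['ap-northeast-1a', 'ap-northeast-1b'].
zones = [z.name for z in conn.get_all_zones() if z.state == 'available']
print("Usable zones: %s" % ', '.join(zones))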

Wednesday 12 September 2012

AWS Week in Review - September 3rd to September 9th, 2012

Let's take a quick look at what happened in AWS-land last week:

Tuesday, September 4
Wednesday, September 5
Friday, September 7

SOURCE
 

Monday 10 September 2012

Getting Started with Amazon Glacier



Amazon Glacier is an extremely low-cost storage service that provides secure and durable storage for data archiving and backup. In order to keep costs low, Amazon Glacier is optimized for data that is infrequently accessed and for which retrieval times of several hours are suitable. With Amazon Glacier, customers can reliably store large or small amounts of data for as little as $0.01 per gigabyte per month, a significant savings compared to on-premises solutions.

You organize your archives in vaults. Retrieving archives from Amazon Glacier requires the initiation of a job, and jobs typically complete in 3 to 5 hours.
The quick start video for Amazon Glacier walks you through using the AWS Management Console to create vaults in Amazon Glacier.

To upload data to Amazon Glacier, you use the SDKs/APIs provided by AWS.
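As a rough illustration, here is a minimal sketch using the boto Python SDK (the vault name and file path are placeholders, and credentials are assumed to be configured in the environment):

import boto.glacier

# Connect to Glacier through boto's high-level (Layer2) interface.
glacier = boto.glacier.connect_to_region('us-east-1')

# Create (or open) a vault and upload a local file as an archive.
vault = glacier.create_vault('my-backups')
archive_id = vault.upload_archive('backup.tar.gz')  # returns the archive ID

# Retrieval is asynchronous: initiate a job, then come back for the output
# once the job completes (typically 3 to 5 hours later).
job = vault.retrieve_archive(archive_id)
print("Initiated retrieval job: %s" % job.id)

Keep the returned archive ID somewhere durable; you need it to retrieve or delete the archive later.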

If your data is too large to upload over the Internet, you can make use of another AWS service, AWS Import/Export. It accelerates moving large amounts of data into and out of AWS using portable storage devices (see supported devices) for transport. You ship your device, along with its interface connectors and power supply, to AWS. When your package arrives, it will be processed and securely transferred to an AWS data center, where your device will be attached to an AWS Import/Export station. After the data load completes, the device will be returned to you.

For more information, please visit the Amazon Glacier Product Page and the Amazon Glacier Developer Guide.

Building Highly Available, Scalable Web Properties with AWS

From the AWS Webinar Series: Building Highly Available, Scalable Web Properties with AWS 

A very nicely compiled webinar for understanding various AWS services and design principles.

This webinar recording focuses on the basic properties for building highly available, scalable web applications on the AWS cloud.

These properties are:

  • Elasticity
  • Design for Failure
  • Loose Coupling (see the sketch after this list)
  • Security
  • Performance
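To make one of these concrete: loose coupling is often achieved by placing a queue between components, so that producers and consumers can fail, scale, and deploy independently. Here is a minimal sketch of that idea using Amazon SQS via boto (the queue name and message body are illustrative, and credentials are assumed to be configured):

import boto.sqs
from boto.sqs.message import Message

# Connect to SQS.
conn = boto.sqs.connect_to_region('us-east-1')

# The queue is the only contract between producer and consumer.
queue = conn.create_queue('image-resize-jobs')

# Producer side: enqueue a unit of work.
msg = Message()
msg.set_body('resize image-42.jpg')
queue.write(msg)

# Consumer side: read a message, process it, then delete it.
received = queue.read()
if received is not None:
    print("Processing: %s" % received.get_body())
    queue.delete_message(received)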
 


Wednesday 5 September 2012

AWS Management Console Improvements to EC2 Tab

AWS recently made some improvements to the EC2 tab of the AWS Management Console. It is now easier to access the AWS Marketplace and to configure attached storage (EBS volumes and ephemeral storage) for EC2 instances.

Read on a good post by Jeff.

Marketplace Access

This one is really simple, but definitely worth covering. You can now access the AWS Marketplace from the Launch Instances Wizard:


AWS Marketplace

After you enter your search terms and click the Go button, the Marketplace results page will open in a new tab. Here's what happens when I search for wordpress:

AWS Week in Review - August 4th to August 12th, 2012

Let's take a quick look at what happened in AWS-land last week:
Monday, August 6
Wednesday, August 8
Thursday, August 9

SOURCE

AWS Week in Review - July 30th to August 3rd, 2012

Let's take a quick look at what happened in AWS-land last week:
Monday, July 30
Tuesday, July 31
Wednesday, August 1
Friday, August 3

SOURCE

AWS Week in Review - August 13th to August 19th, 2012

Let's take a quick look at what happened in AWS-land last week:
Monday, August 13
Thursday, August 16
Sunday, August 19

SOURCE

AWS Week in Review - August 20th to August 26th, 2012

Let's take a quick look at what happened in AWS-land last week:
Monday, August 20
Tuesday, August 21
Wednesday, August 22
Thursday, August 23
Friday, August 24

SOURCE

AWS Week in Review - August 27th to September 2nd, 2012

Let's take a quick look at what happened in AWS-land last week:

Monday, August 27
Friday, August 31

SOURCE


Monday 3 September 2012

Amazon S3 - Cross Origin Resource Sharing Support

GREAT NEWS!!!


AWS has announced support for Cross-Origin Resource Sharing (CORS) in Amazon S3.
You can now easily build web applications that use JavaScript and HTML5 to interact with resources in Amazon S3, enabling you to implement HTML5 drag and drop uploads to Amazon S3, show upload progress, or update content. Until now, you needed to run a custom proxy server between your web application and Amazon S3 to support these capabilities. A custom proxy server was required because web browsers limit the way web pages loaded from one site (e.g., mywebsite.com) can interact with content from another location (e.g., a location in Amazon S3 like assets.mywebsite.com.s3.amazonaws.com). Amazon S3’s support for CORS replaces the need for this custom proxy server by instructing the web browser to selectively enable these cross-site interactions.
Configuring your bucket for CORS is easy. To get started, open the Amazon S3 Management Console, and follow these simple steps:

1) Right-click on your Amazon S3 bucket and open the "Properties" pane.
2) Under the "Permissions" tab, click the "Add CORS configuration" button to add a new CORS configuration. You can then specify the websites (e.g., "mywebsite.com") that should have access to your bucket, and the specific HTTP request methods (e.g., "GET") you wish to allow.
3) Click Save.
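If you would rather configure this programmatically than through the console, here is a minimal sketch using boto (the bucket name and origin are placeholders; boto gained CORS support around the time of this announcement):

import boto
from boto.s3.cors import CORSConfiguration

conn = boto.connect_s3()  # credentials are assumed to be configured
bucket = conn.get_bucket('assets.mywebsite.com')

# Allow pages served from mywebsite.com to issue GET requests
# against objects in this bucket.
cors = CORSConfiguration()
cors.add_rule(allowed_method='GET',
              allowed_origin='http://mywebsite.com',
              allowed_header='*',
              max_age_seconds=3000)
bucket.set_cors(cors)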
For more information on using CORS with Amazon S3, review the Amazon S3 Developer Guide.



Tuesday 28 August 2012

AWS Cost Allocation For Customer Bills

A good new feature from AWS to help customers keep control over costs, and a well-put blog post by Jeff...



Growth Challenges


You probably know how it goes when you put AWS to work for your company. You start small -- one Amazon S3 bucket for some backups, or one Amazon EC2 instance hosting a single web site or web application. Things work out well and before you know it, word of your success spreads to your team, and they start using it too. At some point the entire company jumps on board, and you become yet another AWS success story.

As your usage of AWS grows, you stop charging it to your personal credit card and create an account for your company. You use IAM to control access to the AWS resources created and referenced by each of the applications.

There's just one catch -- with all of those departments, developers, and applications making use of AWS from a single account, allocating costs to projects and to budgets is difficult because we didn't give you the necessary information. Some of our customers have told us that this cost allocation process can consume several hours of their time each month.

Cost Allocation Via Tagging


Extending the existing EC2 tagging system (keys and values), we are launching a new cost allocation system to make it easy for you to tag your AWS resources and to access billing data that is broken down by tag (or tags).

With this release you can tag the following types of AWS resources for cost allocation purposes:
  • S3 buckets
  • EC2 Instances
  • EBS volumes
  • Reserved Instances
  • Spot Instance requests
  • VPN connections
  • Amazon RDS DB Instances
  • AWS CloudFormation Stacks
Here's all that you need to do:
  1. Decide on Your Tagging Model - Typically, the key name identifies some axis that you care about and the key values identify the points along the axis. You could have a tag named Department, with values like Sales, Marketing, Development, QA, Engineering, and so forth. You could choose to align this with your existing accounting system. You can use multiple tags for cost allocation purposes, each of which represents an additional dimension of usage. If each department runs several AWS-powered applications (or stores lots of data in S3), you could add an Application tag, with the values representing all of the applications that are running on behalf of the department. You can use the tags to create your own custom hierarchy.
  2. Tag Your Resources - Apply the agreed-upon tags to your existing resources, and arrange to apply them to newly created resources as they appear. You can add up to ten tags per resource. You can do this from the AWS Management Console, the service APIs, the command line, or through Auto Scaling (see the sketch after this list):

    AWS Cost Allocation For Customer Bills
    You can use CloudFormation to provision a set of related AWS resources and easily tag them.
  3. Tell AWS Which Tags Matter - Now you need to log in to the AWS Portal, sign up for billing reports, and tell the AWS billing system which tag keys are meaningful for cost allocation purposes by using the Manage Cost Allocation Report option:

    AWS Cost Allocation For Customer Bills - Manage Report
    AWS Cost Allocation For Customer Bills - Select Tags
    You can choose to include certain tags and to exclude others.
  4. Access Billing Data - The estimated billing data is generated multiple times per day and the month-end charges are generated within three days of the end of the month. You can access this data by enabling programmatic access and arranging for it to be delivered to your S3 bucket.
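As a rough illustration of step 2, here is a minimal sketch that applies Department and Application tags to an EC2 instance and an EBS volume using boto (the resource IDs and tag values are placeholders, and credentials are assumed to be configured):

import boto.ec2

conn = boto.ec2.connect_to_region('us-east-1')

# create_tags takes a list of resource IDs and a dict of key/value pairs,
# so one call can tag several resources with the agreed-upon keys.
conn.create_tags(['i-12345678', 'vol-87654321'],
                 {'Department': 'Marketing', 'Application': 'Newsletter'})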

Data Processing


The Cost Allocation Report will contain one additional column for each of the tag keys that you selected in step 3. The corresponding tag value (if any) will be included in the appropriate column of the data:

AWS Cost Allocation For Customer Bills
In the Cost Allocation Report above, the relevant keys were Owner, Stack, Cost Center, Application, and Project. The column will be blank if the AWS resource doesn't happen to have a value for the key. Data transfer and request charges are also included for tagged resources. In effect, these charges inherit the tags from the associated resource.

Once you have this data, you can feed it in to your own accounting system or you can slice and dice it any way you'd like for reporting or visualization purposes. For example, you could create a pivot table and aggregate the data along one or more dimensions:

AWS Cost Allocation For Customer Bills
 

 
 

Monday 27 August 2012

Getting Started with IAM Roles for EC2 Instances



AWS Identity and Access Management (IAM) helps you securely control access to Amazon Web Services and your account resources. IAM can also keep your account credentials private. With IAM, you can create multiple IAM users under the umbrella of your AWS account or enable temporary access through identity federation with your corporate directory. In some cases, you can also enable access to resources across AWS accounts.

Without IAM, however, you must either create multiple AWS accounts—each with its own billing and subscriptions to AWS products—or your employees must share the security credentials of a single AWS account. In addition, without IAM, you cannot control the tasks a particular user or system can do and what AWS resources they might use.


 
 
AWS has recently launched IAM Roles for EC2 Instances. A role is an entity that has a set of permissions that can be assumed by another entity. Use roles to enable applications running on your Amazon EC2 instances to securely access your AWS resources. You grant a specific set of permissions to a role, use the role to launch an EC2 instance, and let EC2 automatically handle AWS credential management for the applications that run on the instance. Use AWS Identity and Access Management (IAM) to create a role and to grant permissions to it.
 
 
IAM roles for Amazon EC2 provide:
  
  • AWS access keys for applications running on Amazon EC2 instances
  • Automatic rotation of the AWS access keys on the Amazon EC2 instance
  • Granular permissions for applications running on Amazon EC2 instances that make requests to your AWS services
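To see what this looks like in practice, here is a minimal sketch using boto (the AMI ID and role name are placeholders): the instance is launched with an instance profile, and code running on that instance can then call AWS services without any keys baked into the AMI or the application.

import boto.ec2

conn = boto.ec2.connect_to_region('us-east-1')

# Launch an instance with an IAM role attached via its instance profile.
# 'WebServerRole' must already exist and carry the desired permissions.
reservation = conn.run_instances('ami-12345678',
                                 instance_type='m1.small',
                                 instance_profile_name='WebServerRole')

# On the instance itself, no credentials are needed: boto fetches the
# role's temporary, auto-rotated keys from the instance metadata service.
#
#     import boto
#     s3 = boto.connect_s3()  # picks up the role's credentials automatically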
  
The video below demonstrates the basic workflow of creating a new role:


Create new role AWS IAM Workflow


 

 
 
For more help, refer to the AWS documentation for IAM here.
 
For other AWS documentation, please refer to the quick links provided in the right-side panel of this blog.
 
 

Infographic: Demystifying AWS - Revealing Behind-the-Scenes Usage

Amazon Web Services (AWS) is the biggest public cloud around, yet what goes on behind the scenes remains a mystery.

Read on for a good infographic from the Newvem blog!


"For heavy users, such as enterprise level CIOs, AWS’s “Reserved Instances” are a cost effective model to scale their cloud activity and benefit from the full service offering that Amazon provides.


The infographic is based on analysis made by our Reserved Instance Decision Making Tool. This advanced analytics tool can help enterprise CIOs to capture the added value and benefit by:
  • Ensuring that reserved instances meet cost and performance expectations.
  • Identifying consistent On-Demand usage that can be shifted to Reserved Instances.
  • Tracking Reserved Instance expiration dates and recommending actions for renewal and scaling up and down."



SOURCE


Friday 24 August 2012

AWS New Whitepaper: Mapping and GeoSpatial Analysis in the Cloud Using ArcGIS

Great new whitepaper by Jinesh Varia...


Esri is one of the leaders in the Geographic Information Systems (GIS) industry and one of the largest privately held software companies focused on mapping and geospatial applications in the world with offices in more than 100 countries. Both public and private sector organizations use Esri technology to analyze and manage their geographic information and make better decisions – uses range from planning cities and improving the quality of life for residents, to site selection, customer analytics, and streamlining logistics.

Esri and AWS have been working together since 2008 to bring the power of GIS to the masses. The AWS Partner Team recently attended the 2012 Esri International User Conference, with over 14,000 attendees, 300 exhibitors, and a large number of ecosystem partners. A cloud computing theme dominated the conference.
Esri and AWS have co-authored a whitepaper, "Mapping and GeoSpatial Analysis Using ArcGIS", to provide users who have interest in performing spatial analysis using their data with complimentary datasets.

The paper discusses how users can publish and analyze imagery data (such as satellite imagery, or aerial imagery) and create and publish tile cache map services from spatially referenced data (such as data with x/y points, lines, polygons) in AWS using ArcGIS.

Download PDF: Mapping and GeoSpatial Analysis Using ArcGIS

The paper focuses on imagery because that has been the most challenging data type to manage in the cloud, but the approaches discussed are general enough to apply to any type of data.

It not only provides architecture guidance on how to scale ArcGIS servers in the cloud but also provides step-by-step guidance on publishing map services in the cloud.

For more information on GeoApps in the AWS Cloud, see the presentation The Cloud as a Platform for Geo below:
GeoApps in the AWS Cloud - Jinesh Varia from Amazon Web Services


SOURCE
 

Tuesday 21 August 2012

Deploy a .NET Application to AWS Elastic Beanstalk with Amazon RDS Using Visual Studio

In this video, we walk you through deploying an application to AWS Elastic Beanstalk (link: http://aws.amazon.com/elasticbeanstalk/), configuring an Amazon RDS for SQL Server DB instance (link: http://aws.amazon.com/rds/), and managing your configuration, all from within Visual Studio. The AWS Toolkit for Visual Studio streamlines your development, deployment, and testing inside your familiar IDE.
To learn more about AWS Elastic Beanstalk and Amazon RDS, visit the AWS Elastic Beanstalk Developer Guide at http://docs.amazonwebservices.com/elasticbeanstalk/latest/dg/create_deploy_NE....



SOURCE



AWS Direct Connect - New Locations and Console Support

On August 13th, AWS announced new locations and console support for AWS Direct Connect. A great article by Jeff...

Did you know that you can use AWS Direct Connect to set up a dedicated 1 Gbps or 10 Gbps network connection from your existing data center or corporate office to AWS?

New Locations

Today we are adding two additional Direct Connect locations so that you have even more ways to reduce your network costs and increase network bandwidth throughput. You also have the potential for a more consistent experience. Here is the complete list of locations:
If you have your own equipment running at one of the locations listed above, you can use Direct Connect to optimize the connection to AWS. If your equipment is located somewhere else, you can work with one of our APN Partners supporting Direct Connect to establish a connection from your location to a Direct Connect location, and from there on to AWS.

Console Support

Up until now, you needed to fill in a web form to initiate the process of setting up a connection. In order to make the process simpler and smoother, you can now start the ordering process and manage your Connections through the AWS Management Console.
Here's a tour. You can establish a new connection by selecting the Direct Connect tab in the console:

AWS Direct connect Establish a new connection
 
After you confirm your choices you can place your order with one final click:

AWS Direct connect Establish a new connection
 
You can see all of your connections in a single (global) list:

AWS Direct connect connections
 
You can inspect the details of each connection:

AWS Direct connect - connection details
 
You can then create a Virtual Interface to your connection. The interface can be connected to one of your Virtual Private Clouds, or it can connect to the full set of AWS services:

AWS Direct connect

AWS Direct connect
 
You can even download a router configuration file tailored to the brand, model, and version of your router:

AWS Direct connect
 
Get Connected
And there you have it! Learn more about AWS Direct Connect and get started today.

SOURCE
 

All About AWS EBS Provisioned IOPS - Feature and Resources

AWS has recently announced the EBS Provisioned IOPS feature, a new Elastic Block Store volume type for running high-performance databases in the cloud. Provisioned IOPS volumes are designed to deliver predictable, high performance for I/O-intensive workloads, such as database applications, that rely on consistent and fast response times. With Provisioned IOPS, you can flexibly specify both volume size and volume performance, and Amazon EBS will consistently deliver the desired performance over the lifetime of the volume.
A simple comparison of standard volumes and Provisioned IOPS volumes:

Amazon EBS Standard volumes
  • Offer cost effective storage for applications with moderate or bursty I/O requirements.
  • Deliver approximately 100 IOPS on average with a best effort ability to burst to hundreds of IOPS.
  • Are also well suited for use as boot volumes, where the burst capability provides fast instance start-up times.
  • $0.10 per GB-month of provisioned storage
  • $0.10 per 1 million I/O requests

Amazon EBS Provisioned IOPS volumes
  • Provisioned IOPS volumes are designed to deliver predictable, high performance for I/O intensive workloads such as databases.
  • Amazon EBS currently supports up to 1000 IOPS per Provisioned IOPS volume, with higher limits coming soon.
  • Provisioned IOPS volumes are designed to deliver within 10% of the provisioned IOPS performance 99.9% of the time.
  • $0.125 per GB-month of provisioned storage
  • $0.10 per provisioned IOPS-month
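To make the pricing concrete, here is a rough back-of-the-envelope comparison for a 100 GB volume (a sketch using the list prices above; the standard volume's I/O count is an assumed workload, and real bills vary with usage):

# Monthly cost sketch for a 100 GB volume, using the list prices above.

# Standard volume: pay for storage plus actual I/O requests.
# Assume a workload averaging 100 IOPS around the clock for 30 days.
standard_storage = 0.10 * 100              # $10.00
io_requests = 100 * 86400 * 30             # 259.2 million requests
standard_io = 0.10 * (io_requests / 1e6)   # ~$25.92
print("Standard volume: $%.2f" % (standard_storage + standard_io))

# Provisioned IOPS volume: pay for storage plus the provisioned rate.
piops_storage = 0.125 * 100                # $12.50
piops_rate = 0.10 * 1000                   # $100.00 for 1000 provisioned IOPS
print("PIOPS volume:    $%.2f" % (piops_storage + piops_rate))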
AWS has compiled some interesting resources for the users:

Our recent release of the EBS Provisioned IOPS feature (blog post, explanatory video, EBS home page) has met with a very warm reception. Developers all over the world are already making use of this important and powerful new EC2 feature. I would like to make you aware of some other new resources and blog posts to help you get the most from your EBS volumes.
  • The EC2 FAQ includes answers to a number of important performance and architecture questions about Provisioned IOPS.
  • The EC2 API tools have been updated and now support the creation of Provisioned IOPS volumes. The newest version of the ec2-create-volume tool supports the --type and --iops options. For example, the following command will create a 500 GB volume with 1000 Provisioned IOPS (a Python equivalent via boto is sketched after this list):
    $ ec2-create-volume --size 500 --availability-zone us-east-1b --type io1 --iops 1000
  • Eric Hammond has written a detailed migration guide to show you how to convert a running EC2 instance to an EBS-Optimized EC2 instance with Provisioned IOPS volumes. It is a very handy post, and it also shows off the power of programmatic infrastructure.
  • I have been asked about the applicability of existing EC2 Reserved Instances to the new EBS-Optimized instances. Yes, they apply, and you pay only the additional hourly charge. Read our new FAQ entry to learn more.
  • I have also been asked about the availability of EBS-Optimized instances for more instance types. We intend to support other instance types based on demand. Please feel free to let us know what you need by posting a comment on this blog or in the EC2 forum.
  • The folks at CloudVertical have written a guide to understanding new AWS I/O options and costs.
  • The team at Stratalux wrote a very informative blog post, Putting Amazon's Provisioned IOPS to the Test. Their conclusion:
    "Based upon our tests PIOPS definitely provides much needed and much sought after performance improvements over standard EBS volumes. I’m glad to see that Amazon has heeded the calls of its customers and developed a persistent storage solution optimized for database workloads."
We have also put together a new guide to benchmarking Provisioned IOPS volumes. The guide shows you how to set up and run high-quality, repeatable benchmarks on Linux and Windows using the fio, Oracle Orion, and SQLIO tools. The guide will walk you through the following steps:
  • Launching an EC2 instance.
  • Creating Provisioned IOPS EBS volumes.
  • Attaching the volumes to the instance.
  • Creating a RAID from the volumes.
  • Installing the appropriate benchmark tool.
  • Benchmarking the I/O performance of your volumes.
  • Deleting the volumes and terminating the instance.
Since I like to try things for myself, I created six 100 GB volumes, each provisioned for 1000 IOPS:
EBS Provisioned IOPS
Then I booted up an EBS-Optimized EC2 instance, built a RAID, and ran fio. Here's what I saw in the AWS Management Console's CloudWatch charts after the run. Each volume was delivering 1000 IOPS, as provisioned:
EBS Provisioned IOPS
Here's an excerpt from the results:
fio_test_file: (groupid=0, jobs=32): err= 0: pid=23549: Mon Aug 6 14:01:14 2012
read : io=123240MB, bw=94814KB/s, iops=5925 , runt=1331000msec
clat (usec): min=356 , max=291546 , avg=5391.52, stdev=8448.68
lat (usec): min=357 , max=291547 , avg=5392.91, stdev=8448.68
clat percentiles (usec):
| 1.00th=[ 418], 5.00th=[ 450], 10.00th=[ 478], 20.00th=[ 548],
| 30.00th=[ 596], 40.00th=[ 668], 50.00th=[ 892], 60.00th=[ 1160],
| 70.00th=[ 3152], 80.00th=[10432], 90.00th=[20864], 95.00th=[26752],
| 99.00th=[29824], 99.50th=[30336], 99.90th=[31360], 99.95th=[31872],
| 99.99th=[37120]
Read the benchmarking guide to learn more about running the benchmarks and interpreting the results.

SOURCE for resources : http://aws.typepad.com/aws/2012/08/ebs-provisioned-iops-some-interesting-resources.html

Announcing AWS Elastic Beanstalk support for Python, and seamless database integration


It’s a good day to be a Python developer: AWS Elastic Beanstalk now supports Python applications! If you’re not familiar with Elastic Beanstalk, it’s the easiest way to deploy and manage scalable PHP, Java, .NET, and now Python applications on AWS. You simply upload your application, and Elastic Beanstalk automatically handles all of the details associated with deployment including provisioning of Amazon EC2 instances, load balancing, auto scaling, and application health monitoring.

Elastic Beanstalk supports Python applications that run on the familiar Apache HTTP Server and WSGI. In other words, you can run any Python application, including your Django applications or your Flask applications. Elastic Beanstalk supports a rich set of tools to help you develop faster. You can use eb and Git to quickly develop and deploy from the command line. You can also use the AWS Management Console to manage your application and configuration.

The Python release brings with it many platform improvements to help you get your application up and running more quickly and securely. Here are a few of the highlights:

Integration with Amazon RDS

Amazon RDS makes it easy to set up, operate, and scale a relational database in the cloud, making it a great fit for scalable web applications running on Elastic Beanstalk.

If your application requires a relational database, Elastic Beanstalk can create an Amazon RDS database instance to use with your application. The RDS database instance is automatically configured to communicate with the Amazon EC2 instances running your application.
 
AWS RDS Configuration Details

A console screenshot showing RDS configuration options when launching a new AWS Elastic Beanstalk environment.

Once the RDS database instance is provisioned, you can retrieve information about the database from your application using environment variables:



import os

if 'RDS_HOSTNAME' in os.environ:
    DATABASES = {
        'default': {
            'ENGINE': 'django.db.backends.mysql',
            'NAME': os.environ['RDS_DB_NAME'],
            'USER': os.environ['RDS_USER'],
            'PASSWORD': os.environ['RDS_PASSWORD'],
            'HOST': os.environ['RDS_HOSTNAME'],
            'PORT': os.environ['RDS_PORT'],
        }
    }


To learn more about using Amazon RDS with Elastic Beanstalk, visit “Using Amazon RDS with Python” in the Developer Guide.

Customize your Python Environment
You can customize the Python runtime for Elastic Beanstalk using a set of declarative text files within your application. If your application contains a requirements.txt file in its top-level directory, Elastic Beanstalk will automatically install the dependencies using pip.
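For example, a hypothetical requirements.txt for a Django application backed by MySQL might look like this (the package versions are illustrative):

Django==1.4.1
MySQL-python==1.2.3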

Elastic Beanstalk is also introducing a new configuration mechanism that allows you to install packages from yum, run setup scripts, and set environment variables. You simply create a “.ebextensions” directory inside your application and add a “python.config” file in it. Elastic Beanstalk loads this configuration file and installs the yum packages, runs any scripts, and then sets environment variables. Here is a sample configuration file that syncs the database for a Django application:


commands:
  syncdb:
    command: "django-admin.py syncdb --noinput"
    leader_only: true
option_settings:
  "aws:elasticbeanstalk:application:python:environment":
    DJANGO_SETTINGS_MODULE: "mysite.settings"
  "aws:elasticbeanstalk:container:python":
    WSGIPath: "mysite/wsgi.py"


Snapshot your logs

To help debug problems, you can easily take a snapshot of your logs from the AWS Management console. Elastic Beanstalk aggregates the top 100 lines from many different logs, including the Apache error log, to help you squash those bugs.
 
Elastic Beanstalk Console-snapshot-logs

The snapshot is saved to S3 and is automatically deleted after 15 minutes. Elastic Beanstalk can also automatically rotate the log files to Amazon S3 on an hourly basis so you can analyze traffic patterns and identify issues. To learn more, visit “Working with Logs” in the Developer Guide.

Support for Django and Flask

Using the customization mechanism above, you can easily deploy and run your Django and Flask applications on Elastic Beanstalk.
For more information about using Python and Elastic Beanstalk, visit the Developer Guide.