Thursday, 30 August 2012

Going to the Cloud - Cloud in Education

A very good infographic that looks at how schools and colleges are adopting 'the cloud', and how Adobe, IBM, Microsoft, and Google are responding with their respective cloud suites for educators.

Going to the Cloud




Wednesday, 29 August 2012

What's new in VMware vSphere 5.1

Today VMware announced vSphere 5.1. This posting will give an overview of the most interesting new features.

vSphere 5.1 will be available on 11 September 2012!

Some highlights are as follows:

  • Paul Maritz steps down as CEO after leading the company for 4 years. His successor is Pat Gelsinger
  • VMware is focused on building the architecture for Cloud Computing which is called the Software Defined Datacenter
  • vCloud Suite is announced, consisting of:
    • vSphere
    • vCloud Director
    • vCloud Networking and Security
    • Site Recovery Manager
    • vCenter Operations Manager
    • vFabric Application Director
    • vCloud APIs
    • vCloud Connector
    • vCenter Orchestrator

  • vSphere 5.1 is announced
  • vCloud Director 5.1 is announced
  • vCloud Networking and Security 5.1 is announced
  • vCenter Site Recovery Manager 5.1 is announced
  • vRAM is no more! VMware will move to a per-CPU pricing model
  • Cloud Ops, a new operating model for IT
  • Monster VMs will get bigger: 64 virtual CPUs and 1 million IOPS...per VM
  • Enhanced vMotion: Live migration without the need of shared storage!
  • New virtualized storage options
  • Create secure and logical networks using the new vCloud Networking & Security suite and VXLAN
  • vSphere 5.1 contains a full-featured, browser-based vSphere Client: the Web Client
  • The vCloud Director interface is now vSphere Web Client style
  • The vSphere Web Client now offers great extensibility options for 3rd party vendors
  • Use vFabric Application Director for deploying complex applications
  • Existing vSphere Enterprise Plus customers will get a free upgrade to the vCloud Suite
  • VMware recently acquired Nicira, a company that virtualizes networking

  • More detailed information on all of these announcements follows below:

    VMware has changed the features in the vSphere editions. All of the features below are now available in the Standard edition as well.

    SURPRISE !! VMware Will Join OpenStack

    Never say never. VMware is about to join the OpenStack Foundation, a group initially backed by other industry giants as a counterweight to VMware’s server virtualization dominance. Intel and NEC are also on deck to join as Gold OSF members.


    Just in time for VMworld, VMware is about to join the OpenStack Foundation as a Gold member, along with Intel and NEC, according to a post on the OpenStack Foundation Wiki. The applications for membership are on the agenda of the August 28 OpenStack Foundation meeting.

    A year ago, a VMware-OpenStack hookup would have been seen as unlikely. When Rackspace and NASA launched the OpenStack Project more than two years ago, it was seen as a competitive response to VMware’s server virtualization dominance inside company data centers and to Amazon’s heft in public cloud computing. Many tech companies including but not limited to Rackspace, IBM, Hewlett-Packard, Citrix, Red Hat and Microsoft saw VMware as a threat and were bound and determined to keep the company from extending its virtualization lock into the cloud.

    But, things change. VMware’s surprise acquisition of Nicira and DynamicOps last month, showed there might be a thaw in the air. For one thing, Nicira is an OpenStack player. By bringing Nicira and DynamicOps into the fold, VMware appeared to be much more willing to work with non-VMware-centric infrastructure, as GigaOM’s Derrick Harris reported at the time.

    This is a symbolic coup for OpenStack and its biggest boost since IBM and Red Hat officially joined as Platinum members in April. And it’s especially important since Citrix, a virtualization rival to VMware, undercut its own OpenStack participation last April by pushing CloudStack as an alternative open source cloud stack.

    OpenStack Gold members, which include Cloudscaling, Dell, MorphLabs, Cisco Systems, and NetApp, pay a fee pegged at 0.25 percent of their revenue — at least $50,000 but capped at $200,000 according to the foundation wiki. (VMware’s fee will be $66,666, according to the application, submitted by VMware CTO Steve Herrod, which is linked on the wiki post.) Platinum members — AT&T, Canonical, HP, Rackspace, IBM, Nebula, Red Hat, and SUSE – pay $500,000 per year with a 3-year minimum commitment.
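Going by the figures above, the Gold membership fee is just 0.25 percent of revenue clamped between the $50,000 floor and the $200,000 cap. A minimal sketch of that arithmetic (an illustration only, not the foundation's official calculation; the revenue figures are hypothetical):

```python
def gold_member_fee(annual_revenue):
    """OpenStack Gold member fee: 0.25% of revenue,
    floored at $50,000 and capped at $200,000."""
    return min(max(0.0025 * annual_revenue, 50_000), 200_000)

# A hypothetical $10M-revenue company pays the $50,000 floor;
# a $100M-revenue company hits the $200,000 cap.
print(gold_member_fee(10_000_000))   # 50000.0
print(gold_member_fee(100_000_000))  # 200000.0
```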


    Original Source:

    Introduction to Virtualisation - VMware

    This video webcast is designed to help those with little to no virtualization experience understand why virtualization and VMware are so important to driving down both capital and operational costs.

    Introduction to Virtualisation - VMware

    View the slides here

    SOURCE: InfoWorld Newsletter


    Tuesday, 28 August 2012

    AWS Cost Allocation For Customer Bills

    A good new feature by AWS to help customers keep control over costs, and a well-put blog post by Jeff...

    Growth Challenges

    You probably know how it goes when you put AWS to work for your company. You start small -- one Amazon S3 bucket for some backups, or one Amazon EC2 instance hosting a single web site or web application. Things work out well and before you know it, word of your success spreads to your team, and they start using it too. At some point the entire company jumps on board, and you become yet another AWS success story.

    As your usage of AWS grows, you stop charging it to your personal credit card and create an account for your company. You use IAM to control access to the AWS resources created and referenced by each of the applications.

    There's just one catch -- with all of those departments, developers, and applications making use of AWS from a single account, allocating costs to projects and to budgets is difficult because we didn't give you the necessary information. Some of our customers have told us that this cost allocation process can consume several hours of their time each month.

    Cost Allocation Via Tagging

    Extending the existing EC2 tagging system (keys and values), we are launching a new cost allocation system to make it easy for you to tag your AWS resources and to access billing data that is broken down by tag (or tags).

    With this release you can tag the following types of AWS resources for cost allocation purposes:
    • S3 buckets
    • EC2 Instances
    • EBS volumes
    • Reserved Instances
    • Spot Instance requests
    • VPN connections
    • Amazon RDS DB Instances
    • AWS CloudFormation Stacks
    Here's all that you need to do:
    1. Decide on Your Tagging Model - Typically, the key name identifies some axis that you care about and the key values identify the points along the axis. You could have a tag named Department, with values like Sales, Marketing, Development, QA, Engineering, and so forth. You could choose to align this with your existing accounting system. You can use multiple tags for cost allocation purposes, each of which represents an additional dimension of usage. If each department runs several AWS-powered applications (or stores lots of data in S3), you could add an Application tag, with the values representing all of the applications that are running on behalf of the department. You can use the tags to create your own custom hierarchy.
    2. Tag Your Resources - Apply the agreed-upon tags to your existing resources, and arrange to apply them to newly created resources as they appear. You can add up to ten tags per resource. You can do this from the AWS Management Console, the service APIs, the command line, or through Auto Scaling:

      AWS Cost Allocation For Customer Bills
      You can use CloudFormation to provision a set of related AWS resources and easily tag them.
    3. Tell AWS Which Tags Matter - Now you need to log in to the AWS Portal, sign up for billing reports, and tell the AWS billing system which tag keys are meaningful for cost allocation purposes by using the Manage Cost Allocation Report option:

      AWS Cost Allocation For Customer Bills - Manage Report
      AWS Cost Allocation For Customer Bills - Select Tags
      You can choose to include certain tags and to exclude others.
    4. Access Billing Data - The estimated billing data is generated multiple times per day and the month-end charges are generated within three days of the end of the month. You can access this data by enabling programmatic access and arranging for it to be delivered to your S3 bucket.
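The tagging model from steps 1 and 2 can be sketched in plain Python. The Department/Application keys and the ten-tag limit come from the text; the resource contents are invented:

```python
MAX_TAGS_PER_RESOURCE = 10  # per-resource limit stated in step 2

def tag_resource(tags, new_tags):
    """Merge new_tags into an existing tag set, enforcing the limit."""
    merged = {**tags, **new_tags}
    if len(merged) > MAX_TAGS_PER_RESOURCE:
        raise ValueError("a resource may carry at most 10 tags")
    return merged

# A hypothetical EC2 instance tagged along two dimensions:
instance_tags = tag_resource({}, {"Department": "Marketing"})
instance_tags = tag_resource(instance_tags, {"Application": "Webinars"})
print(instance_tags)  # {'Department': 'Marketing', 'Application': 'Webinars'}
```

In practice you would apply these key/value pairs through the AWS Management Console, the service APIs, or the command line, as the step above describes.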

    Data Processing

    The Cost Allocation Report will contain one additional column for each of the tag keys that you selected in step 3. The corresponding tag value (if any) will be included in the appropriate column of the data:

    AWS Cost Allocation For Customer Bills
    In the Cost Allocation Report above, the relevant keys were Owner, Stack, Cost Center, Application, and Project. The column will be blank if the AWS resource doesn't happen to have a value for the key. Data transfer and request charges are also included for tagged resources. In effect, these charges inherit the tags from the associated resource.

    Once you have this data, you can feed it in to your own accounting system or you can slice and dice it any way you'd like for reporting or visualization purposes. For example, you could create a pivot table and aggregate the data along one or more dimensions:

    AWS Cost Allocation For Customer Bills
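As a sketch of that slice-and-dice step, here is a minimal aggregation of a cost allocation report by one tag column using Python's csv module. The column names follow the report example above; the rows and cost-center values are invented:

```python
import csv
import io
from collections import defaultdict

# Invented rows in the shape of a Cost Allocation Report,
# with one extra column per tag key selected in step 3.
report = io.StringIO(
    "Cost,Cost Center,Application\n"
    "12.50,CC-101,Webinars\n"
    "7.25,CC-101,Portal\n"
    "3.00,CC-202,Portal\n"
)

# Pivot: total cost per Cost Center tag value.
totals = defaultdict(float)
for row in csv.DictReader(report):
    totals[row["Cost Center"]] += float(row["Cost"])

print(dict(totals))  # {'CC-101': 19.75, 'CC-202': 3.0}
```

The same pattern extends to multiple dimensions by keying on a tuple of tag columns.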


    Identifying Workloads for the Cloud

    Superb Information by Rightscale... Read on...

    Identifying workloads to move to the cloud can be tricky. You have dozens or hundreds of apps running in your organization, and now that you’ve seen the operational efficiencies and agility available to you in the cloud, you’re tempted to move as many of them to the cloud as quickly as possible. As you’ll see in the examples below, cloud computing is indeed a good fit for many common workloads.

    I firmly believe that infrastructure-as-a-service (IaaS) cloud is for every organization, but not for every application. The reality is that some applications just aren’t a good fit for the ephemeral and dynamic environment of the cloud. Still others have very specific environmental requirements that make them ill suited. Read on as I explore more about what you should consider before earmarking a workload for the cloud.

    3 Quick Criteria for a Good Fit

    While each application is unique, and it’s important to apply your own lens when evaluating your cloud strategy, there are some rules of thumb that should help identify applications that are winning choices for cloud:

    Unpredictable load or potential for explosive growth: Whenever your app is public facing, it has the potential to be wildly popular. Social games, eCommerce sites, blogs and software-as-a-service (SaaS) products fall into this category. If you release the next Farmville™ and your traffic spikes, you can scale up and down in the cloud according to demand, avoiding a “success disaster” and never over-provisioning your infrastructure.

    Partial utilization: When traffic fluctuates – say with daily cycles of playing or shopping, or with occasional, compute-intensive batch processing – you can spin up extra servers in the cloud during the peaks and spin them down afterwards.

    Easy parallelization: Applications like media streaming can be scaled horizontally and are generally a good use case for the cloud, because they scale out rather than up.
    Finally, keep in mind the ideal of cloud computing as a way of using multiple resource pools – public cloud, private cloud, hybrid, your internal data center – not choosing one over the others. RightScale lets you see and manage all of them through one interface with a single set of tools and best practices.

    3 Ideal Cloud Workloads

    INFOGRAPHIC: Is The Future Of Cloud Computing Open Source? A Few Things To Consider

    Companies are embracing cloud computing solutions because of their flexibility, scalability and cost-effectiveness, and those who have successfully integrated the cloud into their infrastructure have found it quite economical. They can expand and contract, and add and remove services as per requirement, giving them a lot of control over the resources being used and the funds being spent on those resources. This highly controllable environment not only cuts the costs of services, but also saves funds that are spent on the infrastructure of the company.

    Replacement of Personal Computers with Personal Clouds

    Cloud computing is not only becoming popular in business, but also among individual consumers. With the passage of time, personal computers are being replaced by personal clouds, and more and more companies are offering personal cloud services. People prefer to store their images, videos and documents online, both as a backup and to make them secure. Storing data on personal clouds makes it available anytime, anywhere. You just need a computing device and an Internet connection, and you can access all your photos, videos and documents.

    Stability, Scalability and Reliability of Open-Source Software

    Open-source software is becoming popular on an enterprise level because of its stability, scalability and reliability. Companies love to use open-source technologies because they are highly customizable, secure, reliable and accountable. With proprietary software, we are highly dependent on the software company for its development and support. But for open-source, we can find huge support from developers across the world, and we can tweak it according to our needs. Just hire a team of developers, and there you go.

    Lessons Learned from Linux and Android

    Monday, 27 August 2012

    Getting Started with IAM Roles for EC2 Instances

    AWS Identity and Access Management (IAM) helps you securely control access to Amazon Web Services and your account resources. IAM can also keep your account credentials private. With IAM, you can create multiple IAM users under the umbrella of your AWS account or enable temporary access through identity federation with your corporate directory. In some cases, you can also enable access to resources across AWS accounts.

    Without IAM, however, you must either create multiple AWS accounts—each with its own billing and subscriptions to AWS products—or your employees must share the security credentials of a single AWS account. In addition, without IAM, you cannot control the tasks a particular user or system can do and what AWS resources they might use.

    AWS has recently launched IAM Roles for EC2 Instances. A role is an entity that has a set of permissions that can be assumed by another entity. Use roles to enable applications running on your Amazon EC2 instances to securely access your AWS resources. You grant a specific set of permissions to a role, use the role to launch an EC2 instance, and let EC2 automatically handle AWS credential management for your applications that run on Amazon EC2. Use AWS Identity and Access Management (IAM) to create a role and to grant permissions to the role.
    IAM roles for Amazon EC2 provide:
    • AWS access keys for applications running on Amazon EC2 instances
    • Automatic rotation of the AWS access keys on the Amazon EC2 instance
    • Granular permissions for applications running on Amazon EC2 instances that make requests to your AWS services
    The video below demonstrates the basic workflow of:

    Create new role AWS IAM Workflow
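Under the hood, an application on the instance retrieves those automatically rotated keys from the instance metadata service (`http://169.254.169.254/latest/meta-data/iam/security-credentials/<role-name>`). The snippet below mimics the shape of that JSON response; the field names follow AWS's documented format, but the values here are fake:

```python
import json

# Fake payload shaped like the instance metadata
# security-credentials response for a role.
metadata_response = """{
  "Code": "Success",
  "AccessKeyId": "ASIAEXAMPLE",
  "SecretAccessKey": "fake-secret-key",
  "Token": "fake-session-token",
  "Expiration": "2012-08-27T12:00:00Z"
}"""

creds = json.loads(metadata_response)
# An SDK would sign API requests with these temporary credentials
# and fetch fresh ones before the Expiration time.
print(creds["AccessKeyId"])  # ASIAEXAMPLE
```

This is exactly why your application code no longer needs hard-coded access keys when it runs under a role.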


    For more help, refer to the AWS documentation for IAM here.
    For other AWS documentation, please refer to the quick links provided in the blog's right-side panel.

    Infographic : Evolution of Computer Languages

    All the cloud applications you use on the Internet today are written in a specific computer language. What you see as a nice icon on the front end looks like a bunch of code on the back end. It’s interesting to see where computer languages started and how they have evolved over time. There are now a series of computer languages to choose from and billions of lines of code. Check out the infographic below to see the computer language timeline and read some fun facts about code along the way.


    Infographic: Demystifying AWS - Revealing Behind the scenes usage

    Amazon Web Services (AWS) is the biggest public cloud around, yet what goes on behind the scenes remains a mystery.

    Read on for a good infographic by the Newvem blog!

    "For heavy users, such as enterprise level CIOs, AWS’s “Reserved Instances” are a cost effective model to scale their cloud activity and benefit from the full service offering that Amazon provides.

    The infographic is based on analysis made by our Reserved Instance Decision Making Tool. This advanced analytics tool can help enterprise CIOs to capture the added value and benefit by:
    • Ensuring that Reserved Instances meet cost and performance expectations.
    • Identifying consistent On-Demand usage that can be shifted to Reserved Instances.
    • Tracking Reserved Instance expiration dates and recommending actions for renewal and scaling up and down.



    Friday, 24 August 2012

    AWS New Whitepaper: Mapping and GeoSpatial Analysis in the Cloud Using ArcGIS

    Great new whitepaper by Jinesh Varia...

    Esri is one of the leaders in the Geographic Information Systems (GIS) industry and one of the largest privately held software companies focused on mapping and geospatial applications in the world with offices in more than 100 countries. Both public and private sector organizations use Esri technology to analyze and manage their geographic information and make better decisions – uses range from planning cities and improving the quality of life for residents, to site selection, customer analytics, and streamlining logistics.

    Esri and AWS have been working together since 2008 to bring the power of GIS to the masses. The AWS Partner Team recently attended the 2012 Esri International User Conference with over 14,000 attendees, 300 exhibitors and a large number of ecosystem partners. A cloud computing theme dominated the conference.
    Esri and AWS have co-authored a whitepaper, "Mapping and GeoSpatial Analysis Using ArcGIS", to provide users who are interested in performing spatial analysis on their data with complementary datasets.

    The paper discusses how users can publish and analyze imagery data (such as satellite imagery, or aerial imagery) and create and publish tile cache map services from spatially referenced data (such as data with x/y points, lines, polygons) in AWS using ArcGIS.

    Download PDF: Mapping and GeoSpatial Analysis Using ArcGIS

    The paper focuses on imagery because that has been the most challenging data type to manage in the cloud, but the approaches discussed are general enough to apply to any type of data.

    It not only provides architecture guidance on how to scale ArcGIS servers in the cloud but also provides step-by-step guidance on publishing map services in the cloud.

    For more information on GeoApps in the AWS Cloud, see the presentation -
    The Cloud as a Platform for Geo below:
    GeoApps in the AWS Cloud - Jinesh Varia from Amazon Web Services


    Wednesday, 22 August 2012

    Automating Linux Installation and configuration with Kickstart

    Automating Linux Installation and configuration with Kickstart

    If you work for an IT support company, you regularly have to install OSes like CentOS, Fedora and Red Hat on servers, desktop computers or even virtual machines.
    This guide explains how to automate the install process using a simple Kickstart file.

    Read more for the very well explained guide here.
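As a taste of what the guide covers, a minimal Kickstart file looks roughly like this. The package selection, password hash, and partitioning choices are placeholders you would adapt to your own environment:

```
# minimal ks.cfg sketch - all values are placeholders
install
lang en_US.UTF-8
keyboard us
timezone UTC
rootpw --iscrypted $6$examplehash
clearpart --all --initlabel
autopart
reboot

%packages
@core
%end

%post
# runs after installation completes
echo "kickstart finished" >> /root/ks-post.log
%end
```

Point the installer at this file (for example via a `ks=` boot parameter) and the whole install runs unattended.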

    Tuesday, 21 August 2012

    Deploy a .NET Application to AWS Elastic Beanstalk with Amazon RDS Using Visual Studio

    In this video, we walk you through deploying an application to AWS Elastic Beanstalk, configuring an Amazon RDS for SQL Server DB instance, and managing your configuration, all from the confines of Visual Studio. The AWS Toolkit for Visual Studio streamlines your development, deployment, and testing inside your familiar IDE.
    To learn more about AWS Elastic Beanstalk and Amazon RDS, visit the AWS Elastic Beanstalk Developer Guide.


    Amazon CloudSearch - Start Searching in One Hour for Less Than $100 / Month

    Extract from Amazon Web Service Evangelist Jeff Barr's CloudSearch blog post for more information about how you can start searching in an hour for less than $100 a month...

    Continuing along in our quest to give you the tools that you need to build ridiculously powerful web sites and applications in no time flat at the lowest possible cost, I'd like to introduce you to Amazon CloudSearch. If you have ever searched, you've already used the technology that underlies CloudSearch. You can now have a very powerful and scalable search system (indexing and retrieval) up and running in less than an hour.

    You, sitting in your corporate cubicle, your coffee shop, or your dorm room, now have access to search technology at a very affordable price. You can start to take advantage of many years of Amazon R&D in the search space for just $0.12 per hour (I'll talk about pricing in depth later).

    What is Search?

    Search plays a major role in many web sites and other types of online applications. The basic model is seemingly simple. Think of your set of documents or your data collection as a book or a catalog, composed of a number of pages. You know that you can find the desired content quickly and efficiently by simply consulting the index.

    Search does the same thing by indexing each document in a way that facilitates rapid retrieval. You enter some terms into a search box and the site responds (rather quickly if you use CloudSearch) with a list of pages that match the search terms.

    As is the case with many things, this simple model masks a lot of complexity and might raise a lot of questions in your mind. For example:
    1. How efficient is the search? Did the search engine simply iterate through every page, looking for matches, or is there some sort of index?
    2. The search results were returned in the form of an ordered list. What factor(s) determined which documents were returned, and in what order (commonly known as ranking)? How are the results grouped?
    3. How forgiving or expansive was the search? Did a search for "dogs" return results for "dog?" Did it return results for "golden retriever," or "pet?"
    4. What kinds of complex searches or queries can be used? Does a search for "dog training" return the expected results? Can you search for "dog" in the Title field and "training" in the Description?
    5. How scalable is the search? What if there are millions or billions of pages? What if there are thousands of searches per hour? Is there enough storage space?
    6. What happens when new pages are added to the collection, or old pages are removed? How does this affect the search results?
    7. How can you efficiently navigate through and explore search results? Can you group and filter the search results in ways that take advantage of multiple named fields (often known as faceted search)?
    Needless to say, things can get very complex very quickly. Even if you can write code to do some or all of this yourself, you still need to worry about the operational aspects. We know that scaling a search system is non-trivial. There are lots of moving parts, all of which must be designed, implemented, instantiated, scaled, monitored, and maintained. As you scale, algorithmic complexity often comes in to play; you soon learn that algorithms and techniques which were practical at the beginning aren't always practical at scale.

    What is Amazon CloudSearch?

    Amazon CloudSearch is a fully managed search service in the cloud. You can set it up and start processing queries in less than an hour, with automatic scaling for data and search traffic, all for less than $100 per month.

    CloudSearch hides all of the complexity and all of the search infrastructure from you. You simply provide it with a set of documents and decide how you would like to incorporate search into your application.

    You don't have to write your own indexing, query parsing, query processing, results handling, or any of that other stuff. You don't need to worry about running out of disk space or processing power, and you don't need to keep rewriting your code to add more features.

    With CloudSearch, you can focus on your application layer. You upload your documents, CloudSearch indexes them, and you can build a search experience that is custom-tailored to the needs of your customers.

    How Does it Work?

    The Amazon CloudSearch model is really simple, but don't confuse simple with simplistic -- there's a lot going on behind the scenes!

    Here's all you need to do to get started (you can perform these operations from the AWS Management Console, the CloudSearch command line tools, or through the CloudSearch APIs):
    1. Create and configure a Search Domain. This is a data container and a related set of services. It exists within a particular Availability Zone of a single AWS Region (initially US East).
    2. Upload your documents. Documents can be uploaded as JSON or XML that conforms to our Search Document Format (SDF). Uploaded documents will typically be searchable within seconds. You can, if you'd like, send data over an HTTPS connection to protect it while it is in transit.
    3. Perform searches.
    There are plenty of options and goodies, but that's all it takes to get started.
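A document batch for step 2 is just structured JSON. Here is a sketch of an SDF-style "add" batch; the overall shape (`type`, `id`, `version`, `lang`, `fields`) follows AWS's documented format of the time, while the document contents are invented:

```python
import json

# Two hypothetical documents in the shape of an SDF "add" batch.
batch = [
    {
        "type": "add",
        "id": "movie-1",
        "version": 1,
        "lang": "en",
        "fields": {"title": "Dog Training 101", "genre": "documentary"},
    },
    {
        "type": "add",
        "id": "movie-2",
        "version": 1,
        "lang": "en",
        "fields": {"title": "Golden Retrievers", "genre": "family"},
    },
]

payload = json.dumps(batch)
# This payload would be POSTed to the search domain's document
# endpoint (over HTTPS if the data should be protected in transit).
print(len(json.loads(payload)))  # 2
```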

    Amazon CloudSearch applies data updates continuously, so newly changed data becomes searchable in near real-time. Your index is stored in RAM to keep throughput high and to speed up document updates. You can also tell CloudSearch to re-index your documents; you'll need to do this after changing certain configuration options, such as stemming (converting variations of a word to a base word, such as "dogs" to "dog") or stop words (very common words that you don't want to index).
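The stemming and stopword ideas can be illustrated with a deliberately naive sketch; CloudSearch's real text processing is far more sophisticated than this toy, but the principle is the same:

```python
STOPWORDS = {"the", "a", "an", "of"}  # very common words we don't index

def naive_stem(word):
    """Toy stemmer: strips a trailing 's' ("dogs" -> "dog")."""
    return word[:-1] if word.endswith("s") and len(word) > 3 else word

def index_terms(text):
    """Lowercase, drop stopwords, stem - roughly what indexing does."""
    return [naive_stem(w) for w in text.lower().split() if w not in STOPWORDS]

print(index_terms("The training of dogs"))  # ['training', 'dog']
```

A query for "dog" would now match a document containing "dogs", which is exactly the forgiving behavior question 3 above asks about.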
    Amazon CloudSearch has a number of advanced search capabilities including faceting and fielded search:

    Faceting allows you to categorize your results into sub-groups, which can be used as the basis for another search. You could search for "umbrellas" and use a facet to group the results by price, such as $1-$10, $10-$20, $20-$50, and so forth. CloudSearch will even return document counts for each sub-group.
    Fielded searching allows you to search on a particular attribute of a document. You could locate movies in a particular genre or actor, or products within a certain price range.
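A price facet like the umbrella example can be sketched as simple bucketing with per-bucket counts; the buckets mirror the dollar ranges mentioned above, and the result prices are invented:

```python
from collections import Counter

PRICE_BUCKETS = [(1, 10), (10, 20), (20, 50)]  # dollar ranges from the example

def price_facet(prices):
    """Count search results per price range, like a CloudSearch facet."""
    facet = Counter()
    for p in prices:
        for lo, hi in PRICE_BUCKETS:
            if lo <= p < hi:
                facet[f"${lo}-${hi}"] += 1
                break
    return dict(facet)

# Hypothetical prices from an "umbrellas" search:
print(price_facet([4.99, 8.50, 12.00, 35.00]))
# {'$1-$10': 2, '$10-$20': 1, '$20-$50': 1}
```

Each bucket label could then be offered as a click-to-refine link, with the count shown next to it.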

    Search Scaling
    Behind the scenes, CloudSearch stores data and processes searches using search instances. Each instance has a finite amount of CPU power and RAM. As your data expands, CloudSearch will automatically launch additional search instances and/or scale to larger instance types. As your search traffic expands beyond the capacity of a single instance, CloudSearch will automatically launch additional instances and replicate the data to the new instance. If you have a lot of data and a high request rate, CloudSearch will automatically scale in both dimensions for you.

    Amazon CloudSearch will automatically scale your search fleet up to a maximum of 50 search instances. We'll be increasing this limit over time; if you have an immediate need for more than 50 instances, please feel free to contact us and we'll be happy to help.

    The net-net of all of this automation is that you don't need to worry about having enough storage capacity or processing power. CloudSearch will take care of it for you, and you'll pay only for what you use.

    Pricing Model

    The Amazon CloudSearch pricing model is straightforward:

    You'll be billed based on the number of running search instances. There are three search instance sizes (Small, Large, and Extra Large) at prices ranging from $0.12 to $0.68 per hour (these are US East Region prices, since that's where we are launching CloudSearch).

    There's a modest charge for each batch of uploaded data. If you change configuration options and need to re-index your data, you will be billed $0.98 for each Gigabyte of data in the search domain.
    There's no charge for in-bound data transfer, data transfer out is billed at the usual AWS rates, and you can transfer data to and from your Amazon EC2 instances in the Region at no charge.
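Tying those numbers together, a back-of-the-envelope monthly estimate for a single Small search instance, using the hourly rate quoted above and roughly 730 hours per month (the 5 GB domain size and single re-index are hypothetical):

```python
SMALL_RATE = 0.12      # $/hour for a Small search instance (US East)
HOURS_PER_MONTH = 730  # ~ 24 * 365 / 12
REINDEX_RATE = 0.98    # $/GB of data when you re-index

instance_cost = SMALL_RATE * HOURS_PER_MONTH
reindex_cost = REINDEX_RATE * 5  # hypothetical 5 GB domain, one re-index

print(round(instance_cost, 2))                 # 87.6
print(round(instance_cost + reindex_cost, 2))  # 92.5
```

Either way the total stays under the $100/month figure in the post's title.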

    Advanced Searching

    Like the other Amazon Web Services, CloudSearch allows you to get started with a modest effort and to add richness and complexity over time. You can easily implement advanced features such as faceted search, free text search, Boolean search expressions, customized relevance ranking, field-based sorting and searching, and text processing options such as stopwords, synonyms, and stemming.

    CloudSearch Programming

    You can interact with CloudSearch through the AWS Management Console, a complete set of Amazon CloudSearch APIs, and a set of command line tools. You can easily create, configure, and populate a search domain through the AWS Management Console.
    Here's a tour, starting with the welcome screen:

    Amazon CloudSearch
    You start by creating a new Search Domain:

    Amazon CloudSearch
    You can then load some sample data. It can come from local files, an Amazon S3 bucket, or several other sources:

    Amazon CloudSearch
    Here's how you choose an S3 bucket (and an optional prefix to limit which documents will be indexed):

    Amazon CloudSearch
    You can also configure your initial set of index fields:

    Amazon CloudSearch
    You can also create access policies for the CloudSearch APIs:

    Amazon CloudSearch
    Your search domain will be initialized and ready to use within twenty minutes:

    Amazon CloudSearch
    Processing your documents is the final step in the initialization process:

    Amazon CloudSearch
    After your documents have been processed you can perform some test searches from the console:

    Amazon CloudSearch
    The CloudSearch console also provides you with full control over a number of indexing options including stopwords, stemming, and synonyms:

    CloudSearch in Action
    Some of our early customers have already deployed some applications powered by CloudSearch. Here's a sampling:
    • Search Technologies has used CloudSearch to index Wikipedia (see the demo).
    • NewsRight is using CloudSearch to deliver search for news content, usage and rights information to over 1,000 publications.
    • is using CloudSearch to power their social music discovery website.
    • CarDomain is powering search on their social networking website for car enthusiasts.
    • Sage Bionetworks is powering search on their data-driven collaborative biological research website.
    • SmugMug is using CloudSearch to deliver search on their website for over a billion photos.


      AWS Direct Connect - New Locations and Console Support

      On 13 August, AWS announced new locations and console support for AWS Direct Connect. Great article by Jeff...

      Did you know that you can use AWS Direct Connect to set up a dedicated 1 Gbps or 10 Gbps network connection from your existing data center or corporate office to AWS?

      New Locations

      Today we are adding two additional Direct Connect locations so that you have even more ways to reduce your network costs and increase network bandwidth throughput. You also have the potential for a more consistent experience. Here is the complete list of locations:
      If you have your own equipment running at one of the locations listed above, you can use Direct Connect to optimize the connection to AWS. If your equipment is located somewhere else, you can work with one of our APN Partners supporting Direct Connect to establish a connection from your location to a Direct Connect location, and from there on to AWS.

      Console Support

      Up until now, you needed to fill in a web form to initiate the process of setting up a connection. To make the process simpler and smoother, you can now start the ordering process and manage your connections through the AWS Management Console.
      Here's a tour. You can establish a new connection by selecting the Direct Connect tab in the console:

      AWS Direct Connect - Establish a new connection
      After you confirm your choices you can place your order with one final click:

      AWS Direct Connect - Establish a new connection
      You can see all of your connections in a single (global) list:

      AWS Direct Connect - connections
      You can inspect the details of each connection:

      AWS Direct Connect - connection details
      You can then create a Virtual Interface to your connection. The interface can be connected to one of your Virtual Private Clouds, or it can connect to the full set of AWS services:

      AWS Direct Connect

      AWS Direct Connect
      You can even download a router configuration file tailored to the brand, model, and version of your router:

      AWS Direct Connect
      Get Connected
      And there you have it! Learn more about AWS Direct Connect and get started today.


      ALL about AWS EBS Provisioned IOPS - feature and resources

      AWS has recently announced EBS Provisioned IOPS feature, a new Elastic Block Store volume type for running high performance databases in the cloud. Provisioned IOPS are designed to deliver predictable, high performance for I/O intensive workloads, such as database applications, that rely on consistent and fast response times. With Provisioned IOPS, you can flexibly specify both volume size and volume performance, and Amazon EBS will consistently deliver the desired performance over the lifetime of the volume.
      A simple comparison between standard volumes and Provisioned IOPS volumes:

      Amazon EBS Standard volumes
      • Offer cost effective storage for applications with moderate or bursty I/O requirements.
      • Deliver approximately 100 IOPS on average with a best effort ability to burst to hundreds of IOPS.
      • Are also well suited for use as boot volumes, where the burst capability provides fast instance start-up times.
      • $0.10 per GB-month of provisioned storage
      • $0.10 per 1 million I/O requests

      Amazon EBS Provisioned IOPS volumes
      • Provisioned IOPS volumes are designed to deliver predictable, high performance for I/O intensive workloads such as databases.
      • Amazon EBS currently supports up to 1000 IOPS per Provisioned IOPS volume, with higher limits coming soon.
      • Provisioned IOPS volumes are designed to deliver within 10% of the provisioned IOPS performance 99.9% of the time.
      • $0.125 per GB-month of provisioned storage
      • $0.10 per provisioned IOPS-month
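      Based on the price points above, a back-of-the-envelope monthly cost comparison can be sketched as follows (prices as listed at announcement time; illustrative only, and excluding instance and data-transfer charges):

```python
def standard_monthly_cost(size_gb, million_io_requests):
    """Standard volume: $0.10 per GB-month plus $0.10 per million I/O requests."""
    return 0.10 * size_gb + 0.10 * million_io_requests

def piops_monthly_cost(size_gb, provisioned_iops):
    """Provisioned IOPS volume: $0.125 per GB-month plus $0.10 per provisioned IOPS-month."""
    return 0.125 * size_gb + 0.10 * provisioned_iops

# A 100 GB volume provisioned for 1000 IOPS:
# 0.125 * 100 + 0.10 * 1000 = 112.50 per month
```

      As the example shows, at high IOPS levels the provisioned-IOPS charge dominates the storage charge, so it pays to provision only the performance your workload actually needs.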
      AWS has compiled some interesting resources for its users:

      Our recent release of the EBS Provisioned IOPS feature (blog post, explanatory video, EBS home page) has met with a very warm reception. Developers all over the world are already making use of this important and powerful new EC2 feature. I would like to make you aware of some other new resources and blog posts to help you get the most from your EBS volumes.
      • The EC2 FAQ includes answers to a number of important performance and architecture questions about Provisioned IOPS.
      • The EC2 API tools have been updated and now support the creation of Provisioned IOPS volumes. The newest version of the ec2-create-volume tool supports the --type and --iops options. For example, the following command will create a 500 GB volume with 1000 Provisioned IOPS:
        $ ec2-create-volume --size 500 --availability-zone us-east-1b --type io1 --iops 1000
      • Eric Hammond has written a detailed migration guide to show you how to convert a running EC2 instance to an EBS-Optimized EC2 instance with Provisioned IOPS volumes. It is a very handy post, and it also shows off the power of programmatic infrastructure.
      • I have been asked about the applicability of existing EC2 Reserved Instances to the new EBS-Optimized instances. Yes, they apply, and you pay only the additional hourly charge. Read our new FAQ entry to learn more.
      • I have also been asked about the availability of EBS-Optimized instances for more instance types. We intend to support other instance types based on demand. Please feel free to let us know what you need by posting a comment on this blog or in the EC2 forum.
      • The folks at CloudVertical have written a guide to understanding new AWS I/O options and costs.
      • The team at Stratalux wrote a very informative blog post, Putting Amazon's Provisioned IOPS to the Test. Their conclusion:
        "Based upon our tests PIOPS definitely provides much needed and much sought after performance improvements over standard EBS volumes. I’m glad to see that Amazon has heeded the calls of its customers and developed a persistent storage solution optimized for database workloads."
      EBS Provisioned IOPS
      We have also put together a new guide to benchmarking provisioned IOPS volumes. The guide shows you how to set up and run high-quality, repeatable benchmarks on Linux and Windows using the fio, Oracle Orion, and SQLIO tools. The guide will walk you through the following steps:
      • Launching an EC2 instance.
      • Creating Provisioned IOPS EBS volumes.
      • Attaching the volumes to the instance.
      • Creating a RAID from the volumes.
      • Installing the appropriate benchmark tool.
      • Benchmarking the I/O performance of your volumes.
      • Deleting the volumes and terminating the instance.
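      The RAID and benchmark steps above can be sketched as the command lines one might assemble; the device names (/dev/sdf through /dev/sdk), array name, and fio options here are assumptions for illustration, not the guide's exact invocation:

```python
# Compose mdadm and fio command lines for striping six attached
# PIOPS volumes into a RAID 0 array. Device names are assumptions.
devices = [f"/dev/sd{c}" for c in "fghijk"]

mdadm_cmd = (
    f"mdadm --create /dev/md0 --level=0 --raid-devices={len(devices)} "
    + " ".join(devices)
)

# 16 KB random reads from 32 concurrent jobs, roughly matching the
# job count and block size implied by the fio output shown below.
fio_cmd = (
    "fio --name=fio_test_file --direct=1 --rw=randread --bs=16k "
    "--numjobs=32 --time_based --runtime=180 --filename=/dev/md0"
)
```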
      Since I like to try things for myself, I created six 100 GB volumes, each provisioned for 1000 IOPS:
      EBS Provisioned IOPS
      Then I booted up an EBS-Optimized EC2 instance, built a RAID, and ran fio. Here's what I saw in the AWS Management Console's CloudWatch charts after the run. Each volume was delivering 1000 IOPS, as provisioned:
      EBS Provisioned IOPS
      Here's an excerpt from the results:
      fio_test_file: (groupid=0, jobs=32): err= 0: pid=23549: Mon Aug 6 14:01:14 2012
      read : io=123240MB, bw=94814KB/s, iops=5925 , runt=1331000msec
      clat (usec): min=356 , max=291546 , avg=5391.52, stdev=8448.68
      lat (usec): min=357 , max=291547 , avg=5392.91, stdev=8448.68
      clat percentiles (usec):
      | 1.00th=[ 418], 5.00th=[ 450], 10.00th=[ 478], 20.00th=[ 548],
      | 30.00th=[ 596], 40.00th=[ 668], 50.00th=[ 892], 60.00th=[ 1160],
      | 70.00th=[ 3152], 80.00th=[10432], 90.00th=[20864], 95.00th=[26752],
      | 99.00th=[29824], 99.50th=[30336], 99.90th=[31360], 99.95th=[31872],
      | 99.99th=[37120]
      Read the benchmarking guide to learn more about running the benchmarks and interpreting the results.

      SOURCE for resources:

      Announcing AWS Elastic Beanstalk support for Python, and seamless database integration

      It’s a good day to be a Python developer: AWS Elastic Beanstalk now supports Python applications! If you’re not familiar with Elastic Beanstalk, it’s the easiest way to deploy and manage scalable PHP, Java, .NET, and now Python applications on AWS. You simply upload your application, and Elastic Beanstalk automatically handles all of the details associated with deployment including provisioning of Amazon EC2 instances, load balancing, auto scaling, and application health monitoring.

      Elastic Beanstalk supports Python applications that run on the familiar Apache HTTP server and WSGI. In other words, you can run any Python application, including your Django or Flask applications. Elastic Beanstalk supports a rich set of tools to help you develop faster. You can use eb and Git to quickly develop and deploy from the command line. You can also use the AWS Management Console to manage your application and configuration.

      The Python release brings with it many platform improvements to help you get your application up and running more quickly and securely. Here are a few of the highlights:

      Integration with Amazon RDS

      Amazon RDS makes it easy to set up, operate, and scale a relational database in the cloud, making it a great fit for scalable web applications running on Elastic Beanstalk.

      If your application requires a relational database, Elastic Beanstalk can create an Amazon RDS database instance to use with your application. The RDS database instance is automatically configured to communicate with the Amazon EC2 instances running your application.
      AWS RDS Configuration Details

      A console screenshot showing RDS configuration options when launching a new AWS Elastic Beanstalk environment.

      Once the RDS database instance is provisioned, you can retrieve information about the database from your application using environment variables:

      import os

      if 'RDS_HOSTNAME' in os.environ:
          DATABASES = {
              'default': {
                  'ENGINE': 'django.db.backends.mysql',
                  'NAME': os.environ['RDS_DB_NAME'],
                  'USER': os.environ['RDS_USER'],
                  'PASSWORD': os.environ['RDS_PASSWORD'],
                  'HOST': os.environ['RDS_HOSTNAME'],
                  'PORT': os.environ['RDS_PORT'],
              }
          }

      To learn more about using Amazon RDS with Elastic Beanstalk, visit “Using Amazon RDS with Python” in the Developer Guide.

      Customize your Python Environment
      You can customize the Python runtime for Elastic Beanstalk using a set of declarative text files within your application. If your application contains a requirements.txt in its top level directory, Elastic Beanstalk will automatically install the dependencies using pip.
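      For example, a minimal requirements.txt for a Django application might look like this (package versions are illustrative, current around the time of the release):

```
# requirements.txt -- dependencies installed automatically by pip on deployment
Django==1.4.1
MySQL-python==1.2.3
```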

      Elastic Beanstalk is also introducing a new configuration mechanism that allows you to install packages from yum, run setup scripts, and set environment variables. You simply create a “.ebextensions” directory inside your application and add a “python.config” file in it. Elastic Beanstalk loads this configuration file and installs the yum packages, runs any scripts, and then sets environment variables. Here is a sample configuration file that syncs the database for a Django application:

      container_commands:
        syncdb:
          command: " syncdb --noinput"
          leader_only: true
      option_settings:
        "aws:elasticbeanstalk:application:environment":
          DJANGO_SETTINGS_MODULE: "mysite.settings"
        "aws:elasticbeanstalk:container:python":
          WSGIPath: "mysite/"

      Snapshot your logs

      To help debug problems, you can easily take a snapshot of your logs from the AWS Management console. Elastic Beanstalk aggregates the top 100 lines from many different logs, including the Apache error log, to help you squash those bugs.
      Elastic Beanstalk Console-snapshot-logs

      The snapshot is saved to S3 and is automatically deleted after 15 minutes. Elastic Beanstalk can also automatically rotate the log files to Amazon S3 on an hourly basis so you can analyze traffic patterns and identify issues. To learn more, visit “Working with Logs” in the Developer Guide.

      Support for Django and Flask

      Using the customization mechanism above, you can easily deploy and run your Django and Flask applications on Elastic Beanstalk.
      For more information about using Python and Elastic Beanstalk, visit the Developer Guide.

      Sunday, 19 August 2012

      yoyoclouds: The Cloud Revolution

      The Cloud Revolution Cloud computing spending will account for 25% of annual IT expenditure growth by 2012 and nearly a third of the growth the following year. 

       “The battle for Cloud dominance is heating up, with the release of Office 365, it will be very interesting to see where the next big play comes from.”

      Read more here.

      Big Data Infographic: The 2012 London Summer Games

      Big Data Infographic: The 2012 London Summer Games

      Recent years have seen a lot of development in the cloud computing sphere. Big data is believed by many to be here to stay, and a lot of real investment is expected in this particular area. Such a trend is quite exciting, as new, better and more powerful infrastructure will be needed to support all this. So a lot of further development is on the way to accommodate these computing paradigms.

      Does agility matter?

      Whether your company has been in the business for many years or is a newcomer to the field, JIT (Just in Time) deployment of services is critical to its success. While small business owners may sometimes feel that traditional IT gives them better cost control, they really need to consider the agility that the cloud brings to their businesses. Cloud computing, when adopted and executed properly, can help companies tap market opportunities to the best possible effect, thanks to the extended flexibility and agility it has to offer. The recent acquisition of Cloud.com by Citrix clearly shows the growing interest in cloud computing technology. In the networking space, many emerging SDN players are likewise touted as acquisition targets for major companies keeping an eye on developments in the sphere.

      Read more here.