
Tuesday 16 July 2013

QuickBase & Amazon Web Services - A Closer Look

A very nice way of representing AWS and Cloud Concepts...



If the above is not visible, access it at the SOURCE.

Thursday 27 September 2012

AWS Announcement : High Performance Provisioned IOPS Storage For Amazon RDS

 
After recently announcing the EBS Provisioned IOPS offering, which allows you to specify both volume size and volume performance in terms of I/O operations per second (IOPS), AWS has now announced High Performance Provisioned IOPS Storage for Amazon RDS.
 
You can now create an RDS database instance and specify your desired level of IOPS in order to get more consistent throughput and performance.

Amazon RDS Provisioned IOPS is immediately available for new database instances in the US East (N. Virginia), US West (N. California), and EU West (Ireland) Regions, and AWS plans to launch in other AWS Regions in the coming months.
 
AWS is rolling this out in two phases. Read on for an extract from the announcement by Jeff on the AWS Blog.
 
 
    
We are rolling this out in two stages. Here's the plan:
 
  • Effective immediately, you can provision new RDS database instances with 1,000 to 10,000 IOPS, and with 100 GB to 1 TB of storage for MySQL and Oracle databases. If you are using SQL Server, the maximum IOPS you can provision is 7,000 IOPS. All other RDS features, including Multi-AZ, Read Replicas, and the Virtual Private Cloud, are also supported.
  • In the near future, we plan to provide you with an automated way to migrate existing database instances to Provisioned IOPS storage for the MySQL and Oracle database engines. If you want to migrate an existing database instance to Provisioned IOPS storage immediately, you can export your data and re-import it into a new database instance equipped with Provisioned IOPS storage.

We expect database instances with RDS Provisioned IOPS to be used in demanding situations. For example, they are a perfect host for I/O-intensive transactional (OLTP) workloads.
We recommend that customers running production database workloads use Amazon RDS Provisioned IOPS for the best possible performance. (By the way, for mission critical OLTP workloads, you should also consider adding the Amazon RDS Multi-AZ option to improve availability.)
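As a rough illustration, here is a minimal sketch of creating such an instance programmatically. It uses the modern boto3 SDK rather than the tools available at the time of this post, and all names and values are placeholders:

    import boto3

    rds = boto3.client("rds", region_name="us-east-1")

    # Create a MySQL instance with storage and IOPS in the announced ranges
    # (100 GB to 1 TB of storage, 1,000 to 10,000 IOPS).
    rds.create_db_instance(
        DBInstanceIdentifier="mydb-piops",   # placeholder identifier
        DBInstanceClass="db.m1.large",       # era-appropriate instance class
        Engine="mysql",
        MasterUsername="admin",              # placeholder credentials
        MasterUserPassword="change-me",
        AllocatedStorage=100,                # storage, in GB
        Iops=1000,                           # desired provisioned IOPS
        MultiAZ=True,                        # recommended for mission-critical OLTP
    )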


Check out the video with Rahul Pathak of the Amazon RDS team to learn more about this new feature and how some AWS customers are using it:




Responses from AWS customers:

  • AWS customer Flipboard uses RDS to deliver billions of page flips each month to millions of mobile phone and tablet users. Sang Chi, Data Infrastructure Architect at Flipboard, told us:
"We want to provide the best possible reading and content delivery experience for a rapidly growing base of users and publishers. This requires us not only to use a high performance database today but also to continue to improve our performance in the future. Throughput consistency is critical for our workloads. Based on results from our early testing, we are very excited about Amazon RDS Provisioned IOPS and the impact it will have on our ability to scale. We’re looking forward to scaling our database applications to tens of thousands of IOPS and achieving consistent throughput to improve the experience for our users."
  • AWS customer Shine Technologies uses RDS for Oracle to build complex solutions for enterprise customers. Adam Kierce, their Director, said:
"Amazon RDS Provisioned IOPS provided a turbo-boost to our enterprise class database-backed applications. In the past, we have invested hundreds of days in time consuming and costly code based performance tuning, but with Amazon RDS Provisioned IOPS we were able to exceed those performance gains in a single day. We have demanding clients in the Energy, Telecommunication, Finance and Retail industries, and we fully expect to move all our Oracle backed products onto AWS using Amazon RDS for Oracle over the next 12 months. The increased performance of Amazon's RDS for Oracle with Provision IOPS is an absolute game changer, because it delivers more (performance) for less (cost)."
 

Wednesday 19 September 2012

AWS Week in Review - September 10th to September 16th, 2012

 

Let's take a quick look at what happened in AWS-land last week:
Tuesday, September 11
Wednesday, September 12
Thursday, September 13


SOURCE

Tuesday 18 September 2012

Amazon VPC - New Additions

 
AWS has added three new features/options to the Amazon Virtual Private Cloud (VPC) service.
 
 
Please find below an extract from the two blog posts written by Jeff on the subject:
 
 
The Amazon Virtual Private Cloud (VPC) gives you the power to create a private, isolated section of the AWS Cloud. You have full control of network addressing. Each of your VPCs can include subnets (with access control lists), route tables, and gateways to your existing network and to the Internet.
 
You can connect your VPC to the Internet via an Internet Gateway and enjoy all the flexibility of Amazon EC2 with the added benefits of Amazon VPC. You can also set up an IPsec VPN connection to your VPC, extending your corporate data center into the AWS Cloud. Today we are adding two options to give you additional VPN connection flexibility:
  1. You can now create Hardware VPN connections to your VPC using static routing. This means that you can establish connectivity using VPN devices that do not support BGP, such as Cisco ASA and Microsoft Windows Server 2008 R2. You can also use Linux to establish a Hardware VPN connection to your VPC. In fact, any IPsec VPN implementation should work.
  2. You can now configure automatic propagation of routes from your VPN and Direct Connect links (gateways) to your VPC's routing tables. This will make your life easier as you won’t need to create static route entries in your VPC route table for your VPN connections. For instance, if you’re using dynamically routed (BGP) VPN connections, your BGP route advertisements from your home network can be automatically propagated into your VPC routing table.
If your VPN hardware is capable of supporting BGP, this is still the preferred way to go as BGP performs a robust liveness check on the IPSec tunnel. Each VPN connection uses two tunnels for redundancy; BGP simplifies the failover procedure that is invoked when one VPN tunnel goes down.
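For a concrete picture of these two options, here is a minimal sketch using the modern boto3 SDK; all resource IDs and the CIDR block are placeholders:

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # Option 1: a statically routed VPN connection for devices without BGP.
    vpn = ec2.create_vpn_connection(
        Type="ipsec.1",
        CustomerGatewayId="cgw-12345678",    # placeholder
        VpnGatewayId="vgw-12345678",         # placeholder
        Options={"StaticRoutesOnly": True},
    )
    ec2.create_vpn_connection_route(
        VpnConnectionId=vpn["VpnConnection"]["VpnConnectionId"],
        DestinationCidrBlock="10.0.0.0/16",  # your on-premises prefix
    )

    # Option 2: propagate routes from a gateway into a VPC route table
    # automatically, instead of maintaining static route entries by hand.
    ec2.enable_vgw_route_propagation(
        RouteTableId="rtb-12345678",         # placeholder
        GatewayId="vgw-12345678",
    )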

Sunday 16 September 2012

Create an AWS Account - Free Usage Tier

Amazon Web Services helps its new customers get started in the cloud with a free usage tier. This tier is available to customers for 12 months.

Below are the highlights of AWS's free usage tiers. All are available for one year (except SWF, DynamoDB, SimpleDB, SQS, and SNS, which are free indefinitely):



NOTE:
The image below was updated as of October 1st, 2012, for the AWS RDS announcement. For the latest updates, please check AWS Free Tier.
 

 

Do check AWS Free Tier for more details.

How to get started:



Friday 14 September 2012

Amazon EC2 Reserved Instance Marketplace

A superbly detailed blog post by Jeff on the Amazon EC2 Reserved Instance Marketplace.

No more words need to be added....


EC2 Options
I often tell people that cloud computing is equal parts technology and business model. Amazon EC2 is a good example of this; you have three options to choose from:
  • You can use On-Demand Instances, where you pay for compute capacity by the hour, with no upfront fees or long-term commitments. On-Demand instances are recommended for situations where you don't know how much (if any) compute capacity you will need at a given time.
  • If you know that you will need a certain amount of capacity, you can buy an EC2 Reserved Instance. You make a low, one-time upfront payment, reserve capacity for a one- or three-year term, and pay a significantly lower hourly rate. You can choose between Light Utilization, Medium Utilization, and Heavy Utilization Reserved Instances to further align your costs with your usage.
  • You can also bid for unused EC2 capacity on the Spot Market with a maximum hourly price you are willing to pay for a particular instance type in the Region and Availability Zone of your choice. When the current Spot Price for the desired instance type is at or below the price you set, your application will run.
Reserved Instance Marketplace
Today we are increasing the flexibility of the EC2 Reserved Instance model even more with the introduction of the Reserved Instance Marketplace. If you have excess capacity, you can list it on the marketplace and sell it to someone who needs additional capacity. If you need additional capacity, you can compare the upfront prices and durations of Reserved Instances on the marketplace to the upfront prices of one and three year Reserved Instances available directly from AWS. The Reserved Instances in the Marketplace are functionally identical to other Reserved Instances and have the then-current hourly rates; they will just have less than a full term and a different upfront price.
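To give a flavour of how that comparison might be done programmatically, here is a minimal sketch using the modern boto3 SDK (the instance type and filters are illustrative); Marketplace listings come back in the same response as AWS's own one- and three-year offerings, flagged by a Marketplace field:

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    offerings = ec2.describe_reserved_instances_offerings(
        InstanceType="m1.large",             # illustrative instance type
        ProductDescription="Linux/UNIX",
        IncludeMarketplace=True,             # include third-party listings
    )
    for o in offerings["ReservedInstancesOfferings"]:
        print(
            o["ReservedInstancesOfferingId"],
            o["Duration"],                   # remaining term, in seconds
            o["FixedPrice"],                 # upfront price
            o["Marketplace"],                # True for Marketplace listings
        )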


AWS Expands in Japan

Amazon Web Services (AWS) is expanding in Japan with the addition of a third Availability Zone.
The move means that AWS will most likely be adding more data centers to keep up with the steady demand for its services since it first began operating in Tokyo 18 months ago.

For people who are not aware of AWS Regions and Availability Zones:

Amazon Web Services serves hundreds of thousands of customers in more than 190 countries.
Currently, AWS spans 8 regions around the globe.
Each region has multiple Availability Zones.
Each Availability Zone can encompass multiple data centers.

See a detailed list of offerings at all AWS locations

Extracted below is a nice blog post by Jeff:

We announced an AWS Region in Tokyo about 18 months ago. In the time since the launch, our customers have launched all sorts of interesting applications and businesses there. Here are a few examples:
    • Cookpad.com is the top recipe site in Japan. They are hosted entirely on AWS, and handle more than 15 million users per month.
    • KAO is one of Japan's largest manufacturers of cosmetics and toiletries. They recently migrated their corporate site to the AWS cloud.
    • Fukuoka City launched the Kawaii Ward project to promote tourism to the virtual city. After a member of the popular Japanese idol group AKB48 raised awareness of the site, virtual residents flocked to it to sign up for an email newsletter. They expected 10,000 registrations in the first week and were pleasantly surprised to receive over 20,000.
Demand for AWS resources in Japan has been strong and steady, and we've been expanding the region accordingly. You might find it interesting to know that an AWS region can be expanded in two different ways. First, we can add additional capacity to an existing Availability Zone, spanning multiple datacenters if necessary. Second, we can create an entirely new Availability Zone.
Over time, as we combine both of these approaches, a single AWS region can grow to encompass many datacenters. For example, the US East (Northern Virginia) region currently occupies more than ten datacenters structured as multiple Availability Zones.
 
AWS Tokyo Region and Availability Zones
 
Today, we are expanding the Tokyo region with the addition of a third Availability Zone. 
This will add capacity and will also provide you with additional flexibility. As is always the case with AWS, untargeted launches of EC2 instances will now make use of this zone with no changes to existing applications or configurations. If you are currently targeting specific Availability Zones, please make sure that your code can handle this new option.

Monday 10 September 2012

Getting Started with Amazon Glacier



Amazon Glacier is an extremely low-cost storage service that provides secure and durable storage for data archiving and backup. In order to keep costs low, Amazon Glacier is optimized for data that is infrequently accessed and for which retrieval times of several hours are suitable. With Amazon Glacier, customers can reliably store large or small amounts of data for as little as $0.01 per gigabyte per month, a significant savings compared to on-premises solutions.

Retrieving archives from Amazon Glacier requires the initiation of a job; jobs typically complete in 3 to 5 hours. You organize your archives in vaults.
The quick start video for Amazon Glacier walks you through how to use the AWS Management Console to create vaults in Amazon Glacier.

To upload data to Amazon Glacier, you must use the SDKs/APIs provided by AWS.
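As an illustration, here is a minimal sketch of an upload followed by an archive-retrieval job, using the modern boto3 SDK (the vault name and file are placeholders):

    import boto3

    glacier = boto3.client("glacier", region_name="us-east-1")

    glacier.create_vault(vaultName="my-archive-vault")   # placeholder name

    # Upload an archive; this is only possible through the SDKs/APIs.
    with open("backup.tar.gz", "rb") as f:
        archive = glacier.upload_archive(
            vaultName="my-archive-vault",
            archiveDescription="nightly backup",
            body=f,
        )

    # Retrieval is asynchronous: initiate a job, then check back later
    # (jobs typically complete in 3 to 5 hours) to fetch the output.
    glacier.initiate_job(
        vaultName="my-archive-vault",
        jobParameters={"Type": "archive-retrieval",
                       "ArchiveId": archive["archiveId"]},
    )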








If your data is too large to upload over the Internet, you can make use of another AWS service, AWS Import/Export. It accelerates moving large amounts of data into and out of AWS using portable storage devices (see supported devices) for transport. You ship your device along with its interface connectors and power supply to AWS. When your package arrives, it will be processed and securely transferred to an AWS data center, where your device will be attached to an AWS Import/Export station. After the data load completes, the device will be returned to you.

For more information, please visit the Amazon Glacier Product Page and the Amazon Glacier Developer Guide.

Building Highly Available, Scalable Web Properties with AWS

From the AWS Webinar Series: Building Highly Available, Scalable Web Properties with AWS 

A very nicely compiled webinar for understanding various AWS services and design principles.

This webinar recording focuses on the basic properties of building highly available, scalable web applications on the AWS cloud.

These properties are:

  • Elasticity
  • Design for Failure
  • Loose Coupling
  • Security
  • Performance
 


Saturday 8 September 2012

The AWS Report - Colin Lazier, Amazon Glacier

Want to know what Amazon's new service Glacier is and how people are using it?

Check out the new AWS Report with Colin Lazier.

For today's episode of The AWS Report, I spoke to Colin Lazier, a Senior Development Manager on the AWS Storage Team. Colin and I talked about Amazon Glacier and how it can be used to archive data for long periods of time. I learned that Glacier uses anti-entropy techniques to guard against data loss. 
We also talked about Glacier's retrieval model, and our expectation that third parties will build archiving and indexing tools around Glacier's storage and retrieval functions.




Friday 7 September 2012

Infographic : How much do Apple, Google and Amazon spend on advertising?


Last year, U.S. tech company advertisers continued to increase ad and promo spending, so much so that Google, Amazon and Apple are listed among the six companies with the highest ad-spending growth rates. Please have a look at the complete list, with a more detailed breakdown, below.


Tech Company Ad Spends
SOURCE : OnlineBusinessDegree.org

Wednesday 5 September 2012

AWS Management Console Improvements to EC2 Tab

AWS recently made some improvements to the EC2 tab of the AWS Management Console. It is now easier to access the AWS Marketplace and to configure attached storage (EBS volumes and ephemeral storage) for EC2 instances.

Read on for a good post by Jeff.

Marketplace Access

This one is really simple, but definitely worth covering. You can now access the AWS Marketplace from the Launch Instances Wizard:


AWS Marketplace

After you enter your search terms and click the Go button, the Marketplace results page will open in a new tab. Here's what happens when I search for wordpress:

Monday 3 September 2012

Amazon S3 - Cross Origin Resource Sharing Support

GREAT NEWS!!!


AWS has announced support for Cross-Origin Resource Sharing (CORS) in Amazon S3.
You can now easily build web applications that use JavaScript and HTML5 to interact with resources in Amazon S3, enabling you to implement HTML5 drag and drop uploads to Amazon S3, show upload progress, or update content. Until now, you needed to run a custom proxy server between your web application and Amazon S3 to support these capabilities. A custom proxy server was required because web browsers limit the way web pages loaded from one site (e.g., mywebsite.com) can interact with content from another location (e.g., a location in Amazon S3 like assets.mywebsite.com.s3.amazonaws.com). Amazon S3’s support for CORS replaces the need for this custom proxy server by instructing the web browser to selectively enable these cross-site interactions.
Configuring your bucket for CORS is easy. To get started, open the Amazon S3 Management Console, and follow these simple steps:

1) Right click on your Amazon S3 bucket and open the “Properties” pane.
2) Under the “Permissions” tab, click the “Add CORS configuration” button to add a new CORS configuration. You can then specify the websites (e.g., "mywebsite.com”) that should have access to your bucket, and the specific HTTP request methods (e.g., “GET”) you wish to allow.
3) Click Save.
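If you prefer to configure the bucket programmatically rather than through the console, a minimal sketch with the modern boto3 SDK (the bucket name and origin are placeholders) looks like this:

    import boto3

    s3 = boto3.client("s3")

    s3.put_bucket_cors(
        Bucket="my-bucket",                          # placeholder
        CORSConfiguration={
            "CORSRules": [{
                "AllowedOrigins": ["https://mywebsite.com"],
                "AllowedMethods": ["GET", "PUT"],
                "AllowedHeaders": ["*"],
                "MaxAgeSeconds": 3000,               # browser caching of preflight responses
            }]
        },
    )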
For more information on using CORS with Amazon S3, review the Amazon S3 Developer Guide.



Monday 27 August 2012

Infographic: Demystifying AWS - Revealing Behind the scenes usage

Amazon Web Services (AWS) is the biggest public cloud around, yet what goes on behind the scenes remains a mystery.

Read on for a good infographic from the Newvem blog!


"For heavy users, such as enterprise level CIOs, AWS’s “Reserved Instances” are a cost effective model to scale their cloud activity and benefit from the full service offering that Amazon provides.


The infographic is based on analysis made by our Reserved Instance Decision Making Tool. This advanced analytics tool can help enterprise CIOs to capture the added value and benefit by:
  • Ensuring that Reserved Instances meet cost and performance expectations.
  • Identifying consistent On-Demand usage that can be shifted to Reserved Instances.
  • Tracking Reserved Instance expiration dates and recommending actions for renewal and scaling up and down.



SOURCE






 

Friday 24 August 2012

AWS New Whitepaper: Mapping and GeoSpatial Analysis in the Cloud Using ArcGIS

Great new whitepaper by Jinesh Varia...


Esri is one of the leaders in the Geographic Information Systems (GIS) industry and one of the largest privately held software companies focused on mapping and geospatial applications in the world with offices in more than 100 countries. Both public and private sector organizations use Esri technology to analyze and manage their geographic information and make better decisions – uses range from planning cities and improving the quality of life for residents, to site selection, customer analytics, and streamlining logistics.

Esri and AWS have been working together since 2008 to bring the power of GIS to the masses. The AWS Partner Team recently attended the 2012 Esri International User Conference with over 14,000 attendees, 300 exhibitors and a large number of ecosystem partners. A cloud computing theme dominated the conference.
Esri and AWS have co-authored a whitepaper, "Mapping and GeoSpatial Analysis Using ArcGIS", to provide users who are interested in performing spatial analysis using their data with complementary datasets.

The paper discusses how users can publish and analyze imagery data (such as satellite imagery, or aerial imagery) and create and publish tile cache map services from spatially referenced data (such as data with x/y points, lines, polygons) in AWS using ArcGIS.

Download PDF: Mapping and GeoSpatial Analysis Using ArcGIS

The paper focuses on imagery because that has been the most challenging data type to manage in the cloud, but the approaches discussed are general enough to apply to any type of data.

It not only provides architecture guidance on how to scale ArcGIS servers in the cloud but also provides step-by-step guidance on publishing map services in the cloud.

For more information on GeoApps in the AWS Cloud, see the presentation "The Cloud as a Platform for Geo" below:
GeoApps in the AWS Cloud - Jinesh Varia from Amazon Web Services


SOURCE
 

Tuesday 21 August 2012

Deploy a .NET Application to AWS Elastic Beanstalk with Amazon RDS Using Visual Studio

In this video, we walk you through deploying an application to AWS Elastic Beanstalk (link: http://aws.amazon.com/elasticbeanstalk/), configuring an Amazon RDS for SQL Server DB instance (link: http://aws.amazon.com/rds/), and managing your configuration, all from the confines of Visual Studio. The AWS Toolkit for Visual Studio streamlines your development, deployment, and testing inside your familiar IDE.
To learn more about AWS Elastic Beanstalk and Amazon RDS, visit the AWS Elastic Beanstalk Developer Guide at http://docs.amazonwebservices.com/elasticbeanstalk/latest/dg/create_deploy_NE....



SOURCE



Amazon CloudSearch - Start Searching in One Hour for Less Than $100 / Month


An extract from AWS Evangelist Jeff Barr's CloudSearch blog post, with more information about how you can start searching in an hour for less than $100 a month...

Continuing along in our quest to give you the tools that you need to build ridiculously powerful web sites and applications in no time flat at the lowest possible cost, I'd like to introduce you to Amazon CloudSearch. If you have ever searched Amazon.com, you've already used the technology that underlies CloudSearch. You can now have a very powerful and scalable search system (indexing and retrieval) up and running in less than an hour.

You, sitting in your corporate cubicle, your coffee shop, or your dorm room, now have access to search technology at a very affordable price. You can start to take advantage of many years of Amazon R&D in the search space for just $0.12 per hour (I'll talk about pricing in depth later).


What is Search?

Search plays a major role in many web sites and other types of online applications. The basic model is seemingly simple. Think of your set of documents or your data collection as a book or a catalog, composed of a number of pages. You know that you can find the desired content quickly and efficiently by simply consulting the index.

Search does the same thing by indexing each document in a way that facilitates rapid retrieval. You enter some terms into a search box and the site responds (rather quickly if you use CloudSearch) with a list of pages that match the search terms.

As is the case with many things, this simple model masks a lot of complexity and might raise a lot of questions in your mind. For example:
  1. How efficient is the search? Did the search engine simply iterate through every page, looking for matches, or is there some sort of index?
  2. The search results were returned in the form of an ordered list. What factor(s) determined which documents were returned, and in what order (commonly known as ranking)? How are the results grouped?
  3. How forgiving or expansive was the search? Did a search for "dogs" return results for "dog?" Did it return results for "golden retriever," or "pet?"
  4. What kinds of complex searches or queries can be used? Does a search for "dog training" return the expected results? Can you search for "dog" in the Title field and "training" in the Description?
  5. How scalable is the search? What if there are millions or billions of pages? What if there are thousands of searches per hour? Is there enough storage space?
  6. What happens when new pages are added to the collection, or old pages are removed? How does this affect the search results?
  7. How can you efficiently navigate through and explore search results? Can you group and filter the search results in ways that take advantage of multiple named fields (often known as a faceted search)?
Needless to say, things can get very complex very quickly. Even if you can write code to do some or all of this yourself, you still need to worry about the operational aspects. We know that scaling a search system is non-trivial. There are lots of moving parts, all of which must be designed, implemented, instantiated, scaled, monitored, and maintained. As you scale, algorithmic complexity often comes into play; you soon learn that algorithms and techniques which were practical at the beginning aren't always practical at scale.


What is Amazon CloudSearch?

Amazon CloudSearch is a fully managed search service in the cloud. You can set it up and start processing queries in less than an hour, with automatic scaling for data and search traffic, all for less than $100 per month.

CloudSearch hides all of the complexity and all of the search infrastructure from you. You simply provide it with a set of documents and decide how you would like to incorporate search into your application.

You don't have to write your own indexing, query parsing, query processing, results handling, or any of that other stuff. You don't need to worry about running out of disk space or processing power, and you don't need to keep rewriting your code to add more features.

With CloudSearch, you can focus on your application layer. You upload your documents, CloudSearch indexes them, and you can build a search experience that is custom-tailored to the needs of your customers.


How Does it Work?

The Amazon CloudSearch model is really simple, but don't confuse simple with simplistic -- there's a lot going on behind the scenes!

Here's all you need to do to get started (you can perform these operations from the AWS Management Console, the CloudSearch command line tools, or through the CloudSearch APIs):
  1. Create and configure a Search Domain. This is a data container and a related set of services. It exists within a particular Availability Zone of a single AWS Region (initially US East).
  2. Upload your documents. Documents can be uploaded as JSON or XML that conforms to our Search Document Format (SDF). Uploaded documents will typically be searchable within seconds. You can, if you'd like, send data over an HTTPS connection to protect it while it is in transit.
  3. Perform searches.
There are plenty of options and goodies, but that's all it takes to get started.
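As an illustration of those steps, here is a minimal sketch using the modern boto3 SDK, which targets a later CloudSearch API version than the one described in this post; the domain and field names are placeholders:

    import boto3

    cs = boto3.client("cloudsearch", region_name="us-east-1")

    # 1. Create and configure a search domain.
    cs.create_domain(DomainName="my-movies")
    cs.define_index_field(
        DomainName="my-movies",
        IndexField={"IndexFieldName": "title", "IndexFieldType": "text"},
    )

    # 2 and 3. Documents are uploaded to, and searches run against, the
    # domain's own endpoints, which you can look up once the domain is active:
    status = cs.describe_domains(DomainNames=["my-movies"])
    print(status["DomainStatusList"][0]["DocService"])
    print(status["DomainStatusList"][0]["SearchService"])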

Amazon CloudSearch applies data updates continuously, so newly changed data becomes searchable in near real-time. Your index is stored in RAM to keep throughput high and to speed up document updates. You can also tell CloudSearch to re-index your documents; you'll need to do this after changing certain configuration options, such as stemming (converting variations of a word to a base word, such as "dogs" to "dog") or stop words (very common words that you don't want to index).
Amazon CloudSearch has a number of advanced search capabilities including faceting and fielded search:

Faceting allows you to categorize your results into sub-groups, which can be used as the basis for another search. You could search for "umbrellas" and use a facet to group the results by price, such as $1-$10, $10-$20, $20-$50, and so forth. CloudSearch will even return document counts for each sub-group.
Fielded searching allows you to search on a particular attribute of a document. You could locate movies in a particular genre or actor, or products within a certain price range.
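And here is a hedged sketch of a fielded search combined with a facet, again via the modern boto3 SDK and its newer query API (the endpoint URL and field names are placeholders):

    import boto3

    domain = boto3.client(
        "cloudsearchdomain",
        endpoint_url="https://search-my-movies-abc123.us-east-1.cloudsearch.amazonaws.com",
    )

    # Search the "title" field and group results with a facet on "genres".
    results = domain.search(
        query="title:'star wars'",
        queryParser="structured",
        facet='{"genres": {"sort": "count", "size": 5}}',
        size=10,
    )
    print(results["hits"]["found"])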

 
Search Scaling
Behind the scenes, CloudSearch stores data and processes searches using search instances. Each instance has a finite amount of CPU power and RAM. As your data expands, CloudSearch will automatically launch additional search instances and/or scale to larger instance types. As your search traffic expands beyond the capacity of a single instance, CloudSearch will automatically launch additional instances and replicate the data to the new instance. If you have a lot of data and a high request rate, CloudSearch will automatically scale in both dimensions for you.

Amazon CloudSearch will automatically scale your search fleet up to a maximum of 50 search instances. We'll be increasing this limit over time; if you have an immediate need for more than 50 instances, please feel free to contact us and we'll be happy to help.

The net-net of all of this automation is that you don't need to worry about having enough storage capacity or processing power. CloudSearch will take care of it for you, and you'll pay only for what you use.

Pricing Model

The Amazon CloudSearch pricing model is straightforward:

You'll be billed based on the number of running search instances. There are three search instance sizes (Small, Large, and Extra Large) at prices ranging from $0.12 to $0.68 per hour (these are US East Region prices, since that's where we are launching CloudSearch).
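(As a quick sanity check on the headline figure: a single Small search instance running around the clock works out to roughly $0.12 × 24 × 30 ≈ $86 per month, which is how a basic setup stays under $100 per month.)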

There's a modest charge for each batch of uploaded data. If you change configuration options and need to re-index your data, you will be billed $0.98 for each Gigabyte of data in the search domain.
There's no charge for in-bound data transfer, data transfer out is billed at the usual AWS rates, and you can transfer data to and from your Amazon EC2 instances in the Region at no charge.

Advanced Searching

Like the other Amazon Web Services, CloudSearch allows you to get started with a modest effort and to add richness and complexity over time. You can easily implement advanced features such as faceted search, free text search, Boolean search expressions, customized relevance ranking, field-based sorting and searching, and text processing options such as stopwords, synonyms, and stemming.

CloudSearch Programming

You can interact with CloudSearch through the AWS Management Console, a complete set of Amazon CloudSearch APIs, and a set of command line tools. You can easily create, configure, and populate a search domain through the AWS Management Console.
Here's a tour, starting with the welcome screen:

Amazon CloudSearch
 
You start by creating a new Search Domain:

Amazon CloudSearch
 
You can then load some sample data. It can come from local files, an Amazon S3 bucket, or several other sources:

Amazon CloudSearch
 
Here's how you choose an S3 bucket (and an optional prefix to limit which documents will be indexed):

Amazon CloudSearch
 
You can also configure your initial set of index fields:

Amazon CloudSearch
 
You can also create access policies for the CloudSearch APIs:

Amazon CloudSearch
 
Your search domain will be initialized and ready to use within twenty minutes:

Amazon CloudSearch
 
Processing your documents is the final step in the initialization process:

Amazon CloudSearch
 
After your documents have been processed you can perform some test searches from the console:

Amazon CloudSearch
 
The CloudSearch console also provides you with full control over a number of indexing options including stopwords, stemming, and synonyms:



 
CloudSearch in Action
Some of our early customers have already deployed some applications powered by CloudSearch. Here's a sampling:
  • Search Technologies has used CloudSearch to index Wikipedia (see the demo).
  • NewsRight is using CloudSearch to deliver search for news content, usage and rights information to over 1,000 publications.
  • ex.fm is using CloudSearch to power their social music discovery website.
  • CarDomain is powering search on their social networking website for car enthusiasts.
  • Sage Bionetworks is powering search on their data-driven collaborative biological research website.
  • SmugMug is using CloudSearch to deliver search on their website for over a billion photos.

SOURCE

    AWS Direct Connect - New Locations and Console Support

    On 13th August, AWS announced new locations and console support for AWS Direct Connect. Great article by Jeff...

    Did you know that you can use AWS Direct Connect to set up a dedicated 1 Gbps or 10 Gbps network connection from your existing data center or corporate office to AWS?

    New Locations

    Today we are adding two additional Direct Connect locations so that you have even more ways to reduce your network costs and increase network bandwidth throughput. You also have the potential for a more consistent experience. Here is the complete list of locations:
    If you have your own equipment running at one of the locations listed above, you can use Direct Connect to optimize the connection to AWS. If your equipment is located somewhere else, you can work with one of our APN Partners supporting Direct Connect to establish a connection from your location to a Direct Connect location, and from there on to AWS.

    Console Support

    Up until now, you needed to fill in a web form to initiate the process of setting up a connection. In order to make the process simpler and smoother, you can now start the ordering process and manage your Connections through the AWS Management Console.
    Here's a tour. You can establish a new connection by selecting the Direct Connect tab in the console:

    AWS Direct connect Establish a new connection
     
    After you confirm your choices you can place your order with one final click:

    AWS Direct connect Establish a new connection
     
    You can see all of your connections in a single (global) list:

    AWS Direct connect connections
     
    You can inspect the details of each connection:

    AWS Direct connect - connection details
     
    You can then create a Virtual Interface to your connection. The interface can be connected to one of your Virtual Private Clouds, or it can connect to the full set of AWS services:

    AWS Direct connect

    AWS Direct connect
     
    You can even download a router configuration file tailored to the brand, model, and version of your router:

    AWS Direct connect
     
    Get Connected
    And there you have it! Learn more about AWS Direct Connect and get started today.

    SOURCE
     

    ALL about AWS EBS Provisioned IOPS - feature and resources

    AWS has recently announced the EBS Provisioned IOPS feature, a new Elastic Block Store volume type for running high-performance databases in the cloud. Provisioned IOPS volumes are designed to deliver predictable, high performance for I/O-intensive workloads, such as database applications, that rely on consistent and fast response times. With Provisioned IOPS, you can flexibly specify both volume size and volume performance, and Amazon EBS will consistently deliver the desired performance over the lifetime of the volume.
    A simple comparison between standard volumes and Provisioned IOPS volumes:

    Amazon EBS Standard volumes
    • Offer cost effective storage for applications with moderate or bursty I/O requirements.
    • Deliver approximately 100 IOPS on average with a best effort ability to burst to hundreds of IOPS.
    • Are also well suited for use as boot volumes, where the burst capability provides fast instance start-up times.
    • $0.10 per GB-month of provisioned storage
    • $0.10 per 1 million I/O requests

    Amazon EBS Provisioned IOPS volumes
    • Provisioned IOPS volumes are designed to deliver predictable, high performance for I/O intensive workloads such as databases.
    • Amazon EBS currently supports up to 1000 IOPS per Provisioned IOPS volume, with higher limits coming soon.
    • Provisioned IOPS volumes are designed to deliver within 10% of the provisioned IOPS performance 99.9% of the time.
    • $0.125 per GB-month of provisioned storage
    • $0.10 per provisioned IOPS-month
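    As a rough boto3 equivalent of the command-line example further down, creating a 500 GB volume with 1,000 Provisioned IOPS might look like this sketch (the Availability Zone is illustrative):

        import boto3

        ec2 = boto3.client("ec2", region_name="us-east-1")

        volume = ec2.create_volume(
            AvailabilityZone="us-east-1b",   # illustrative AZ
            Size=500,                        # volume size, in GB
            VolumeType="io1",                # Provisioned IOPS volume type
            Iops=1000,                       # desired provisioned IOPS
        )
        print(volume["VolumeId"])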
    AWS has compiled some interesting resources for users:

    Our recent release of the EBS Provisioned IOPS feature (blog post, explanatory video, EBS home page) has met with a very warm reception. Developers all over the world are already making use of this important and powerful new EC2 feature. I would like to make you aware of some other new resources and blog posts to help you get the most from your EBS volumes.
    • The EC2 FAQ includes answers to a number of important performance and architecture questions about Provisioned IOPS.
    • The EC2 API tools have been updated and now support the creation of Provisioned IOPS volumes. The newest version of the ec2-create-volume tool supports the --type and --iops options. For example, the following command will create a 500 GB volume with 1000 Provisioned IOPS:
      $ ec2-create-volume --size 500 --availability-zone us-east-1b --type io1 --iops 1000
    • Eric Hammond has written a detailed migration guide to show you how to convert a running EC2 instance to an EBS-Optimized EC2 instance with Provisioned IOPS volumes. It is a very handy post, and it also shows off the power of programmatic infrastructure.
    • I have been asked about the applicability of existing EC2 Reserved Instances to the new EBS-Optimized instances. Yes, they apply, and you pay only the additional hourly charge. Read our new FAQ entry to learn more.
    • I have also been asked about the availability of EBS-Optimized instances for more instance types. We intend to support other instance types based on demand. Please feel free to let us know what you need by posting a comment on this blog or in the EC2 forum.
    • The folks at CloudVertical have written a guide to understanding new AWS I/O options and costs.
    • The team at Stratalux wrote a very informative blog post, Putting Amazon's Provisioned IOPS to the Test. Their conclusion:
      "Based upon our tests PIOPS definitely provides much needed and much sought after performance improvements over standard EBS volumes. I’m glad to see that Amazon has heeded the calls of its customers and developed a persistent storage solution optimized for database workloads."
    We have also put together a new guide to benchmarking Provisioned IOPS volumes. The guide shows you how to set up and run high-quality, repeatable benchmarks on Linux and Windows using the fio, Oracle Orion, and SQLIO tools. The guide will walk you through the following steps:
    • Launching an EC2 instance.
    • Creating Provisioned IOPS EBS volumes.
    • Attaching the volumes to the instance.
    • Creating a RAID from the volumes.
    • Installing the appropriate benchmark tool.
    • Benchmarking the I/O performance of your volumes.
    • Deleting the volumes and terminating the instance.
    Since I like to try things for myself, I created six 100 GB volumes, each provisioned for 1000 IOPS:
    EBS Provisioned IOPS
    Then I booted up an EBS-Optimized EC2 instance, built a RAID, and ran fio. Here's what I saw in the AWS Management Console's CloudWatch charts after the run. Each volume was delivering 1000 IOPS, as provisioned:
    EBS Provisioned IOPS
    Here's an excerpt from the results:
    fio_test_file: (groupid=0, jobs=32): err= 0: pid=23549: Mon Aug 6 14:01:14 2012
    read : io=123240MB, bw=94814KB/s, iops=5925 , runt=1331000msec
    clat (usec): min=356 , max=291546 , avg=5391.52, stdev=8448.68
    lat (usec): min=357 , max=291547 , avg=5392.91, stdev=8448.68
    clat percentiles (usec):
    | 1.00th=[ 418], 5.00th=[ 450], 10.00th=[ 478], 20.00th=[ 548],
    | 30.00th=[ 596], 40.00th=[ 668], 50.00th=[ 892], 60.00th=[ 1160],
    | 70.00th=[ 3152], 80.00th=[10432], 90.00th=[20864], 95.00th=[26752],
    | 99.00th=[29824], 99.50th=[30336], 99.90th=[31360], 99.95th=[31872],
    | 99.99th=[37120]
    Read the benchmarking guide to learn more about running the benchmarks and interpreting the results.

    SOURCE for resources : http://aws.typepad.com/aws/2012/08/ebs-provisioned-iops-some-interesting-resources.html