
Tuesday 21 August 2012

AWS Direct Connect - New Locations and Console Support

On 13th August, AWS announced new locations and console support for AWS Direct Connect. A great article by Jeff...

Did you know that you can use AWS Direct Connect to set up a dedicated 1 Gbps or 10 Gbps network connection from your existing data center or corporate office to AWS?

New Locations

Today we are adding two additional Direct Connect locations so that you have even more ways to reduce your network costs and increase network bandwidth throughput. You also have the potential for a more consistent experience. Here is the complete list of locations:
If you have your own equipment running at one of the locations listed above, you can use Direct Connect to optimize the connection to AWS. If your equipment is located somewhere else, you can work with one of our APN Partners supporting Direct Connect to establish a connection from your location to a Direct Connect location, and from there on to AWS.

Console Support

Up until now, you needed to fill in a web form to initiate the process of setting up a connection. In order to make the process simpler and smoother, you can now start the ordering process and manage your Connections through the AWS Management Console.
Here's a tour. You can establish a new connection by selecting the Direct Connect tab in the console:

AWS Direct connect Establish a new connection
 
After you confirm your choices you can place your order with one final click:

AWS Direct connect Establish a new connection
 
You can see all of your connections in a single (global) list:

AWS Direct connect connections
 
You can inspect the details of each connection:

AWS Direct connect - connection details
 
You can then create a Virtual Interface on your connection. The interface can be connected to one of your Virtual Private Clouds, or it can connect to the full set of AWS services:

AWS Direct connect

AWS Direct connect
 
You can even download a router configuration file tailored to the brand, model, and version of your router:

AWS Direct connect
 
Get Connected
And there you have it! Learn more about AWS Direct Connect and get started today.

SOURCE
 

Announcing AWS Elastic Beanstalk support for Python, and seamless database integration


It’s a good day to be a Python developer: AWS Elastic Beanstalk now supports Python applications! If you’re not familiar with Elastic Beanstalk, it’s the easiest way to deploy and manage scalable PHP, Java, .NET, and now Python applications on AWS. You simply upload your application, and Elastic Beanstalk automatically handles all of the details associated with deployment, including provisioning of Amazon EC2 instances, load balancing, auto scaling, and application health monitoring.

Elastic Beanstalk supports Python applications that run on the familiar Apache HTTP server and WSGI. In other words, you can run any Python application, including your Django or Flask applications. Elastic Beanstalk supports a rich set of tools to help you develop faster. You can use eb and Git to quickly develop and deploy from the command line. You can also use the AWS Management Console to manage your application and configuration.
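Since the platform serves applications through WSGI, the simplest deployable app is just a WSGI callable. Here is a minimal sketch; the callable name `application` matches the platform's default expectation, and everything else is illustrative:

```python
# Minimal WSGI application of the kind the Elastic Beanstalk Python
# container can serve. The module-level callable named `application`
# is what the container looks for by default.
def application(environ, start_response):
    status = '200 OK'
    body = b'Hello from Elastic Beanstalk!'
    headers = [('Content-Type', 'text/plain'),
               ('Content-Length', str(len(body)))]
    start_response(status, headers)
    return [body]
```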

The Python release brings with it many platform improvements to help you get your application up and running more quickly and securely. Here are a few of the highlights:

Integration with Amazon RDS

Amazon RDS makes it easy to set up, operate, and scale a relational database in the cloud, making it a great fit for scalable web applications running on Elastic Beanstalk.

If your application requires a relational database, Elastic Beanstalk can create an Amazon RDS database instance to use with your application. The RDS database instance is automatically configured to communicate with the Amazon EC2 instances running your application.
 
AWS RDS Configuration Details

A console screenshot showing RDS configuration options when launching a new AWS Elastic Beanstalk environment.

Once the RDS database instance is provisioned, you can retrieve information about the database from your application using environment variables:



import os

# Configure Django's database from the environment variables that
# Elastic Beanstalk sets when an RDS database instance is provisioned.
if 'RDS_HOSTNAME' in os.environ:
    DATABASES = {
        'default': {
            'ENGINE': 'django.db.backends.mysql',
            'NAME': os.environ['RDS_DB_NAME'],
            'USER': os.environ['RDS_USER'],
            'PASSWORD': os.environ['RDS_PASSWORD'],
            'HOST': os.environ['RDS_HOSTNAME'],
            'PORT': os.environ['RDS_PORT'],
        }
    }


To learn more about using Amazon RDS with Elastic Beanstalk, visit “Using Amazon RDS with Python” in the Developer Guide.

Customize your Python Environment
You can customize the Python runtime for Elastic Beanstalk using a set of declarative text files within your application. If your application contains a requirements.txt file in its top-level directory, Elastic Beanstalk will automatically install the dependencies using pip.
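For illustration, a minimal requirements.txt for a Django application might look like the following (the package versions here are just examples):

```
Django==1.4.1
MySQL-python==1.2.3
```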

Elastic Beanstalk is also introducing a new configuration mechanism that allows you to install packages from yum, run setup scripts, and set environment variables. You simply create a “.ebextensions” directory inside your application and add a “python.config” file in it. Elastic Beanstalk loads this configuration file and installs the yum packages, runs any scripts, and then sets environment variables. Here is a sample configuration file that syncs the database for a Django application:


commands:
  syncdb:
    command: "django-admin.py syncdb --noinput"
    leader_only: true
option_settings:
  "aws:elasticbeanstalk:application:python:environment":
    DJANGO_SETTINGS_MODULE: "mysite.settings"
  "aws:elasticbeanstalk:container:python":
    WSGIPath: "mysite/wsgi.py"


Snapshot your logs

To help debug problems, you can easily take a snapshot of your logs from the AWS Management Console. Elastic Beanstalk aggregates the top 100 lines from many different logs, including the Apache error log, to help you squash those bugs.
 
Elastic Beanstalk Console-snapshot-logs

The snapshot is saved to S3 and is automatically deleted after 15 minutes. Elastic Beanstalk can also automatically rotate the log files to Amazon S3 on an hourly basis so you can analyze traffic patterns and identify issues. To learn more, visit “Working with Logs” in the Developer Guide.

Support for Django and Flask

Using the customization mechanism above, you can easily deploy and run your Django and Flask applications on Elastic Beanstalk.
For more information about using Python and Elastic Beanstalk, visit the Developer Guide.


Thursday 28 June 2012

yoyoclouds: Cloud Infographic: The Future Of The Cloud

yoyoclouds: Cloud Infographic: The Future Of The Cloud: As barriers to entry lower and the benefits increase, an increasing number of corporations are choosing to make cloud-based solutions a part of their operating model. ...

Tuesday 5 June 2012

InfoWorld : Cloud Monitoring

As the monitoring software vendors debate just how much to bridge the gap between test software and the working system, the lines will continue to blur as they automate the responses to the tests. The monitoring system is morphing into a management system. The most common change is adding or subtracting servers as the load changes. If the tests show that response times are slowing, the test systems can trigger the creation of new servers from the cloud without waiting for an administrator to make a decision.

Download the pdf here : AST-0061733_Cloud_Monitoring



Sponsor: Boundary

SOURCE

Monday 4 June 2012

yoyoclouds: Sharing CentOS Files with Remote Windows Systems- ...


Although Linux is increasingly making inroads into the desktop market, its origins are very much server based. It is not surprising therefore that Linux has the ability to act as a file server. It is also extremely common for Linux and Windows systems to be used side by side both in home and business environments.

It is a common requirement, therefore, that files on a Linux system be accessible to Linux, UNIX, and Windows-based systems over network connections. Similarly, shared folders residing on Windows systems must also be accessible from CentOS systems.

Windows systems share resources such as file systems and printers using a protocol called Server Message Block (SMB). In order for a Linux system to serve such resources over a network to a Windows system and vice versa it must, therefore, support SMB. This is achieved using Linux based technology called Samba. In addition to providing integration between Linux and Windows systems, Samba may also be used to provide folder sharing between Linux systems.

In this tutorial we will look at the steps necessary to share file system resources and printers on a CentOS system with remote Windows and Linux systems.

Read the tutorial here ...



SOURCE

Saturday 2 June 2012

Seeding Torrents with Amazon S3 and s3cmd on Ubuntu

Again a nice post by  . Hope it's useful for some of you out there...

Amazon Web Services is such a huge, complex service with so many products and features that sometimes very simple but powerful features fall through the cracks when you’re reading the extensive documentation.

One of these features, which has been around for a very long time, is the ability to use AWS to seed (serve) downloadable files using the BitTorrent™ protocol. You don’t need to run EC2 instances and set up software. In fact, you don’t need to do anything except upload your files to S3 and make them publicly available.

Any file available for normal HTTP download in S3 is also available for download through a torrent. All you need to do is append the string ?torrent to the end of the URL and Amazon S3 takes care of the rest.

Steps

Let’s walk through uploading a file to S3 and accessing it with a torrent client using Ubuntu as our local system. This approach uses s3cmd to upload the file to S3, but any other S3 software can get the job done, too.
  1. Install the useful s3cmd tool and set up a configuration file for it. This is a one-time step:
    sudo apt-get install s3cmd
    s3cmd --configure
    The configure phase will prompt for your AWS access key ID and AWS secret access key. These are stored in $HOME/.s3cfg, which you should protect. You can press [Enter] for the encryption password and GPG program. I prefer “Yes” for using the HTTPS protocol, especially if I am using s3cmd from outside of EC2.
  2. Create an S3 bucket and upload the file with public access:
    bucket=YOURBUCKETNAME
    filename=FILETOUPLOAD
    basename=$(basename $filename)
    s3cmd mb s3://$bucket
    s3cmd put --acl-public $filename s3://$bucket/$basename
  3. Display the URLs which can be used to access the file through normal web download and through a torrent:
    cat <<EOM
    web:     http://$bucket.s3.amazonaws.com/$basename
    torrent: http://$bucket.s3.amazonaws.com/$basename?torrent
    EOM

Notes

  1. The above process makes your file publicly available to anybody in the world. Don’t use this for anything you wish to keep private.
  2. You will pay standard S3 network charges for all downloads from S3 including the initial torrent seeding. You do not pay for network transfers between torrent peers once folks are serving the file chunks to each other.
  3. You cannot throttle the rate or frequency of downloads from S3. You can turn off access to prevent further downloads, but monitoring accesses and usage is not entirely real time.
  4. If your file is not popular enough for other torrent peers to be actively serving it, then every person who downloads it will transfer the entire content from S3’s torrent servers.
  5. If people know what they are doing, they can easily remove “?torrent” and download the entire file directly from S3, perhaps resulting in a higher cost to you. As a work-around, download the ?torrent URL yourself, save the resulting file, and upload it back to S3 as a .torrent file. Share the torrent file itself, not the ?torrent URL. Since nobody will know the URL of the original file, they can only download it via the torrent. You don't even need to share the .torrent file using S3.
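The work-around in note 5 can be sketched in a few lines of Python; the bucket and key names below are placeholders, so the actual download call is shown but not meant to run against a real bucket:

```python
import urllib.request


def torrent_url(bucket, key):
    # Appending "?torrent" to a public S3 object's URL yields the
    # auto-generated .torrent file for that object.
    return f"http://{bucket}.s3.amazonaws.com/{key}?torrent"


def save_torrent_file(bucket, key, dest=None):
    """Fetch the auto-generated .torrent for an S3 object and save it
    locally, so the .torrent file itself can be shared instead of the
    ?torrent URL."""
    dest = dest or f"{key}.torrent"
    with urllib.request.urlopen(torrent_url(bucket, key)) as resp:
        with open(dest, "wb") as out:
            out.write(resp.read())
    return dest

# e.g. save_torrent_file("YOURBUCKETNAME", "FILETOUPLOAD")
```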
SOURCE

Thursday 31 May 2012

Cloud Computing: US Intelligence, Big Data, and the Cloud at Cloud Expo NY | Cloud Computing Journal

cloud computing: Simple Workflow Service - Amazon Adding One Enterp...


Amazon has announced a new orchestration service called Simple Workflow Service. I would encourage you to read the announcement on Werner's blog, where he explains the need, rationale, and architecture.

cloud computing: 4 Big Data Myths - Part II

This is the second and last part of this two-post series on Big Data myths. If you haven't read the first part, check it out in my previous blog post...


cloud computing: 4 Big Data Myths - Part I



It was cloud then and it's Big Data now. Every time there's a new disruptive category it creates a lot of confusion. These categories are not well-defined. They just catch on. What hurts the most is the myths. This is the first part of my two-part series to debunk Big Data myths...


Tuesday 22 May 2012

Amazon CloudSearch-Information Retrieval as a Service

The idea of using computers to search for relevant pieces of information was popularized in the article “As We May Think” by Vannevar Bush in 1945.
As We May Think predicted (to some extent) many kinds of technology invented after its publication, including hypertext, personal computers, the Internet, the World Wide Web, speech recognition, and online encyclopedias such as Wikipedia: “Wholly new forms of encyclopedias will appear, ready-made with a mesh of associative trails running through them, ready to be dropped into the memex and there amplified.”
According to Wikipedia, Information retrieval (IR) is the area of study concerned with searching for documents, for information within documents, and for metadata about documents, as well as that of searching structured storage, relational databases, and the World Wide Web. There is overlap in the usage of the terms data retrieval, document retrieval, information retrieval, and text retrieval, but each also has its own body of literature, theory, praxis, and technologies.
Purpose of Information Retrieval: To find the desired content quickly and efficiently by simply consulting the index.
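The idea of consulting an index instead of scanning every document can be illustrated with a toy inverted index; the sample documents below are made up for the example:

```python
from collections import defaultdict

# A tiny corpus of documents, keyed by document id.
documents = {
    1: "the cloud stores documents",
    2: "search the index quickly",
    3: "cloud search scales",
}

# Build the inverted index: each term maps to the set of documents
# containing it, so a query needs only one lookup rather than a scan.
index = defaultdict(set)
for doc_id, text in documents.items():
    for term in text.lower().split():
        index[term].add(doc_id)

def search(term):
    return sorted(index.get(term.lower(), set()))

print(search("cloud"))   # → [1, 3]
print(search("search"))  # → [2, 3]
```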
The “News & Announcements” section of the AWS newsletter brings a new surprise every month in terms of Amazon's offerings and how it keeps expanding into new domains. AWS users no longer wonder whether a new offering is coming, only which domain it will target. In April 2012, AWS came up with a new offering: Amazon CloudSearch.
Amazon CloudSearch offers a way to integrate search into websites and applications, whether they’re customer-facing or for use behind the corporate firewall. It’s the same search technology that’s available at Amazon.com.
Amazon CloudSearch is a fully-managed search service in the cloud that allows customers to easily integrate fast and highly scalable search functionality into their applications. Amazon CloudSearch effortlessly scales as the amount of searchable data increases or as the query rate changes, and developers can change search parameters, fine tune search relevance, and apply new settings at any time without having to upload the data again.
http://d36cz9buwru1tt.cloudfront.net/cloudsearch/CloudSearchScaling.png
According to Amazon Web Services Blog,
“CloudSearch hides all of the complexity and all of the search infrastructure from you. You simply provide it with a set of documents and decide how you would like to incorporate search into your application.
You don’t have to write your own indexing, query parsing, query processing, results handling, or any of that other stuff. You don’t need to worry about running out of disk space or processing power, and you don’t need to keep rewriting your code to add more features.
With CloudSearch, you can focus on your application layer. You upload your documents, CloudSearch indexes them, and you can build a search experience that is custom-tailored to the needs of your customers.”

Architecture

Configuration Service: The configuration service enables you to create and configure search domains. Each domain encapsulates a collection of data you want to search.
  • Indexing Options specify which fields to include in the index
  • Text Options specify words to ignore during indexing
  • Rank Expressions determine how search results are ranked
Document Service: the document service is used to make changes to a domain’s searchable data.
Search Service: The search service handles search requests for a domain.
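As a rough sketch, submitting documents to the document service might look like the following. The endpoint host is a placeholder (each search domain gets its own), and the JSON batch shape follows the 2011-02-01 API version; the request is built but deliberately not sent:

```python
import json
import urllib.request

# A document batch in the 2011-02-01 CloudSearch JSON format: each entry
# is an "add" (or "delete") operation with an id, version, and fields.
batch = [{
    "type": "add",
    "id": "doc-1",
    "version": 1,
    "lang": "en",
    "fields": {"title": "Hello CloudSearch", "content": "Example document"},
}]

# Placeholder endpoint -- the real host is shown in the console for
# your search domain.
endpoint = "doc-yourdomain-xxxxxxxxxxxx.us-east-1.cloudsearch.amazonaws.com"
req = urllib.request.Request(
    url=f"http://{endpoint}/2011-02-01/documents/batch",
    data=json.dumps(batch).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
# urllib.request.urlopen(req) would submit the batch; it is left out here
# because the endpoint above is a placeholder.
```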

Features

Benefits

  • Offloads administrative burden of operating and scaling a search platform
  • No need to worry about hardware provisioning, data partitioning, or software patches; these are taken care of by the service provider
  • Pay-as-you-go pricing with no up-front expenses

Pricing Dimensions

  • Search instances
  • Document batch uploads
  • Index Documents requests
  • Data transfer

Pricing

Search Instance Type          US East Region
Small Search Instance         $0.12 per hour
Large Search Instance         $0.48 per hour
Extra Large Search Instance   $0.68 per hour
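Since search instances run continuously, the hourly rates above translate directly into a monthly bill. A quick back-of-the-envelope calculation for one small instance (assuming a 30-day month):

```python
# Rough monthly cost of one Small Search Instance in US East,
# using the hourly rate from the pricing table above.
hourly_rate = 0.12          # USD per hour
hours_per_month = 24 * 30   # 720 hours in a 30-day month
monthly_cost = hourly_rate * hours_per_month
print(f"${monthly_cost:.2f} per month")  # → $86.40 per month
```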

Video Tutorials

Introducing Amazon CloudSearch
To see a summary of Amazon CloudSearch features, please watch this video.
Building a Search Application Using Amazon CloudSearch
To see how to use Amazon CloudSearch to develop a search application, including uploading and indexing a large public data set, setting up index fields, customizing ranking, and embedding search in a sample application, please watch this video.


SOURCE : Amazon CloudSearch-Information Retrieval as a Service.