Posts Tagged ‘data’

Cloud Computing in 2013

February 27th, 2013



How much do you know about cloud computing? Does your business reap the benefits of cloud storage and desktop virtualisation? If not, perhaps you should take a look at this infographic, created by VESK. Cloud computing is a major buzzword in the world of IT at the moment, and with good reason.

Cloud data storage and virtual desktops are the ideal solution for businesses to streamline IT infrastructure. They can dramatically reduce IT costs and provide enhanced security for your business’ sensitive data.

Cloud computing is not as new as you might think: data has been stored in the cloud in one form or another for several years now, perhaps without you even realising it. As with any technology, cloud computing has developed incredibly quickly in recent years, and we expect some dramatic developments in the coming year. So what are the predictions for cloud computing in 2013?

Take your head out of the clouds and find out why cloud computing is so important to businesses in 2013 with our easy-to-digest infographic.

This infographic was created by VESK – a UK company that specialises in virtual servers and hosted desktops.



Best Online Backup Services

February 17th, 2013

Everything is stored in files and folders, whether photos, videos, music, or any other sort of information. These days, people have to take care of their files because there is always a risk of them being deleted or becoming inaccessible. As we save more and more of our lives onto our computers, backups are constantly required to keep that data safe. An online backup facility gives users a managed, remote copy of their files and folders, usually through dedicated client software. Many people hire professionals to recover lost data, but recovery is not always possible, which has further increased the demand for online backup among people who cannot afford to lose their personal files and information.

An online backup can save you from starting work from scratch when you have important deadlines to meet or a large amount of data to restore. There are many online backup services that specialise in providing this to people in all parts of the world. Anyone who has learned a hard lesson after losing important data should consider an online backup service to protect against deletion, file corruption and, most commonly, Windows corruption.

Some people back up their files manually and assume that is adequate. However, manual backups have several problems. First, the backup copy, whether on an external hard drive, USB drive or CD, usually ends up stored in the same physical location as the computer. In a disaster such as a flood, fire or earthquake, both the computer and the backup can be destroyed together. Similarly, if people forget to back up for a while, they can lose valuable work and information, which might land them in real trouble.

Many online backup services are available these days that can safeguard all sorts of data on their servers. Do an adequate amount of research before trusting one with this significant task. The service's software should be user friendly; that is an important factor when choosing between providers. Syncing should be available, since multiple files often need to be kept in sync with the computer. Generous storage space matters too, as it determines how much can be kept in the backup. Finally, one of the most important things to do before choosing a service is to put its reputation to the test: online consumer reviews and customer testimonials are the best ways to do that.
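For anyone who does stick with manual backups in the meantime, the bare minimum is to verify that the copy actually matches the original. Here is a minimal sketch of that routine; the function names are my own, and a real setup should still send the copy somewhere offsite:

```python
import hashlib
import shutil
from datetime import datetime
from pathlib import Path

def _sha256(path):
    """Checksum a file in chunks so large files don't exhaust memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def backup_with_verification(source_dir, backup_root):
    """Copy source_dir into a timestamped folder under backup_root,
    then verify every file by comparing SHA-256 checksums."""
    source = Path(source_dir)
    dest = Path(backup_root) / datetime.now().strftime("backup-%Y%m%d-%H%M%S")
    shutil.copytree(source, dest)
    for original in source.rglob("*"):
        if original.is_file():
            copy = dest / original.relative_to(source)
            if _sha256(original) != _sha256(copy):
                raise IOError("Backup verification failed for %s" % original)
    return dest
```

The timestamped folder means repeated runs never overwrite an earlier backup, which covers the "forgot to back up for a while" problem at least partially.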

Amazon Redshift – Data Warehouse in the Clouds

February 16th, 2013

Amazon announced Redshift this week. More precisely, they announced general availability; the service itself was first announced late last year.

Redshift is a new service that leverages the Amazon AWS infrastructure so that you can deploy a data warehouse. I'm not yet convinced that I would want my production data warehouse on AWS, but I can really see the use in dev and test environments, especially for integration testing.

According to Amazon: "Amazon Redshift is a fast, fully managed, petabyte-scale data warehouse service that makes it simple and cost-effective to efficiently analyze all your data using your existing business intelligence tools. It is optimized for datasets ranging from a few hundred gigabytes to a petabyte or more and costs less than $1,000 per terabyte per year, a tenth the cost of most traditional data warehousing solutions."

A terabyte warehouse for less than $1,000 per year. That is fantastic. For one financial services firm where I created a 16TB warehouse, the price for hardware and database licensing was several million dollars. That was just startup costs. Renewing licenses ran into the tens of thousands of dollars per year.

Redshift offers optimized query and IO performance for large workloads. It provides columnar storage, compression and parallelization to allow the service to scale to petabyte sizes.
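To see why columnar storage helps a warehouse, here is a toy illustration (my own, and nothing like Redshift's actual on-disk format): the same table laid out row-wise and column-wise, with simple run-length encoding applied per column.

```python
rows = [
    ("2013-02-16", "GBP", 100),
    ("2013-02-16", "GBP", 250),
    ("2013-02-16", "USD", 75),
]

# Row store: values of different types interleaved; hard to compress,
# and a query must read every row in full.
row_store = [value for row in rows for value in row]

# Column store: each column is a homogeneous run of values. A query
# that only touches "amount" never reads the other columns at all.
col_store = {
    "date":     [r[0] for r in rows],
    "currency": [r[1] for r in rows],
    "amount":   [r[2] for r in rows],
}

def run_length_encode(values):
    """Collapse consecutive repeats into (value, count) pairs."""
    encoded = []
    for v in values:
        if encoded and encoded[-1][0] == v:
            encoded[-1] = (v, encoded[-1][1] + 1)
        else:
            encoded.append((v, 1))
    return encoded
```

Repetitive columns like dates and currency codes collapse to almost nothing, which is exactly the kind of data warehouses are full of.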

I think one of the more interesting specs is that it can use standard Postgres drivers. I haven't seen anywhere, yet, that says specifically that this was built on Postgres, but I am inferring that.

Pricing starts at $0.85 per hour, but with reserved pricing you can get that down to $0.228 per hour. That brings it down to under $1,000 per terabyte per year. You just can't compete with this on price in your own data center.

If you want to scale to a petabyte, you need petabyte-scale capacity in place. In your own data center, that is going to cost you a fortune. Once again, AWS takes the first step in moving an entire architecture into the cloud. Is anyone else offering anything close to this? I guess Oracle's cloud offering is the closest, but, as far as I know, they are not promoting warehouse-sized instances yet.

Did I say it’s scalable?

Scalable – With a few clicks of the AWS Management Console or a simple API call, you can easily scale the number of nodes in your data warehouse up or down as your performance or capacity needs change. Amazon Redshift enables you to start with as little as a single 2TB XL node and scale up all the way to a hundred 16TB 8XL nodes for 1.6PB of compressed user data. Amazon Redshift will place your existing cluster into read-only mode, provision a new cluster of your chosen size, and then copy data from your old cluster to your new one in parallel. You can continue running queries against your old cluster while the new one is being provisioned. Once your data has been copied to your new cluster, Amazon Redshift will automatically redirect queries to your new cluster and remove the old cluster.
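The capacity figures in that quote check out; a quick sketch (the function name is my own):

```python
def cluster_capacity_tb(nodes, tb_per_node):
    """Total compressed user data a cluster of identical nodes can hold."""
    return nodes * tb_per_node

smallest = cluster_capacity_tb(1, 2)     # one 2TB XL node
largest = cluster_capacity_tb(100, 16)   # a hundred 16TB 8XL nodes: 1,600TB, i.e. 1.6PB
```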

Redshift is SQL based, so you can access it with your normal tools. It is fully managed, so backups and other admin concerns are automated. I'm not sure yet what tools you would use to design your database schemas: since the database uses a columnar store, and your data is replicated across multiple nodes, a design tool would need to be aware of both.

You can also source data from Amazon RDS, MapReduce or DynamoDB, and you can pull data directly from S3. All in all, I'm pretty excited to see this offering. I hope I get a client who wants to take a shot at this. I like working on AWS anyway, but I would love to work on a Redshift gig.
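Since loading directly from S3 is supported via Redshift's COPY statement, a load might look something like this. The table name, bucket path and credentials below are placeholders, and actually executing the statement would need a live cluster plus a Postgres driver such as psycopg2:

```python
def s3_copy_statement(table, s3_path, access_key, secret_key):
    """Build a Redshift COPY statement that loads a table straight
    from S3 (all arguments here are hypothetical examples)."""
    return (
        "COPY %s " % table
        + "FROM '%s' " % s3_path
        + "CREDENTIALS 'aws_access_key_id=%s;" % access_key
        + "aws_secret_access_key=%s' " % secret_key
        + "DELIMITER ','"
    )

sql = s3_copy_statement("sales", "s3://my-bucket/sales/",
                        "AKIA_EXAMPLE", "SECRET_EXAMPLE")
# With a live cluster: cursor.execute(sql) through any Postgres driver.
```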




Using and Managing AWS – Part 6: SSH Key Pairs

May 26th, 2009

Generate Your Keys

Now that you have chosen your instance, but before you actually start it, you need to generate your key pairs. These are SSH keypairs. A later post will explain SSH in greater detail, but the keys come in a pair because there are both public and private components.

SSH stands for Secure SHell. It provides a command prompt, like a DOS box or a telnet connection. However, unlike DOS and telnet, it is very secure. The private key is the local machine's secret; the public key is shared with any host that the local machine will connect to.

After seeing the public key, the host can issue a challenge that only someone holding the private key could answer. The private key is never shared, yet the host is convinced that it is talking to the person (or machine) it says it is.
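To make the challenge-and-answer idea concrete, here is a toy, deliberately insecure illustration using textbook RSA with tiny numbers; real SSH keys are thousands of bits, and this is my own sketch of the concept rather than what SSH literally does on the wire:

```python
# Private key (n, d) stays on the local machine; public key (n, e)
# is shared with any host we connect to.
p, q = 61, 53
n = p * q    # 3233
e = 17       # public exponent
d = 2753     # private exponent: (e * d) % ((p - 1) * (q - 1)) == 1

def host_verifies(challenge, answer, public_n, public_e):
    """The host checks that the answer, transformed with only the
    PUBLIC key, turns back into the challenge it chose."""
    return pow(answer, public_e, public_n) == challenge

challenge = 1234                  # random number picked by the host
answer = pow(challenge, d, n)     # computed locally with the PRIVATE key
```

Only the private-key holder can produce an `answer` that passes the check, so the host learns who it is talking to without the private key ever leaving the local machine.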

This may sound confusing, but it is actually very secure. It is much better than passwords, which can be cracked or accidentally given away.

Amazon supports SSH and secure communications out of the box. If you choose to revert to simple protocols such as telnet and ftp and to password authentication, you may do so. However, your first connection to any instance started through AWS will have to be via SSH. Amazon makes it easy to be secure but gives you the option of making it less secure.

So at least one pair of keys needs to be generated. Each tool set creates the files in a different way. If you are running the command line tools, you will run the ec2-add-keypair program. If running ElasticFox or CloudStudio, you will have a button in the GUI. However you create the keypair, the end result is a file in .pem format.

When running SSH (and the tools) from a Windows client, you will need to convert the .pem file to a PuTTY-formatted key file. PuTTY, like SSH, will be documented in greater detail in a future post. Review that post for tips on converting SSH keys to PuTTY format.

You choose an instance’s keypair when you start it, and you cannot change it after the instance is running. Generate your key pair and get it working first.

Using and Managing AWS – Part 5: Choosing a Machine Image

May 21st, 2009

Choose an AMI

Amazon, and Amazon's partners, provide a huge variety of machine images. The short story is that you can choose between MS-Windows, Linux and Sun Solaris for your OS. The real story is a bit more complicated than that.

The real question is: what applications do you plan to run, and what expertise do you have on hand or plan to hire? A quick example is a database like MySQL. MySQL runs on various operating systems. If you have Windows expertise, you may want to stick with Windows. On the other hand, you can run Linux instances with MySQL pre-installed and configured.

Think about the stack that you want to run. I generally run Linux instances. They are a few cents cheaper per CPU hour, and I am good enough with Linux that it doesn't cause me any issues. I can run Oracle, MySQL and Postgres side by side. I do occasionally run Windows instances, though, just to compare offerings.

If you run SQL Server, you will need to run Windows. Almost any other software stack offers a choice of OS. If you do run Windows, you will be running Windows Server 2003, in either 32 or 64 bit. SQL Server can be the Express Edition or the full-blown commercial edition (which costs extra for licensing).

If you want to run Solaris, you currently have to register with Sun to get access to the OpenSolaris instance. It’s free but it requires registration. With OpenSolaris you get DTrace and ZFS, two selling points for many people.

You get OpenSolaris 2008.05 or Solaris Community Edition, and pricing is the same as a Linux install. You can find AMIs with an AMP stack preinstalled, as well as stacks like Drupal and MySQL.

For Linux installs, the choices are almost limitless: Fedora, CentOS, Ubuntu, Oracle Unbreakable Linux, Red Hat. You name it, it's probably there. Many of these come with pre-installed software stacks: no download and configure, just run.

Plan to try many instance types. You may even end up with a RightScale instance.