Left of the Dot Goes to AWS re:Invent 2013 in Las Vegas!

Amazon AWS In Las Vegas!

As the IT Guy at Left of the Dot, every once in a while I get to do something fun that doesn’t involve sitting behind a computer.  This last month I got to visit Las Vegas for the first time to attend AWS re:Invent, Amazon’s yearly conference for talking about and learning all about the Amazon Web Services platform.  As a company we want our websites to be as robust and strong as they can be, and one of the challenges that all tech companies face these days is making sure your site can stand up to becoming popular.  Not just a little popular, but a lot popular.  If your site can’t deliver content because your server is melting down after hitting the front page of Reddit, you’re in trouble.
As a company we built a fair number of sites using “traditional” hosting, simply because that’s what we were used to.  Lately though, we’ve started looking at this “cloud” stuff a bit more seriously, and dipped our toes into using Amazon’s Web Services for some of our needs.  We quickly realized that we didn’t know as much as we needed to be effective, so I was sent to Vegas to learn all I could.
Here are some things I brought back from a week at the conference:

  • First of all, as a photographer, the city was an amazing treasure trove of photographic opportunities, interesting people, and amazing sights.  I’d love to spend a month there so I could come back to the interesting places at different times of day and really get to know the city, not just the area between the hotel and the conference.
  • There is a lot more smoking there than I’m used to.  I’m not sure if it’s a Vegas thing or a Big City thing, but it’s a big change to wander through the hotel lobby and see that many cigarettes going at once.
  • The city itself is very clean.  While there are homeless people hanging out along the overpasses, everything is polished, from the banister rails to the hanging pictures selling $5,000 watches.
  • If you want to go anywhere you have to go through the casino.  I was in the Wynn Encore (one of the hotels the conference had a deal with), so I was right on the strip, but to get to or from the conference, to or from the hotel room, or heck, to a restaurant to buy a $26 burger, you went through a twisty maze of passages past about 500 opportunistic slot machines.

Las Vegas

  • Speaking of $26 hamburgers, everything is expensive.  A plain black coffee was $3.50 (thank heavens Amazon provided free coffee to attendees!), and if you wanted fries with your $26 burger you paid an extra $10 for them.  Obviously on the strip everything is more expensive, but still…
  • I got to experience my first Las Vegas show!  Cirque du Soleil’s “Mystère”, which can best be described as a stage throwing SPECTACLE all over you.  It was pretty amazing (though I really have nothing to compare it to).

Ok, on to the stuff that’s actually relevant.
Amazon’s AWS is split into a few different parts.  EC2 is their Elastic Compute Cloud component, think virtual machines.  CloudFront is a Content Delivery Network, RDS is a hosted database backend, and S3 is storage hosting (this is the one that most people are probably familiar with).  The trick is learning as much as possible about the system as a whole to let you use the individual components more efficiently.
For example, we’re using the EC2 side of the system right now with the old-school thinking.  EC2 shouldn’t be thought of as a drop-in replacement for traditional hosting, where you create a server (an “instance”), log in, configure your system, upload your data, reboot for updates, and so on.  That completely misses out on the magic and “elastic” part of cloud computing.  Instead we have to think of our EC2 instances as replaceable, destroyable computing units.  The magic of elastic computing is the ability to scale.  When your server starts getting a lot of load on it, the system (using the Elastic Load Balancer) simply starts a new instance (or 10 new instances), pre-configured to start, connect to your database and resources, and then take on some of the load.  When the load goes down (or you drop off the front page of Reddit) your virtual machines are destroyed, or Terminated in EC2 parlance.  If you are used to traditional hosting this is weird with a capital W.  This is like someone coming into your data center, ripping a machine off the rack, and tossing it in the garbage.
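The scale-out/scale-in behaviour above can be sketched as a simple control loop.  This is a hypothetical illustration, not Amazon’s actual implementation; the function name, CPU thresholds, and doubling/halving policy are all made up for the example.

```python
# Hypothetical sketch of an auto-scaling decision loop.  The thresholds
# and the double/halve policy are illustrative assumptions, not AWS's
# real algorithm.

def desired_instances(avg_cpu_percent, current_count,
                      scale_out_at=70, scale_in_at=25,
                      min_count=1, max_count=10):
    """Return how many instances the group should run given the load."""
    if avg_cpu_percent > scale_out_at:
        return min(current_count * 2, max_count)   # spin up more capacity
    if avg_cpu_percent < scale_in_at:
        return max(current_count // 2, min_count)  # terminate the extras
    return current_count                           # steady state

# A Reddit-front-page spike doubles the fleet; quiet hours shrink it.
print(desired_instances(85, 2))  # -> 4
print(desired_instances(10, 4))  # -> 2
```

The point is that no human logs in to rack or unrack anything; instances exist only as long as the load justifies them.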
But if you architect your system to deal with this, use some of the built-in magic to auto-configure your machines when they start up, store your user-uploaded resources in Amazon’s storage hosting, and keep your database on Amazon’s remote database service, suddenly you can have a system where your hosting machines simply come into being and are terminated on a whim.
It also leads to a different way of thinking.  Let’s say you have a huge import process that’s run weekly, or daily.  Refreshing a product list from a partner site involves a bunch of computation and hoops to jump through, and takes 10 hours every weekend.  Why have this take 10 hours when you can simply (well, for various definitions of “simply”) have a script that starts up a super-high-memory, super-high-compute machine with the code needed on it, connects to the database, downloads the update from the partner, spends 1 hour cranking through it at full power, saves the results, and then shuts the machine down?  Running 10 computers for 1 hour costs the same as running 1 for 10 hours in the cloud.  It’s really a different way of thinking.
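The cost math behind that works out because cloud billing is per instance-hour.  A quick sanity check, using a made-up $0.50/hour rate (the actual rate depends on the instance type):

```python
# Illustrative instance-hour billing math.  The rate is a made-up
# example figure; the equality holds for any per-instance-hour price.
RATE_PER_INSTANCE_HOUR = 0.50  # dollars, hypothetical

def job_cost(instances, hours):
    """Total cost of running `instances` machines for `hours` each."""
    return instances * hours * RATE_PER_INSTANCE_HOUR

slow = job_cost(1, 10)   # one machine grinding all weekend
fast = job_cost(10, 1)   # ten machines finishing in an hour
print(slow, fast)        # -> 5.0 5.0 : same bill, 10x less waiting
```

Same bill either way, so the only question is how long you’re willing to wait for the results.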
Now the challenge is how to move our traditional systems to this new way of thinking.  We’re already taking strides here, and have a beta version of Oahu.com with a CDN in front, a load-balanced set of servers, and a fully baked server instance that is configured to work the same and allow the site to run regardless of whether there’s one copy running or ten.  This took about 2 days of work to set up, and most of that time was fighting with the CDN setup, which turned out to be as simple as one of the components having the wrong name.  Over the next week we’ll update the main site to use this new technology, and then roll it out to other sites. [Editor’s note: stay tuned for a brand new Oahu.com early in 2014]
All this isn’t even using a tenth of the power that AWS provides.  There’s IAM, an identity management system to grant access to users and resources in a controlled manner, and there’s the ability to use tags so that servers and services auto-configure themselves based on tags such as “webserver”, “devdatabase”, or “firewall”.
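The tag idea can be sketched like this.  In practice an instance would read its own tags from the EC2 API at boot; here the tags are passed in directly, and the role names and recipes are hypothetical examples, not anything AWS ships.

```python
# Hypothetical tag-driven self-configuration.  The roles and recipes
# are invented for illustration; a real instance would fetch its tags
# from the EC2 API at boot and act on them.

ROLE_CONFIG = {
    "webserver":   {"packages": ["nginx"],      "open_ports": [80, 443]},
    "devdatabase": {"packages": ["postgresql"], "open_ports": [5432]},
}

def configure_from_tags(tags):
    """Pick a setup recipe based on the instance's 'role' tag."""
    role = tags.get("role", "webserver")  # default role if untagged
    return ROLE_CONFIG[role]

print(configure_from_tags({"role": "devdatabase"})["open_ports"])  # -> [5432]
```

The payoff is that a freshly launched instance needs nothing but its tags to know what it’s supposed to become, which is exactly what makes the “terminate on a whim” model workable.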
The conference mostly consisted of hour-long sessions led by Amazon employees giving talks about the various components, business strategies, architecture, or deep dives into the tech.  There were also some really illuminating sessions from companies that are on the AWS bandwagon.  A memorable one was by Loggly, called Unmeltable Infrastructure at Scale.
The conference ended with a huge party with Deadmau5 as the DJ, food, booze, and a ton of fun geeky stuff like arcade games, a laser obstacle course, a trivia challenge station, and some giant board games.  It was an absolute blast, and I’m still reeling while processing the amount of new information.  Luckily I have the support of my bosses, who give me the freedom to implement “cool stuff” as long as it makes “our stuff work better”, as well as a hardworking and brilliant set of developers that I can depend on to implement (and question) these new technologies.