09 March 2016 5:20PM

Over the past few years, the Amazon cloud has made us very happy with auto-scaled clusters of EC2 server instances behind a load balancer. If one of your servers goes down, a new one is launched automatically. The same happens when unexpectedly high peaks of traffic occur: extra servers are started.

Although this is pretty cool, there are disadvantages.

  • First of all, you need to keep a minimum number of server instances alive to be able to serve any visitors at all, even when you have no traffic - or revenue. This costs money.
  • Secondly, because cloud instances run installed software on top of an operating system, you must not only maintain your own code, but also keep the server software up to date and operational.
  • Third, you can't scale up or down in a granular way, only one whole server at a time.

In an architecture based on an API with microservices, this means a lot of overhead for small tasks. Luckily, there are now options that solve this.

AWS Lambda and API Gateway are two such options: Lambda runs your code on demand and bills per invocation, while API Gateway exposes that code as HTTP endpoints, so there are no idle servers to pay for or patch.
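To make this concrete, here is a minimal sketch of a Lambda function behind an API Gateway endpoint, assuming the Python runtime and API Gateway's proxy integration; the function and parameter names are illustrative, not from any particular project.

```python
import json


def handler(event, context):
    # With API Gateway's Lambda proxy integration, the incoming HTTP
    # request is passed in as `event`; query string parameters arrive
    # under the "queryStringParameters" key (None when absent).
    params = event.get("queryStringParameters") or {}
    name = params.get("name", "world")

    # The response must be a dict containing a statusCode and a
    # string body; API Gateway turns it back into an HTTP response.
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": "Hello, {}!".format(name)}),
    }
```

Because the handler is just a function, AWS only charges for the milliseconds it actually runs, which is exactly the granular scaling the bullet points above say EC2 instances cannot give you.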

Managing these can get a little cumbersome at times; however, the Serverless Framework is a great tool for managing your Lambda and API Gateway deployments in AWS.

Check it out here: www.serverless.com.
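As an illustration, a minimal `serverless.yml` that wires one function to an HTTP endpoint could look like the sketch below; the service name, region, and handler path are placeholders, and the exact syntax depends on your framework version.

```yaml
service: hello-service        # placeholder service name

provider:
  name: aws
  runtime: python3.9
  region: eu-west-1           # placeholder region

functions:
  hello:
    handler: handler.handler  # file handler.py, function handler
    events:
      - http:                 # creates the API Gateway endpoint
          path: hello
          method: get
```

A single `serverless deploy` then packages the code, creates the Lambda function, and sets up the API Gateway route, instead of you clicking through the AWS console for each piece.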

We're always happy to consult with you on your serverless requirements.