Access and Feeds

Serverless Computing: Barriers Still Exist Which Prevent More Widespread Adoption

By Dick Weisinger

Serverless computing is a kind of nirvana for software developers. It’s a world with no operating system or machine compatibility worries, no OS patching, and none of the headaches of server maintenance. Upload your code and see it run. It is billed like a utility: you pay only for the machine cycles you use. And scaling up or down requires no fiddling with anything on the server or hardware side.
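
To make the “upload your code and see it run” idea concrete, here is a minimal sketch of a serverless function in the AWS Lambda style, written in Python. The event field (“name”) and the greeting are purely illustrative assumptions; the point is that the developer ships only this handler, while the platform handles provisioning, patching, and scaling around it.

```python
# Minimal AWS Lambda-style handler: the platform provisions, patches, and
# scales the machines; the developer supplies only this function.
import json


def handler(event, context):
    # 'event' carries the request payload; 'name' is a hypothetical field
    # used here purely for illustration.
    name = (event or {}).get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }


if __name__ == "__main__":
    # Local smoke test; in production the platform invokes handler() directly.
    print(handler({"name": "serverless"}, None))
```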

Jeffrey Hammond, vice president and principal analyst at Forrester, said that “if I just basically want to run my code and you worry about scaling it then a serverless approach is a very effective way to go. If I don’t want to worry about having to size my database, if I just want to be able to use it as I need it, serverless extensions for things like Aurora make that a lot easier. So basically as a developer, when I want to work at a higher level, when I have a very spiky workload, when I don’t particularly care to tune my infrastructure, I’d rather just focus on solving my business problem, a serverless approach is the way to go.”

Despite its benefits, serverless has been slow to gain traction. Not all programming languages are currently supported by serverless platforms like AWS Lambda and Azure Functions, so you’ll need to stick to the languages that are supported or use a wrapper around your code, at the cost of a performance penalty. And since you no longer have visibility into the back end, debugging and measuring performance become much harder to do.
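
As a rough sketch of that wrapper workaround, and of where its performance penalty comes from, the Python handler below shells out to a precompiled binary written in an unsupported language. The binary name (./my_tool) and its JSON-over-stdin/stdout contract are assumptions made for illustration; what matters is that every invocation pays for an extra process spawn plus serialization on top of the actual work.

```python
# Sketch of the "wrapper" workaround: a supported runtime (Python here)
# shells out to a binary written in an unsupported language. The extra
# process spawn and serialization on every invocation is where the
# performance penalty comes from.
import json
import subprocess


def handler(event, context):
    payload = json.dumps(event or {})
    # './my_tool' is a hypothetical precompiled binary bundled with the
    # deployment package; assume it reads JSON on stdin and writes JSON
    # to stdout.
    result = subprocess.run(
        ["./my_tool"],
        input=payload,
        capture_output=True,
        text=True,
        check=True,
    )
    return {"statusCode": 200, "body": result.stdout}
```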

There also really aren’t any standards when it comes to serverless. Writing serverless functions on one platform effectively locks you into that platform, and migrating to something else comes at a cost of time and developer effort.
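
One way to picture that lock-in: each platform defines its own entry-point signature and event shape, so the handler layer is inherently non-portable even when the business logic is plain code. The sketch below assumes an AWS S3-style event; the function names and the image-resizing stub are hypothetical.

```python
# Plain Python business logic: knows nothing about any cloud provider.
def resize_image(bucket: str, key: str) -> str:
    # Real image processing would go here; this stub is illustrative.
    return f"resized {key} from {bucket}"


# AWS Lambda-style entry point: takes (event, context) and unpacks an
# AWS-specific S3 event shape.
def aws_handler(event, context):
    record = event["Records"][0]["s3"]
    return resize_image(record["bucket"]["name"], record["object"]["key"])


# Another provider expects a different entry-point signature and a different
# event shape, so this adapter layer (plus any platform-specific services the
# function calls) is what has to be rewritten during a migration.
```

Keeping the adapter thin limits, but does not remove, the migration cost of time and developer effort described above.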

A final issue with serverless is performance. Your code only runs on demand. That’s great, but it means that if your code hasn’t been active for a while, it won’t be loaded in memory on the platform machine where it runs. This is called the “cold start” problem: the latency of initializing the software before it can run results in a performance hit.
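
The sketch below, a self-contained Python simulation rather than any particular platform’s behavior, shows where cold-start latency comes from: everything at module level (importing libraries, loading models, opening connections) runs when a fresh container starts, and only warm invocations get to skip it. The two-second sleep is a stand-in figure, not a measured number.

```python
import time

# Everything at module level runs once per container instance. On a cold
# start the platform must execute all of this before it can serve the first
# request, which is where the extra latency comes from.
print("cold start: initializing...")
time.sleep(2.0)  # stand-in for heavy setup (big imports, models, connections)
print("cold start: initialized")


def handler(event, context):
    # Warm invocations reuse the already-initialized module, so only this
    # cheap body runs.
    return {"statusCode": 200, "body": "ok"}


if __name__ == "__main__":
    # The init above already ran when this process started; these timed calls
    # show how cheap warm invocations are by comparison.
    for i in range(3):
        t0 = time.perf_counter()
        handler({}, None)
        print(f"warm invocation {i}: {time.perf_counter() - t0:.6f} s")
```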

Mike Loukides, vice president at O’Reilly Media, told InfoWorld that some of the performance issues with serverless are “due to technical issues that won’t go away. Designing systems that can tolerate lots of latency is a big architectural challenge. But it’s pretty clear that a fair number are making it work. Whether they have dealt with the architectural issues, ignored them, or have a use case where it doesn’t matter is an interesting question.”
