Diving Deep Into Serverless Architectures (1/2)

The DevOps FAUNCast - A podcast by FAUN

This episode is sponsored by The Chief I/O. The Chief I/O serves Cloud-Native professionals with the knowledge and insights they need to build resilient and scalable systems and teams. Visit The Chief I/O, read our publication, and subscribe to our newsletter and RSS feed. You can also apply to become a writer. Visit www.thechief.io.

In November 2017, The Register published an article, 'Lambda and serverless is one of the worst forms of proprietary lock-in we've ever seen in the history of humanity'. The article goes on to elaborate: "It's code that is tied not just to hardware – which we've seen before – but to a data center, you can't even get the hardware yourself. And that hardware is now custom fabbed for the cloud providers with dark fiber that runs all around the world, just for them. So literally, the application you write will never get the performance or responsiveness or the ability to be ported somewhere else without having the deployment footprint of Amazon."

What happened next was nothing short of spectacular: well-known figures in the cloud computing space such as John Arundel, Forrest Brazeal, and Yan Cui began voicing diverging opinions. Yan Cui is known for his serverless articles on Medium and on his blog. In an article published on lumigo.com titled "You are wrong about vendor lock-in", he wrote:

"The biggest misconception about serverless vendor lock-in arguments is that technology choices are never lock-ins. Being 'locked in' implies that there is no escape, but that's not the case with technology choices. Not with serverless, not with databases, not with frameworks, not with programming languages. Instead, technology choices create coupling, and no matter the choices you make, your solution will always be coupled to something. Moving away from those technologies requires time and effort, but there is always a way out."

I'm your host, Kassandra Russel, and today we are going to discuss serverless architectures.
We will examine arguments for and against this technology. Next, we will discuss architectures, triggers, and use cases for serverless. Most importantly, we will discuss how to get your serverless functions productionized. This episode is the first part of a series about serverless; more topics will be covered in the next episodes. If you are thinking about adopting serverless, or if you are already using it, this episode will give you useful insights, so stay tuned.

Computing started with bare-metal servers, then moved to virtual machines, and later to containers and distributed systems. In 2006, however, a product called Zimki offered the first Functions as a Service, allowing a "pay as you go" model for code execution. Zimki was not commercially successful, but it paved the way for a new business model for computing services: because of Zimki, Functions as a Service, or FaaS, became a new category in the cloud space. In 2008, Google released Google App Engine, which allowed "metered billing" for applications and let developers create functions using a custom Python framework. The limitation of this was glaringly obvious: developers were not able to execute arbitrary code. In November 2014, AWS officially announced AWS Lambda, a fully fledged Functions-as-a-Service platform that allowed the execution of arbitrary code.

In our DevOps weekly newsletter, Shipped, we curate must-read serverless tutorials, news, and stories. Each week, tons of articles are published; we read them for you, choose the best ones, and share them with you. You can subscribe to Shipped by visiting faun.dev/join.

--- Support this podcast: https://podcasters.spotify.com/pod/show/thedevopsfauncast/support
