PODCAST

Going serverless with Adnan Rahic

Transcript

Kevin Montalbo 

Serverless computing is growing in popularity and is heavily promoted by public cloud providers. The most touted benefit of serverless computing is that it allows developers to focus on their code while the public cloud provider manages the environment and infrastructure that will be running it. But how is serverless different from container-based services? What are the best use cases for serverless? How about the challenges, and how can this architecture move forward in the future? We answer these questions and more in this episode of Coding Over Cocktails.


Aaren Quiambao

Welcome to Coding Over Cocktails, a podcast by Toro Cloud. Here we talk about digital transformation, application integration, low-code application development, data management, and business process automation. Catch some expert insights as we sit down with industry leaders who share tips on how enterprises can take on the challenge of digital transformation. Take a seat. Join us for a round. Here are your hosts, Kevin Montalbo and Toro Cloud CEO and founder David Brown.


Kevin Montalbo

All right. Joining us all the way from Australia is Toro Cloud CEO and founder David Brown. Hi, David. How have you been?


David Brown

Good day, Kevin. I'm very well. You?


Kevin Montalbo

I'm great, and our guest for today is a developer evangelist at Sematext.com, a SaaS company based out of Brooklyn, New York. He is a passionate teacher helping people embrace software development and healthy DevOps practices in various venues since 2017. He's also the author of Node.js Monitoring: The Complete Guide and has several published articles, programming tutorials, and courses under his belt, featured on websites such as freeCodeCamp, Hacker Noon, Medium, and dev.to. He's now here with us to share his expertise on serverless computing. Joining us for a round of cocktails, Adnan Rahic. Hey, Adnan, great to have you on the podcast.


Adnan Rahic

Hey, good to be here.


Kevin Montalbo

All right. So let's dive right in. In our previous podcasts, we have often discussed Kubernetes and container-based approaches to microservices. Can you briefly explain to us how serverless is different from container-based services?


Adnan Rahic

Yeah, for sure. I mean, when you think about it, with containers you get a package where your code runs. You basically package your code into an executable and then you run this on an infrastructure, right? And they're quite logically called containers because of this. But with serverless, you don't really get that. With serverless, you just deploy your code directly to the cloud provider, and then the cloud provider handles everything from there. You don't really care about the dependencies, you don't really care about the runtime or anything like that. You just let the cloud provider handle all of that for you. Whilst with containers, you kind of have to package all of those things within that container. So you have to figure out, OK, I need to package the runtime and the dependencies, I need to manage all of that, I need to make sure that's all running correctly. So, you know, having this serverless approach kind of makes it easy in one sense.

But it can also be very complex in another sense, because if you overdo it, it gets really hard to manage all of that complexity. And then, when you think about it, it can also reduce complexity, because if you have a huge Kubernetes cluster, for example, or a huge monolith, and then you have things like cron jobs or email services or things that aren't really related to the core functionality of your actual cluster or of your product, you can then cut those pieces out into serverless functions that would basically be isolated. So if you know how to use it correctly, or if you have a very good sense of how to get the best out of it, then it makes sense. But it's not a silver bullet. As with anything, you have to figure out the best use case, and then, based on that, what it's kind of intended to be used as, if that makes any sense.


David Brown

Yeah. Good stuff. I mean, we'd like to get to the use cases and some of the challenges and complexities you mentioned in a minute. Before we get onto that: serverless is often mentioned in reference to Functions as a Service, but serverless is broader than that, right? So it's encompassing more than just Functions as a Service.


Adnan Rahic

Yeah, definitely. Definitely. Basically, anything that doesn't require a server can be considered serverless, right? But Functions as a Service, that's a subset, you could call it. Basically, if you think about services like Lambda or Azure Functions or things like that, those are all FaaS. I mean, FaaS, we call them Function as a Service, where you have this service where you can deploy your code, hook it up to an event, something triggers that code, it runs, something happens, and you get a return value, which is basically what you want. And that's just one subset of having serverless or using serverless. If you think about it, if you're running a website, a super simple static website on S3, that's serverless as well.

Are you managing a server? No. You have S3, you slap your files in there, you hook it up to a domain, and it's served, right? So it's very vague as to what it can be defined as, but it's also very loose, in a way where, if you're running a website on Netlify and you're hooking up an API to some Lambda functions, or using services like Vercel, or you're just running it by yourself on AWS Lambda and S3, all of those things could be considered serverless. Because, I mean, have you ever touched an EC2 instance? Not really. No. Right? So it can still be considered that way. I know a lot of people that are, like, hardcore purists, they're gonna say, this is so weird. So, I mean, maybe yes, maybe no. In the end, whatever floats your boat.

Yeah, I mean, the whole point of serverless is to make it simple, to make it easy for people that don't need to manage infrastructure. I mean, hypothetically, if I'm a startup founder, I don't really wanna care about managing containers and instances, and running them, and hooking all of these things up, and getting a really large bill for something. I don't really need that. Unless, you know, I'm making a ton of money and then I need to employ tons of people to run that so I don't have downtime. Sure, yeah, that's the next logical step. But if I'm not...
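As a concrete sketch of the Function-as-a-Service model Adnan describes (deploy a function, hook it to an event, get a return value back), here is a minimal Node.js example. The handler shape mirrors AWS Lambda's, but the event fields and the local mock invocation are illustrative assumptions, not anything from the episode.

```javascript
// Minimal Lambda-style handler: the provider wires an event (HTTP request,
// queue message, cron tick) to this function and returns whatever it returns.
// There is no server of our own to manage. In a real Lambda this would be
// assigned to `exports.handler`.
const handler = async (event) => {
  // Hypothetical API Gateway-style event shape, for illustration only.
  const name = (event.queryStringParameters || {}).name || "world";
  return {
    statusCode: 200,
    body: JSON.stringify({ message: `Hello, ${name}!` }),
  };
};

// Locally we can simulate the trigger by calling the handler with a mock event.
handler({ queryStringParameters: { name: "Adnan" } })
  .then((res) => console.log(res.statusCode, res.body));
```

In production the same function would be zipped or containerized and uploaded; the mock invocation at the bottom just shows the event-in, value-out contract.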


David Brown

Well, there's plenty of managed services for containers as well. So, you know, managed Kubernetes and, as you say, managed virtual servers through EC2, or container-based services as well. So there's plenty of opportunity for managed infrastructure and containers. But I guess that sort of starts leading us down the path. And I guess one thing we want to clarify: sometimes when we're talking about best use cases or complexities or challenges, we're actually talking about Functions as a Service.

We're talking about that subset. So I think we just need to clarify that. But let's maybe talk about some of the best use cases for serverless then. So you said, you know, it depends on the use case when you use serverless and when you use microservices and container-based technologies. So let's run through some of that, some of the differentiations between serverless and microservices that are based on containers.


Adnan Rahic

Yeah, for sure. I mean, to keep it simple: anything that requires a persistent database connection, or requires many database connections, especially to relational databases like Postgres or MySQL, whatever. Yeah, just skip FaaS altogether. Unless, if I go really technical into it, unless you have a proxy API that hooks into your database, then it's fine, but that requires another layer of complexity that often you don't really want, except if that's a use case that you're OK with. Because the problem with functions is that if you run one function, that's basically one API. And if you think about it, that one API needs a connection to the database. And if you're scaling out and you have thousands of functions, then you have thousands of connections to your database. And that's just an accident waiting to happen.

That's just running with scissors, right? You don't want to do that. It's unnecessary load on the database, it's unnecessary connections, multiple points of failure, multiple points of breaches. So, I mean, you just don't really want to do that, right? Unless you're using a database that's a service as well, that hooks into that FaaS ecosystem. AWS has DynamoDB, which works fine. Azure has Document DB, or I don't really know what it's called now. So any service that can hook into it, it's fine. But that also increases, I mean, you get vendor lock-in there. So if you want to move away from that, you're gonna have a pain, and basically anything that goes with that. So I reckon if you have database connections, figure something else out. But anything else, basically, you can think of it as sidecars. So if you have cron jobs that are running, you don't really need to run those in your core infra. Like, if you have a core server that handles your main APIs, your main database handling, whatever, you don't really need to run those cron jobs there. You can just fire a Lambda, right? Or if you have email services, or any type of service, an API that you can extract from your core product.

Great, because you have that one less thing to think about, and that's gonna be less of a load on your entire system. So regarding those things: amazing. That's absolutely great. One example is I built an email service that gets triggered through a Lambda function and another few services through AWS, so that when somebody types in a form, I get emailed that response or that question, and then I can just email back that person through any email client. And that's not running anywhere, like, that's not running on a server, that's not taking up any space or any mental capacity for myself to have to focus on getting it running and keeping it running. It's just there, in a function, in my account in AWS. So things like that are absolutely amazing, because it takes away all of the stress of having to manage it. Unless it's databases. You don't wanna go into that wormhole.
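The thousands-of-connections problem Adnan warns about is usually mitigated, when a function really must talk to a database, by creating the client once, outside the handler, so warm invocations reuse it instead of opening a fresh connection per call. A runnable sketch with a stand-in client, since a real driver such as pg would need an actual database:

```javascript
// `FakeDb` stands in for a real database driver (e.g. pg) so this runs locally.
let connectionsOpened = 0;

class FakeDb {
  constructor() { connectionsOpened += 1; }        // pretend: TCP + auth handshake
  async query(sql) { return [{ ok: true, sql }]; } // pretend: run a query
}

// Created once per container, outside the handler, NOT once per invocation.
const db = new FakeDb();

const handler = async () => {
  const rows = await db.query("SELECT 1");
  return rows.length;
};

// Three warm invocations share the single connection.
Promise.all([handler(), handler(), handler()]).then(() =>
  console.log("connections opened:", connectionsOpened) // 1, not 3
);
```

Even with reuse, each concurrently scaled-out container still holds its own connection, which is exactly why Adnan points at proxies or service-native databases like DynamoDB for high concurrency.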


David Brown

What about managing it at scale, though? So I get the cron job thing, or infrequently run services or functions, where you don't necessarily want a server sitting there idle most of the time if it's only gonna be running that function every five minutes or every hour. Serverless makes a perfect use case for that. But what about when you're doing it at scale? Does serverless still make sense when you're running hundreds of thousands of transactions per second?


Adnan Rahic

Yeah, it can, it can, just because it can scale so effortlessly. So if you think about a use case on AWS: if you have a function and you get 1,000 concurrent connections in the same millisecond to that one API, it's gonna scale horizontally to 1,000 functions right away. So you're not gonna get this typical type of latency you would get on a standard API, like on a server or whatever.

So that's a good use case, but that would also mean that it's gonna cost a ton, like, it's gonna cost a lot of money. So if you're a big corporation, and having that type of flexibility is something that you want, and you don't really care about the price, that's fine. But for the majority of people, that can often be a problem. But going back to the latency, I think that would also be a really, really interesting topic to cover. Because once those 1,000 functions get instantiated and run concurrently, every single one of them is gonna have a startup latency, because that initial request, you know, it needs to grease the engine a bit, it needs to warm up. That's another issue.


David Brown

It is one of the biggest challenges most frequently mentioned in association with serverless computing and Functions as a Service. But just explain this concept of the warm-up process, and firing up a new function on that first use.


Adnan Rahic

Yeah, I mean, in the serverless community it's called a cold start, which kind of makes sense, because it is cold. The instance of the function isn't there. Once you're calling it the initial time, let's say you have an event, that's an API, and that event will trigger your code that's in the function. The instance of this function doesn't exist anywhere. So you have to call it the initial time to actually tell AWS, yo, can you just make sure this package exists somewhere?

Then they package it up, put it in, you know, a virtual server or whatever they do. Like, I have no idea what happens, which is kind of the point. And then that runs, and that's gonna take an initial, I don't know, 200 milliseconds to a few seconds. It kind of depends on what you're running. But you're always going to have that initial latency, which is called the cold start. Now, the problem there is that there's no way around that, there's no way to bypass it, per se. You can do some things that are maybe not always considered best practices, but there are hacks that people do use, and one would be that you can just periodically trigger the function to keep it, quote unquote, warm, which is OK-ish.

But again, if you have 500 concurrent connections right away and you're keeping one function warm, that's not doing much, right? You're still gonna get 499 cold starts. So you'd also have to figure out the peak times for when you're gonna expect traffic and when you're not, which is hypothetically OK, but practically pretty much impossible to always be on point about. Otherwise, there's not much you can do. You can keep a set of functions warm, but, you know, in the end...
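The keep-warm hack Adnan mentions usually looks something like this: a scheduled rule pings the function with a marker payload every few minutes, and the handler short-circuits on it. The `warmup` field is a made-up convention for illustration; nothing standard forces this shape.

```javascript
// Handler that recognizes a scheduled "keep warm" ping and does no real work.
const handler = async (event) => {
  if (event && event.warmup) {
    // Invoked by a scheduled rule (e.g. every 5 minutes) purely to keep the
    // container resident, so the next real request skips the cold start.
    return { warmed: true };
  }
  return { statusCode: 200, body: "real work done" };
};

handler({ warmup: true }).then((r) => console.log(r)); // the scheduled ping
handler({}).then((r) => console.log(r.statusCode));    // a real request
```

As Adnan notes, this keeps exactly one container warm; a burst of concurrent requests still cold-starts all the rest.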


David Brown

I'm guessing the cold start problem is compounded, in some cases, by the language of choice as well. I'm guessing a Node.js server is gonna execute a function a lot faster than a Java server, which typically needs to warm up the JVM itself. Once the function has been started from its cold start, then the JVM typically needs to be warmed up before it starts to serve requests quickly as well. So does language come into this as well, your serverless language choice?


Adnan Rahic

It does, but it's not that big of a difference. The way it works, at least I know for Lambda, is that AWS packages this code into, like, a Docker image. Well, not exactly a Docker image, but a container, per se, a container image. So the runtime gets packaged into this image as well. But it is a major difference whether you have no runtime at all and are running Go as a language, which is just an executable that doesn't need anything, you just run it, versus something like Node or Python or Java.

So definitely, having a language that doesn't need such a big start, or doesn't have the big warm-up process, is better. But in the end it's not that big of a difference. It's not, like, seconds of difference. It's maybe in the hundreds of milliseconds of difference, which for most people is acceptable. But again, if you have margins that need to be hit, then it's not really acceptable. OK.


David Brown

So we've mentioned a couple of things. You mentioned there's potentially a cost penalty associated with serverless when you're looking at scale, and there's the cold start issue, which, as you say, is only an issue if it's very infrequently run, and there are possibly ways around that, although they have disadvantages as well. Any other challenges that people should be aware of associated with serverless, before we go into the great use cases and advantages as well?


Adnan Rahic

For sure. I mean, if you wanna talk generally about developer experience, and how easy something is to build and to integrate or whatever, then yeah, the barrier of entry for using serverless as a developer can be pretty huge, especially if you haven't done something like that before, or haven't done any event-driven development. Because it's a whole new concept of development, right?

You have to think outside of the stereotypical, outside the typical box of development. Because the typical way is like, OK, I run the server on my local machine, I do some changes, I hit reload or whatever, and I see the changes, and I can, you know, figure out what to do, whether I'm running Node or whatever, it doesn't really matter. I personally am a Node.js developer, so I can compare to that. But if you're running it in serverless, yeah, you have this typical dev environment that you kind of can run, but there's no way of simulating a Lambda function like that. You can't really do that, right?

And that's the main issue: you have to run multiple environments for testing and for development in AWS, in the actual cloud, to get a proper sense of what's gonna happen in production as well. And that means that if you're not doing test-driven development, if you're not running unit tests for the code, it's gonna be a pain, it's gonna be absolutely horrible, and things like that. But luckily, AWS figured out a way. They recently released, like, a container runtime something, something, I can't remember. They always have freaking hard names, like, weird names for stuff, I don't know why. And they figured...


David Brown

We don't need to name the Amazon service, necessarily. People will find that.


Adnan Rahic

So yeah, what they did was they added this feature where you can basically build the container image yourself, the actual Lambda container image, and then you can push that and hook it into Lambda. And then, if you want to, you can run that image on your local machine through Docker, like any other container. So that gives you the opportunity...

...to actually test the live version, like the production version, of the function before you push it. Which, for me, when that happened, it was like, thanks, thank you, God. That was a breakthrough. I mean, I think that's gonna be the goal. So one example is that we have the CNCF for Kubernetes and all of the tooling that goes with Kubernetes.

If we could, as a community, have a similar thing for serverless, and get this one norm of how we do things, and this one path of how we could have scalability, ease of use, developer productivity, and if we could have monitoring and log management as well, because monitoring and log management is a pain in containers, let alone in serverless, right? So if we could have one standardized flow for all of that, that would be so amazing if we could go that path. That would be so amazing. I forgot the initial question.


David Brown

So I think you just answered some of the questions we were gonna ask you later in terms of where you see serverless evolving and what you would like to see in the future. So, you know, I think you just iterated through the number of things you'd like to see in the future for serverless. One of the issues associated with serverless functions is they typically have a runtime limit, right? So, you know, if it doesn't execute within X number of seconds, then the function is terminated prematurely.


Adnan Rahic

That's right. The thing is, OK...


David Brown

So how do you monitor that? Is that one of the challenges as a developer as well? And how do you control this? Like, how do you know whether it's your code or if it's the service, or when things are failing unexpectedly?


Adnan Rahic

Right, right. That's a really good topic here. Up until, I think it was last year, the runtime limit for Lambda functions in AWS was five minutes. And they pushed that up to 15 now. Which, I mean, if you're running something for 15 minutes and it's still not done executing after 15 minutes, that's kind of bad. Let's just say that's kind of not good. But on the other side, I myself as a developer don't really want a function to run for 15 minutes. I mean, if I have some data-intensive calculation thing going on...

Yeah, I mean, fine, but I don't wanna keep it open for 15 minutes. Why would I even wanna do that? So the ideal way of doing this, and also the best practice in the community, is that if you have something that's going to run for that long, chain the functions. Because if you think about it from a logical standpoint as an engineer, when you're writing a product or software, you don't want one function to do a ton of things. You want functions to be modular, right? Especially if you're writing code in languages that are functional, like, I don't know, freaking Erlang or something. I like writing JavaScript as pure, as functional as it gets, because it reduces the complexity and the mental strain. So with that background, I don't want my Lambda functions to do 10 different things. I want one Lambda function to do one thing, return the value, and, you know, move on.

So you should definitely follow that when you're doing the Lambda functions as well, because you want to chain the functions, and you can do that really easily with services that you get through AWS. You can pipe them through queues, or through Firehose, Kinesis, or whatever. It's all weird names. But, you know, they make ends meet where you have one value from one function, you push that to a queue, the next function is listening to that queue, the event picks up that value, and then it does the next thing. So ideally, every function should run for a few seconds, right? And then you get the value at the end. Yes, one function can be a bit longer, like five minutes or whatever. But if you set up your architecture correctly that way, you're not gonna have that many issues. But yeah, I do understand the initial problem with the execution runtime. In the end, though, if you really need to run something for more than 15 minutes, using a server is probably gonna be cheaper and more efficient. So that's also...
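Adnan's chain-the-functions advice can be sketched locally like this. The in-memory array stands in for an SQS queue; in AWS the first function would send a message and the queue event would trigger the second, so neither function ever runs long.

```javascript
// Two short-lived functions chained through a queue, instead of one
// long-running function doing both steps.

// Step 1: do one piece of work (here: double the input) and enqueue the result.
const stepOne = async (queue, n) => {
  queue.push({ value: n * 2 });
};

// Step 2: triggered by the queue event, pick the message up and finish the job.
const stepTwo = async (queue) => {
  const msg = queue.shift();
  return msg.value + 1;
};

const queue = []; // stands in for an SQS queue
stepOne(queue, 20)
  .then(() => stepTwo(queue))
  .then((result) => console.log(result)); // 41: each function stays short-lived
```

The queue decouples the two steps, so each function stays well inside its timeout and can be retried or scaled independently.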


David Brown

How do you manage the complexity when you have thousands of functions? In microservices, we have service discovery, you know, a microservice will say to the gateway or the registry, hey, I'm here, here are my endpoints, this is the service that is available. And so you're aware of it, something is aware of it and what it can do, and can route requests to it if it gets a request for that particular microservice. If you have thousands of serverless functions sitting there idle that can be executed if an event triggers them, how do you manage that complexity? And does it become sort of an unmanaged web of functions with no sort of...


Adnan Rahic

You mean, you manage it very poorly? It's not really... I mean, jokes aside, there's really no good way of doing it. That's where we have the problem compared to Kubernetes. Kubernetes is mature enough that you get the service discovery and you can see what's happening. You have tooling that's open source for both monitoring and log management, which is awesome, and you see what's happening. But the problem with serverless now is, if you have thousands of functions, yeah, you can see them in your AWS console, but there's no real way of getting this overview, right? You can check the logs as well, that's fine.

But that doesn't give you this full, service-level overview. And if you want to get that overview, you need to use third-party tools, a product, whatever, for monitoring. I mean, yeah, there are a few out there that you could use that are really well funded. They've been around for a few years, so they have really good leadership. The C-level executives are really competent people. I know a few of them as well, and I can vouch that they are super, super talented people. But then again, they're all separate tools from separate SaaS companies, so we don't have that one unified way that we can all agree on. Like, yeah, let's use this and make this the best possible way of getting this overview, of getting the logs in, getting the metrics in.


David Brown

Yeah. What about a serverless industry alliance? Is there such a thing, where there's a governing body trying to drive standards and adoption?


Adnan Rahic

Yeah. As far as I know there is no such thing. I might be a bit outdated, but that would definitely be something we should try pushing towards. We do have some serverless tooling inside of the CNCF that runs on Kubernetes. OpenFaaS is one of them, and we have Kubeless, or something, I think it's called Kubeless, which is basically a set of tools where you can set up functions inside of your Kubernetes cluster.

But they're very, very rarely used if you compare them to Docker, or just containers in general, inside Kubernetes. Which does make sense, because if you're already running the complex container environment, then why would you also run the complex FaaS environment inside of that Kubernetes environment? So it gets complicated really quickly. But yeah, having what you mentioned, let's say a collective, or an open collective or something, that would get people to push for the same things, the same needs, I think that would be really freaking awesome. That'd be so cool. Like, I'm already in, if we actually get that going.


David Brown

Sounds like an opportunity for you.


Adnan Rahic

I should maybe start another startup, but the startup is just, like, hyping other startups to do this one thing. Shit.


David Brown

Look, with serverless, you're obviously very much reliant on a public cloud provider. No one sets up their own serverless infrastructure, right? So we're talking about, typically, the big three public cloud providers providing some sort of infrastructure to support serverless. So is that a little bit... You know, in some respects I'm kind of going to answer my own question, because we're very much reliant on public cloud providers for a whole bunch of things now, including Kubernetes and microservices as well.

But when you have a serverless infrastructure where you simply cannot see even an underlying VM, or containers being spun up and spun down, and that infrastructure is completely hidden from you: how do you manage downtime? How do you manage maintenance periods? Do you just hope that the vendor gets this right and is able to transition your code? Is this even a problem associated with maintaining serverless infrastructures?


Adnan Rahic

I mean, step one is, like, really praying a lot. No, I'm kidding. Not really kidding. But seriously though, when you think about it, I've worked with startups before in my career, and right now I work at a monitoring SaaS, and we run stuff on AWS. Do you want to take a wild guess how many times we have had downtime because of AWS, and how many times we've had downtime due to human error? Human error: 100% of the time. Never has something weird happened at AWS that caused us downtime because of them.

And if we, as a major company that has, you know, a number of employees, that runs so many services and spends so much money on AWS, have no problem, then whether you're somebody that spends, like, no money at all, or somebody that's a Coca-Cola type of huge corporation, I don't think you're gonna have any problems with running anything on AWS. But yeah...


David Brown

Once again, of course, you can mitigate the issue by setting up functions in different ways as well.


Adnan Rahic

And one thing that's super, super nice with serverless, in setting up serverless functions, is that, I think AWS even calls it Edge or something like that, where basically, if you deploy that one function, it gets copied into all of these regions, all of these different availability zones. Meaning that not only will this be good for the end user, because if they hit the API, it's going to trigger the function closest to them...

So if they're in Singapore, they're going to get the instance in Singapore. If they're on the East Coast in the US, they're going to get the one that's running on the East Coast, in Virginia. So, I mean, those are all things. But also, if one fails, all 12 or 13 of them are not going to fail at the same time. So it's not always that big a problem. Because if all of those things fail at once, it's probably, like, a world-ending event. It's, like, aliens landing or something. So...


David Brown

All right. So look, what if you could have a wish list? You know, you've written books on serverless and you've been working with it for some time now. If you could have a wish list of the things you would like to see, whether it be toolkits or infrastructure or governing bodies and standards, give us a quick rundown. What would that wish list look like?


Adnan Rahic

Yeah, I mean, better tooling, for sure. I think better tooling. I mean, I work at a, sorry, a monitoring SaaS, and we're currently working on making better tooling, but it's so hard when you don't have this one unified body governing what the community wants, what we need, what we like, what we're striving for. So definitely, having monitoring tooling is number one. Also, if we could get the cloud providers to be unified about the APIs they use and the way they run the serverless functions, that would be absolutely amazing. Then we could have a unified way of gathering stuff like logs. Because right now, you have to bake in your own log collection, enrichment, and shipping type tool that's gonna run inside of this serverless environment, extract this data, extract the logs, extract the metrics from all of the execution, from whatever is happening, and then kind of package that and send it somewhere. And I know this because I've built this before, this freaking log collection tool. We have that in our product.

And it's a real pain to build, right? It's not an easy, straightforward thing to do. And if we could get all of us to work on the same problem, and work on the same solution to that problem, then it's gonna be much easier, because if we all put our brains in the same place, I think it's gonna be great. But right now we have tons of people trying to solve the same problem in, like, 50 different ways, right? And we're all, I mean, hopefully, very intelligent people, many more intelligent than me. I'm not that smart. But if we actually put all of our brains in the same place, it's going to have a better impact, it's going to be more of an impact. Because if you think about it, in Kubernetes we have Prometheus, we have Fluent Bit, and all of those tools are part of the CNCF. And those folks, they support that, they push that, and everything that goes into Kubernetes as a service, or as a tool, just works. And if we could get that as well, if we could get a foundation or whatever for serverless, I think that's gonna be the thing. You know, just the bomb, as kids call it nowadays.
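The bake-your-own log collection Adnan describes often ends up as a handler wrapper along these lines. The shipper here just appends to an array so the sketch runs locally; a real one would batch and POST to a log-management backend, and all names here are illustrative, not any vendor's API.

```javascript
// Wrap a handler so everything it logs is buffered during the invocation and
// "shipped" afterwards, even when the handler throws.
const shipped = [];
const shipLogs = async (batch) => { shipped.push(...batch); }; // stand-in shipper

const withLogShipping = (handler) => async (event) => {
  const buffer = [];
  const log = (msg) => buffer.push({ ts: Date.now(), msg });
  try {
    return await handler(event, log);
  } finally {
    await shipLogs(buffer); // ship even on failure, so errors stay observable
  }
};

const handler = withLogShipping(async (event, log) => {
  log(`processing ${event.id}`);
  return { statusCode: 200 };
});

handler({ id: 42 }).then(() => console.log("entries shipped:", shipped.length));
```

Because each function instance is short-lived, shipping on every invocation (rather than tailing a long-lived log file, as on a server) is what makes this harder in serverless, which is exactly the pain point above.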


David Brown

Awesome, Adnan. You've mentioned the SaaS company you work for a few times without mentioning the name. Can you tell us the company you're working for?


Adnan Rahic

It's Sematext.com. And, you know, I've been there for, I think, a few years now. I'm not going to call it my first proper job, but before that I only did startups and consulting and freelancing, and then I got into this real job, and I have to say, having a normal job is super nice. It's like, it's not that stressful. I mean, you have normal hours. I even started going to the gym and, like, working out. I have a social life now, which is like, what? So, yeah, it's like I'm a new person. Like, I don't have back pain in the morning when I wake up, you know, because I actually exercise now, not just walking.

Yes, all of those things are, you know, freaking awesome. But yeah, if you want the shout-out for where I work, or if you want to work with me, we have a few job openings as well. I promise I will send funny memes in Slack all the time. You do not want that. I'm very sorry, you're going to get it anyway. So yeah, it's Sematext.com. I can slap a link in the description or something so people can check it out as well.


David Brown

Good stuff. And what about your social channels? Where can people follow you?


Adnan Rahic

Yeah, you can go on Twitter. Twitter is my go-to. My DMs are open, so if you have any questions there, it's just my first and last name, Adnan Rahic, and you're gonna find me there. You can check me out on LinkedIn and all of those things too. Also Instagram. I have a fire Instagram profile. I don't really do influencing, though. I have like seven followers, and all of those are my family and friends. I just like to post workouts and stuff on Instagram.

Like, everybody just unfollows me because I'm so boring with the workouts, because I've been going to the gym for like half a year now. Yeah. But do check me out. I do these things, the community stuff and events and conferences. I used to do a ton of those before COVID, and I'm still up for doing online stuff like videos and podcasts. Anybody that wants to collab or just have a conversation, feel free to reach out.


David Brown

Adnan, thank you very much for joining us today to talk about serverless computing. And the listeners can follow you, as you say, on Twitter or LinkedIn, or maybe even Instagram if they're brave. Yeah.


Kevin Montalbo

All right, that's a wrap for this round of Cocktails. To our listeners: what did you think of this podcast episode? Let us know in the comments section of the podcast platform you're listening on. Also, please visit our website at www.torocloud.com for a transcript of this episode, as well as our blogs and our products. We're also on social media: Facebook, LinkedIn, YouTube, Twitter and Instagram. Talk to us there, because we listen. Just look for Toro Cloud. Thank you very much for listening to us today. On behalf of the entire team here at Toro Cloud, this has been Kevin Montalbo for Coding Over Cocktails. Cheers!

