How can I keep Google Cloud Functions warm?









I know this misses the point of using Cloud Functions in the first place, but in my specific case, I'm using Cloud Functions because it's the only way I can bridge Next.js with Firebase Hosting. I don't need to make it cost-efficient, etc.



With that said, the cold boot times for Cloud Functions are simply unbearable and not production-ready, averaging around 10 to 15 seconds for my boilerplate.



I've watched the video from Google (https://www.youtube.com/watch?v=IOXrwFqR6kY) that talks about how to reduce cold boot time. In a nutshell: 1) trim dependencies, 2) experiment with dependency versions so they hit the cache on Google's network, 3) lazy-load dependencies.
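(For context on point 3: lazy loading means moving expensive require() calls out of the module's top level and into the request handler, so they don't run during instance initialization. Below is a minimal sketch for an HTTPS function bridging Next.js; the exact wiring is illustrative, and the first request still pays the load cost.)

    // index.js (sketch): entry point of the HTTPS Cloud Function.
    // Only firebase-functions is required at cold-start time; the heavy
    // Next.js server is required lazily, on the first request.
    const functions = require('firebase-functions');

    let handleRequest; // cached across invocations on the same warm instance

    exports.app = functions.https.onRequest(async (req, res) => {
      if (!handleRequest) {
        // Deferred require: this cost is paid on the first request,
        // not during instance initialization.
        const next = require('next');
        const app = next({ dev: false }); // Next.js options here are illustrative
        await app.prepare();
        handleRequest = app.getRequestHandler();
      }
      return handleRequest(req, res);
    });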



But: 1) there are only so many dependencies I can trim; 2) that advice isn't very actionable (how would I know which versions are better cached?); 3) there are only so many dependencies I can lazy-load.



Another way is to avoid the cold boot altogether. What's a good way, or hack, to essentially keep my (one and only) cloud function warm?










google-cloud-functions firebase-hosting serverless next.js






asked Aug 10 at 9:00 by harrisonlo
edited Aug 10 at 9:08






















2 Answers






                With all "serverless" compute providers, there is always going to be some form of cold start cost that you can't eliminate. Even if you are able to keep a single instance alive by pinging it, the system may spin up any number of other instances to handle current load. Those new instances will have a cold start cost. Then, when load decreases, the unnecessary instances will be shut down.



                There are ways to minimize your cold start costs, as you have discovered, but the costs can't be eliminated.



                If you absolutely demand hot servers to handle requests 24/7, then you need to manage your own servers that run 24/7 (and pay the cost of those servers running 24/7). As you can see, the benefit of serverless is that you don't manage or scale your own servers, and you only pay for what you use, but you have unpredictable cold start costs associated with your project. That's the tradeoff.







answered Aug 10 at 15:50 by Doug Stevenson






















You're not the first to ask ;-)



The answer is to configure a remote service to periodically call your function so that your single (and only) instance remains alive.



It's unclear from your question, but I assume your Function exposes an HTTP endpoint. In that case, find a healthcheck or cron service that can be configured to make an HTTP call every few seconds or minutes and point it at your Function.



You may have to juggle the timings to find the Goldilocks period: not so often that you're wasting effort, not so infrequently that the instance dies. But this is what others have done.
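For illustration, the "remote service" can be as simple as any always-on machine making a periodic HTTP GET. A minimal Node sketch is below; the function URL and the interval are placeholders, and a hosted cron or healthcheck service hitting the same URL on the same cadence does the same job.

    // keep-warm.js (sketch): a stand-in for a cron/healthcheck service.
    // Run it on any always-on machine, or port the idea to a scheduler.
    const https = require('https');

    // Placeholder URL: substitute your deployed function's HTTPS endpoint.
    const FUNCTION_URL = 'https://us-central1-your-project-id.cloudfunctions.net/app';
    const INTERVAL_MS = 5 * 60 * 1000; // placeholder cadence: every 5 minutes

    function ping() {
      https
        .get(FUNCTION_URL, (res) => {
          console.log(`keep-warm ${new Date().toISOString()} -> HTTP ${res.statusCode}`);
          res.resume(); // discard the response body so the socket is released
        })
        .on('error', (err) => console.error('keep-warm request failed:', err.message));
    }

    ping();                         // warm it immediately on startup
    setInterval(ping, INTERVAL_MS); // then keep pinging on the chosen cadence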







answered Aug 10 at 15:30 by DazWilkin











• Thanks for answering! I think Doug (the other answerer) has a point though, and I quote: "Even if you are able to keep a single instance alive by pinging it, the system may spin up any number of other instances to handle current load. Those new instances will have a cold start cost." So pinging would not make it a good solution. And I did try it; the cold start is still random... – harrisonlo, Aug 12 at 2:12

• At low (ping) volumes, it's unlikely the service will attempt to scale with additional instances. Your question specified that you had no interest in alternative solutions, only in whether there is a way to keep an instance alive. This is currently the only solution to that problem. – DazWilkin, Aug 12 at 15:18
















