Shipping deep learning models to production is a non-trivial task. If you don’t believe me, take a second and look at the “tech giants” such as Amazon, Google, Microsoft, etc.: nearly all of them provide some method to ship your machine learning/deep learning models to production in the cloud.

Going with a model deployment service is perfectly fine and acceptable… but what if you wanted to own the entire process and not rely on external services? This type of situation is more common than you may think. Consider:

- An in-house project where you cannot move sensitive data outside your network.
- A project that specifies that the entire infrastructure must reside within the company.
- A government organization that needs a private cloud.
- A startup that is in “stealth mode” and needs to stress test its service/application in-house.

How would you go about shipping your deep learning models to production in these situations, and perhaps most importantly, making it scalable at the same time?

Today’s post is the final chapter in our three part series on building a deep learning model server REST API.