What is a good basic deployment workflow on Linux? #60
Summary:
I will first try to answer the first block of bullets (the more modest requirements), and then address the what-if points one by one below that. We assume that you have ssh/scp access to the Linux server, user(s) with suitable privileges to deploy, and also a user to run the app. We also assume that you have tested your model and API on your local development machine, and that your dev/build environment has a Python version that is at least compatible with the Python version in production. When creating environments, you can tell your environment which Python version to use. For example, a way to create such a development/build environment with conda could be:
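A minimal sketch (assuming conda is installed; the environment name `mydevenv` and the Python version are example values that should match your production Python):

```shell
# Create a dev/build environment pinned to a production-compatible Python
conda create -n mydevenv python=3.6
conda activate mydevenv
```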
If you have not done so before and this is your first release, start by freezing your dependencies. That way, the complete dependency tree is captured with exact versions, and you will produce a defined, reproducible artifact. You first need to create a pristine environment with a production-compatible Python version, then install the requirements, and then freeze them. Use these commands (in your project directory; the example shows a Windows dev/build machine):

```
> cd myprojectdir                              &:: Change into project directory (if necessary)
> .mydevenv\Scripts\activate                   &:: Activate dev environment (if necessary; if using conda, use `conda activate mydevenv`)
> python --version                             &:: Validate that the dev/build environment's Python is compatible with that of production
Python 3.6.10
> python -m venv .venv_temp_deploy
> deactivate                                   &:: (if you activated your dev environment above)
> .venv_temp_deploy\Scripts\activate
> pip install -r requirements.txt
> pip freeze --local > requirements_frozen.txt
> deactivate
> rmdir /s /q .venv_temp_deploy
```

Important: Freezing requirements is usually only done once. From now on, you will have to maintain your frozen requirements file yourself.

Steps to prepare the artifact to deploy:
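As an optional first preparation step, here is a small hypothetical helper (stdlib only; the function name is my own, nothing above prescribes it) to sanity-check that `requirements_frozen.txt` really pins every dependency with `==`:

```python
def check_fully_pinned(requirements_text):
    """Return all requirement lines that are NOT pinned with '=='."""
    unpinned = []
    for line in requirements_text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blank lines and comments
        if "==" not in line:
            unpinned.append(line)
    return unpinned

frozen = "flask==1.1.2\njinja2==2.11.2\nwerkzeug==1.0.1"
loose = "flask>=1.0\nrequests"
print(check_fully_pinned(frozen))  # -> []
print(check_fully_pinned(loose))   # -> ['flask>=1.0', 'requests']
```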
Transfer your code, config, wheels, as well as any other local resources your API needs to the server.

Steps to install on the server:
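The install step might then look roughly like this (a sketch; the paths and the `wheels` directory name are examples, not prescribed by anything above):

```shell
# On the Linux server: create the app's venv and install strictly from the
# transferred wheels, without contacting PyPI
python3 -m venv /srv/myapp/venv
/srv/myapp/venv/bin/pip install --no-index --find-links /srv/myapp/wheels -r /srv/myapp/requirements_frozen.txt
```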
How to run with waitress/gunicorn/uwsgi/nginx can be found in https://flask.palletsprojects.com/en/1.1.x/deploying/ and those specific tools' docs. This should at least answer the first block of bullet points above.

PS: Here's a nice rundown of more advanced Python deployment approaches: https://www.nylas.com/blog/packaging-deploying-python/
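For illustration, starting the app with either server could look roughly like this (`myapp:app` stands for your Flask module and application object; the ports are examples):

```shell
# Option 1: waitress (pure Python, no compiler needed on the server)
waitress-serve --listen=0.0.0.0:8080 myapp:app

# Option 2: gunicorn, typically proxied behind nginx
gunicorn --workers 4 --bind 127.0.0.1:8000 myapp:app
```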
And at least some short answers to the second block of bulleted questions:
Just install it, e.g. somewhere in /opt, and use the python interpreter there to create your app's venvs.
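For example (the `/opt/python3.6` path is an assumption about where you installed it):

```shell
# Use the interpreter installed under /opt to create the app's venv
/opt/python3.6/bin/python3 -m venv /srv/myapp/venv
/srv/myapp/venv/bin/python --version   # should report the /opt interpreter's version
```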
If they are wheels, and could be pip-downloaded for your target platform as described in my earlier answer, you're fine. In any case, it's a good idea to install python-dev on your server so that some (basic) compiling can take place; for maybe 95% of the cases, this should work. Some C-backed source distributions can then be compiled already, but others require additional dependencies specifically for compiling, which the download/deploy mechanism above does not cover (shakes fist).
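The wheel-download step referred to above could look like this on the dev/build machine (the platform/version values are examples and must match your server; pip requires `--only-binary=:all:` when `--platform` is given):

```shell
pip download -r requirements_frozen.txt -d wheels ^
    --only-binary=:all: --platform manylinux1_x86_64 ^
    --python-version 36 --implementation cp
```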
You need to install them (not tensorflow itself, BTW) using your Linux distro's package management system or similar. Or use Docker. 😬
I've been asked for a step-by-step guide on deploying a model in this example situation:
To make the question a little more interesting:
The idea of this question is to give some more guidance beyond what can be found in the deployment section of the docs, and the general recommendation to just "let your (dev)ops experts do their thing." 😉