1. Private ML:
Today, most ML applications rely on the client-server model: users send their data to servers where ML models run, and developers expose those models to users via web APIs. The client-server model, in other words, lets developers use very large neural networks that cannot run on private user devices. Sometimes, however, this model is infeasible because of the lack of privacy, and it is preferable to run ML on user devices.
The good news is that ML does not always require expensive servers. Many models can be compressed to run on user devices, and these days even mobile phones ship with chips that support local deep learning. The problem, however, is Python. macOS and many Linux distributions come with Python preinstalled, but ML libraries still have to be installed separately; on Windows, Python itself must be installed manually first. In other words, a lot of user devices are not equipped with Python, let alone with ML libraries.
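To make the compression idea concrete, here is a minimal sketch of post-training 8-bit quantization using only the Python standard library. This is an illustration of the principle, not a production pipeline; real on-device deployments would use a toolchain such as TensorFlow Lite or Core ML, and the weight values below are made up.

```python
def quantize(weights, bits=8):
    """Map float weights to small signed integers plus one scale factor."""
    qmax = 2 ** (bits - 1) - 1              # e.g. 127 for 8 bits
    scale = max(abs(w) for w in weights) / qmax or 1.0
    return [round(w / scale) for w in weights], scale

def dequantize(q_weights, scale):
    """Recover approximate float weights on the device."""
    return [q * scale for q in q_weights]

# Illustrative weights; each becomes 1 byte instead of 4-8.
weights = [0.31, -1.27, 0.04, 0.88]
q, scale = quantize(weights)
restored = dequantize(q, scale)
print(q)         # → [31, -127, 4, 88]
print(restored)  # close to the original floats
```

The compressed model stores only the integer list and the scale, cutting memory and bandwidth at a small cost in precision.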
2. Speed & Customization:
Yet another advantage is model customization. When developing an ML model that adapts to each user, one approach is to store one model per user on the server and train it on that user's data. As the user base grows, however, this puts an undue load on the server, and sensitive data has to live in the cloud. Fine-tuning a shared model on each user's own device avoids both problems.
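A minimal sketch of on-device personalization, assuming the simplest possible case: the app ships with a shared global model (here a one-feature linear model), and a few local gradient-descent steps adapt it to this user's data, which never leaves the device. All names and numbers are illustrative.

```python
def fine_tune(w, b, data, lr=0.05, epochs=200):
    """Plain SGD on the user's local (x, y) pairs."""
    for _ in range(epochs):
        for x, y in data:
            err = (w * x + b) - y
            w -= lr * err * x   # gradient of squared error w.r.t. w
            b -= lr * err       # gradient of squared error w.r.t. b
    return w, b

# Global model shipped with the app; local data reflects this user.
w, b = 1.0, 0.0
local_data = [(1.0, 2.1), (2.0, 4.0), (3.0, 6.2)]
w, b = fine_tune(w, b, local_data)
print(round(w, 2), round(b, 2))  # roughly y ≈ 2x for this user
```

The server keeps one generic model; each device keeps only its own delta, so neither storage load nor raw user data accumulates in the cloud.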
3. Integration in Web & Mobile Apps:
The biggest and most difficult aspect of ML is training, especially in deep learning. It can be done on user devices, but for large neural networks it can take months. On the server side of the client-server model, Python is advantageous here: it scales well and speeds up training by distributing the load over server clusters. The end product can then be compressed and deployed on user devices easily, thanks to the interoperability of ML libraries written in different languages.
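The pattern behind distributing training load over a cluster can be sketched as synchronous data-parallel training: each worker computes gradients on its own shard of the data, and the server averages them before applying one update. The sketch below simulates this in plain Python, with workers as ordinary function calls rather than real cluster nodes; the data and learning rate are made up.

```python
def shard_gradient(w, shard):
    """Average gradient of squared error over one worker's data shard."""
    g = sum(2 * (w * x - y) * x for x, y in shard)
    return g / len(shard)

def distributed_step(w, shards, lr=0.01):
    """One synchronous update: average per-worker gradients, then step."""
    avg_grad = sum(shard_gradient(w, s) for s in shards) / len(shards)
    return w - lr * avg_grad

# Data for y = 3x, split across two simulated workers.
shards = [[(1.0, 3.0), (2.0, 6.0)], [(3.0, 9.0), (4.0, 12.0)]]
w = 0.0
for _ in range(500):
    w = distributed_step(w, shards)
print(round(w, 3))  # → 3.0
```

In a real cluster the gradient exchange happens over the network (e.g. via a parameter server or all-reduce), but the arithmetic of averaging per-shard gradients is the same.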