Technical Challenges of Model Deployment

Open Data Group July 25, 2018

Deploying analytic models can be a long, slow-moving process with many obstacles along the way. Many models are abandoned before they ever make it into production because of inefficiencies that slow down or halt the entire process. To overcome the challenges of model deployment, we need to identify the problems and understand what causes them. Here are some of the top technical challenges organizations face when trying to deploy a model into production:


1. The model is not compatible with the production environment.

The first deployment challenge we will cover is the compatibility of the model as it moves from the creation environment to the production environment. Data scientists today use a variety of tools to solve critical business problems. While this variety empowers the data science team, each new tool and language must also be supported by IT when the model is deployed. This often results in models being recoded into a different language because the original language cannot move into the production environment, which leads to longer cycle times and potential inconsistencies in the translated model. Monolithic platforms simplify some of this challenge and help IT, but they may limit the data science team from adopting certain techniques. It’s a fine line between keeping the process efficient and limiting what the data science team can achieve.

Solution: The challenge of model compatibility across the analytic lifecycle can be handled with an agnostic scoring engine. Agnostic scoring engines take models created in any language and deploy them into production without constraint.
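To make the idea concrete, here is a minimal sketch of the contract an agnostic engine might impose: every model, whatever its source language, registers a runtime label and an entry point, and the engine scores through that uniform interface. The `ModelBundle` and `ScoringEngine` names, the runtime labels, and the toy lambda models are hypothetical illustrations, not any product’s actual API.

```python
# Sketch of an "agnostic" scoring contract: each model, regardless of the
# language it was authored in, is packaged with metadata naming its runtime
# and a callable entry point. All names here are hypothetical.
from dataclasses import dataclass
from typing import Callable

@dataclass
class ModelBundle:
    runtime: str            # e.g. "python", "r", "pfa" -- illustrative labels
    entry_point: Callable   # the scoring function the runtime exposes

class ScoringEngine:
    """Dispatches scoring requests without caring how the model was authored."""

    def __init__(self):
        self._models = {}

    def register(self, name: str, bundle: ModelBundle):
        self._models[name] = bundle

    def score(self, name: str, payload):
        # Same call path for every model, whatever its source language.
        return self._models[name].entry_point(payload)

# Two toy "models" registered under the same interface.
engine = ScoringEngine()
engine.register("churn", ModelBundle("python", lambda x: x * 2))
engine.register("fraud", ModelBundle("r", lambda x: x + 1))
```

Because the engine only depends on the bundle contract, IT deploys one interface rather than one pipeline per language.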


2. The model is not portable.

Another challenge of model deployment is lack of portability, whether you’re moving between environments during the deployment process or shifting applications and workloads to the cloud. Often a problem with legacy analytic systems, lack of portability can limit businesses in how they deploy their models. Without the capability to easily migrate a software component to another host environment and run it there, organizations can become locked into a particular platform. This, again, can create barriers for data scientists when creating models.

Solution: Containerization technologies, such as Docker, can help solve the application portability challenge. Containerized analytic engines capture all of the environmental dependencies for the analytic workload, providing a portable, lightweight “image” that can be deployed anywhere.
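As a rough illustration, a containerized scoring service might be described by a Dockerfile along these lines; the base image and file names (`requirements.txt`, `model.pkl`, `score.py`) are hypothetical stand-ins, not a prescribed layout.

```dockerfile
# Hypothetical image for a Python scoring service; names are illustrative.
FROM python:3.11-slim
WORKDIR /app

# Environmental dependencies are captured inside the image itself,
# so the same image runs identically on a laptop, on-premise, or in the cloud.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# The model artifact and its scoring code travel together.
COPY model.pkl score.py ./

EXPOSE 8080
CMD ["python", "score.py"]
```

Once built, the image is the portable unit: `docker run` on any Docker-capable host reproduces the same environment without reinstalling dependencies.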


3. The organization has a monolithic architecture.

Since models are constantly evolving, the way we deploy them should be able to evolve, too. Monolithic, locked-in platforms often limit what organizations are able to do, or bundle in services they don’t need. Businesses should have the ability to adopt the microservices that fit their specific needs and avoid the ones that don’t. Monolithic architectures also constrain the options companies have for deploying models. Avoiding a monolithic approach gives organizations more freedom in the models they can put into production, and allows them to interchange applications when necessary.

Solution: Containerization technologies provide a microservices infrastructure, allowing organizations to use native microservice software to meet their changing needs. A microservices architecture also limits any service failure to an isolated component, and enables an organization to leverage the on-demand, distributed nature of modern cloud applications.
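To sketch what such an isolated scoring microservice might look like, the example below serves a hypothetical model over HTTP using only the Python standard library; the weighted-sum model and the request shape are illustrative assumptions, not any particular product’s interface. If this one service fails, only this component goes down.

```python
# Minimal sketch of an isolated scoring microservice (stdlib only).
# The model and endpoint here are hypothetical stand-ins.
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

def score(features):
    # Stand-in model: a fixed weighted sum in place of a real trained model.
    weights = [0.4, 0.6]
    return sum(w * x for w, x in zip(weights, features))

class ScoreHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers["Content-Length"])
        payload = json.loads(self.rfile.read(length))
        body = json.dumps({"score": score(payload["features"])}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # silence per-request logging for this sketch

# Run the service in a background thread and score one record over HTTP.
server = HTTPServer(("127.0.0.1", 0), ScoreHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
url = f"http://127.0.0.1:{server.server_address[1]}/score"
request = urllib.request.Request(
    url,
    data=json.dumps({"features": [1.0, 2.0]}).encode(),
    headers={"Content-Type": "application/json"},
)
response = json.loads(urllib.request.urlopen(request).read())
server.shutdown()
```

Because the service exposes only an HTTP contract, it can be containerized, replicated, or replaced independently of every other component.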


4. The model does not scale.

The next challenge of deploying models is making sure that they can scale to meet increases in performance and application demand in production. Data used in the analytic creation environment is relatively static and of a manageable scale. As the model moves into production, it is typically exposed to larger volumes of data and a wider range of data transport modes. The application and IT teams will need tools to both monitor and solve the performance and scalability challenges that show up over time.

Solution: Questions of scalability can be solved by adopting a consistent, microservices-based approach to production analytics. Teams should be able to quickly migrate models from batch to on-demand to streaming via simple configuration changes, enabling a match between application requirements and analytic execution. Similarly, teams should have ways to scale compute and memory footprints to support more complex workloads. Finally, the production environment should allow monitoring of all of these operational details, enabling informed decisions that meet SLAs.
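The batch / on-demand / streaming idea can be sketched as follows: one hypothetical model function, with the execution mode selected purely by configuration, so promoting a model between modes changes no model code. The `run_*` names and the weighted-sum model are illustrative assumptions.

```python
# Sketch: one model function, three execution modes chosen by configuration.
# Names and the stand-in model are hypothetical.
from typing import Iterable, Iterator

def score(record: dict) -> float:
    # Stand-in model: a weighted sum in place of a real trained model.
    return 0.5 * record["x"] + 0.25 * record["y"]

def run_batch(records: list) -> list:
    # Batch: score a fixed dataset in one pass.
    return [score(r) for r in records]

def run_on_demand(record: dict) -> float:
    # On-demand: score a single request (e.g. behind a REST endpoint).
    return score(record)

def run_streaming(stream: Iterable) -> Iterator[float]:
    # Streaming: score records lazily as they arrive.
    for record in stream:
        yield score(record)

def run(mode: str, data):
    # The execution mode is a configuration value; the model code is unchanged.
    dispatch = {
        "batch": run_batch,
        "on-demand": run_on_demand,
        "streaming": run_streaming,
    }
    return dispatch[mode](data)
```

Switching `mode` in a deployment config is then the whole migration; the same `score` function serves all three transport patterns.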


Although the challenges of model deployment can seem overwhelming, the good news is that all of these issues can be resolved. At Open Data Group, we help companies achieve model deployment efficiency and overcome these issues with our containerized analytic engine, FastScore. FastScore addresses all of these common challenges, as well as problems that are specific to certain organizations. To learn more about FastScore, visit our product page.

Topics: Predictive Analytics, Docker, agnostic scoring engine, Analytic Deployment, scoring engine, Model Deployment, data science, analytics