Software engineers are always looking for new, fast ways to update their models once deployed into production. Whether that involves running a new system, writing new code, or adopting new software, programmers need fast and accurate ways to update models already in production in order to maintain efficiency.
Cloud computing has forever changed the way businesses store and manage their data. Within the past few years, many companies have moved from keeping their data inside a service-oriented architecture (SOA) to storing it in the cloud.
Imagine tracking data for multiple models by hand. How long would that take you? Hours? Days? The answer depends mainly on how many models you need to track and how much information must be maintained for each one.
Install Docker and FastScore in 5 minutes!
Want to install FastScore but not sure how to get it up and running? Watch this 5-minute instructional video with George Kharchenko from our data science team and walk through how to install both Docker and FastScore on a clean system. The video covers the prerequisites you will need, how to configure the FastScore fleet, and more.
- Installing Python and set-up tooling
- Installing Docker and the FastScore CLI
- Launching Model Manage and the rest of the FastScore fleet
Docker containers make installing and setting up FastScore easy. Once installed, you can open the dashboard and start scoring models in minutes.
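The steps in the video can be condensed into a handful of commands. A rough sketch, assuming an Ubuntu host with Python and pip already available; the `fastscore-cli` package name, the compose file contents, and the dashboard address are assumptions to check against the current FastScore documentation:

```shell
# Prerequisite: install Docker (Ubuntu convenience script; see docs.docker.com
# for other platforms)
curl -fsSL https://get.docker.com | sh

# Install the FastScore command-line interface via pip
# (package name is an assumption; check the current FastScore docs)
pip install fastscore-cli

# Launch the FastScore fleet with docker-compose, using a compose file that
# defines the dashboard, engine, and Model Manage services
docker-compose up -d

# Point the CLI at the running fleet (hostname and port are illustrative)
fastscore connect https://localhost:8000
```

From there the dashboard is reachable in a browser and models can be loaded through either the CLI or the UI.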
Programming has redefined itself over the years, from simply writing code to finding effective solutions to problems across software development, algorithms, analytics, and more. These solutions require various software tools that are not always compatible with one another.
Run any model, any time, regardless of its native data science language.
Did you know FastScore, our language-agnostic analytic deployment engine, can run any model, any time, regardless of its native data science language? Watch this 4-minute video to see a gradient boosting machine model built in Python, and the same model built in R, deployed to an AWS instance in three easy steps. With the right abstractions, and by leveraging microservices, you can deploy a model simply by:
- Loading models in any language into the scoring engine.
- Selecting an input stream that delivers data into the model.
- Selecting an output stream for the data once scoring is complete.
Supported through both the FastScore dashboard and the command line, these steps let you load models and start scoring in minutes.
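To make the first step concrete, here is a minimal sketch of what a Python model loaded into the scoring engine can look like, assuming FastScore's convention of an `action` function that consumes one input record and yields scored output. The field names, schema names, and the logistic scoring stand-in below are illustrative, not the GBM from the video:

```python
# A minimal Python model in the action-function shape FastScore expects.
# The schema names in the smart comments are illustrative placeholders.

# fastscore.input: input-schema
# fastscore.output: output-schema

import math

def action(datum):
    # Toy stand-in for a trained model's predict step: score one
    # record and yield the result to the output stream.
    x = datum["feature"]
    score = 1.0 / (1.0 + math.exp(-x))  # logistic link, illustrative only
    yield {"id": datum["id"], "score": score}
```

In production, the engine reads records from the attached input stream, calls `action` once per record, and writes each yielded value to the output stream, so the model code never deals with transport details.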
Maximize flexibility to design models and ensure they are ready for deployment.
We are excited to make placing models into FastScore simple. Watch a step-by-step demo of our new Jupyter integration with Matthew Mahowald, Product Manager/Data Scientist.
A simple RESTful API for Jupyter lets you verify how models behave before they are uploaded into FastScore engines. Prepare and upload models, validate data schemas, identify potential production failures and errors, and score data and gain feedback, all while leveraging your full data science stack, including libraries like Pandas and data.table. Watch and get answers to our most frequently asked questions, including:
- What languages does the Jupyter platform support for FastScore?
- Can I check and validate my models before uploading them to FastScore engines?
- How can I ensure my model deploys before I hand it to the production team?
Jupyter integration gives the data science team maximum flexibility to design models in familiar environments while simultaneously ensuring they are ready for deployment. With Jupyter and FastScore you can test locally and deploy globally.
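As an illustration of the schema-validation step, the sketch below checks records against a declared input schema before anything is uploaded, which is the kind of check that catches production failures early. FastScore itself works with Avro schemas; this library-free checker is an illustrative stand-in for the idea, not the product's API:

```python
# Illustrative stand-in for validating records against a model's declared
# input schema in a notebook, before uploading the model to an engine.

TYPE_MAP = {"int": int, "double": float, "string": str, "boolean": bool}

def validate(record, schema):
    """Return a list of problems; an empty list means the record conforms."""
    problems = []
    for field in schema["fields"]:
        name, ftype = field["name"], field["type"]
        if name not in record:
            problems.append(f"missing field: {name}")
        elif not isinstance(record[name], TYPE_MAP[ftype]):
            problems.append(
                f"{name}: expected {ftype}, got {type(record[name]).__name__}"
            )
    return problems

schema = {"type": "record", "fields": [
    {"name": "id", "type": "int"},
    {"name": "feature", "type": "double"},
]}

print(validate({"id": 1, "feature": 0.25}, schema))  # conforming record
print(validate({"id": "x"}, schema))                 # two problems reported
```

Running checks like this over a sample of production data in the notebook surfaces type mismatches and missing fields while the model is still in the data scientist's hands.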
Simultaneous analytic iteration and deployment with FastScore.
In the first post in our series of video blogs, listen in as George from our engineering staff takes Brooke from our customer team through a demo of FastScore and creates an Analytic Operation Center. In the demo, you will see two gradient boosting machine models deployed and scored in real time. Both model instances are deployed in FastScore, then their inputs and outputs are combined in a Grafana dashboard, where we can monitor the analytic scoring as well as key performance metrics of the deployment. Watch as they discuss several interesting questions, including:
- How can you quickly change models in production from Python to R?
- What happens to the compute resources when I change model languages?
- How can I leverage more analytic engines to increase scoring rates?
- Are there differences in running models in Azure vs AWS?
Centralized deployment, iteration, and monitoring of analytics enables an Analytic Operation Center for the business: a single place to understand, manage, and extract value from the data science investment.
ODG is happy to announce our sponsorship of and participation in two ASA DataFest events this spring. ODG will work with The Ohio State University in Columbus, OH and Loyola University in Chicago, IL to help students get the most from their weekends. Our relationship with DataFest began last year, when we both mentored and judged the 2016 event at Loyola University. It was a blast, and we were happy to see students really dig into the problem and find unique solutions.
This is part 1 in a multi-part series discussing an approach to effective deployment of analytic models at scale.
It’s 2017. Your organization has been collecting valuable data for several years. The organization you work for is somewhere on the spectrum of analytic maturity from “we just hired our first data scientist” to “we are in the credit scoring business and have been developing critical analytics for decades”.