Rapid Incremental Adoption
FastScore slots into your existing CI/CD tooling, allowing rapid adoption by DevOps teams.
FastScore gives the enterprise a modern, microservices-based approach to machine learning and AI model operationalization. FastScore is architected as a suite of Docker-based microservice modules. Each is an optional but powerful service that connects the critical pieces of the analytics workflow: data science models, data sources, and applications. The underlying philosophy of FastScore is integration: provide unique value where appropriate, and leverage existing technology where available. Read on to learn more about each module and the value it brings to operationalizing models.
Engine represents the fundamental unit of execution for your models. With support for all the major data science languages and packages, think of Engine as a universal production container whose job is to ingest and operate any model. The FastScore Engine provides a single, unified approach for operationalizing models in dev, test, and production scenarios. Data science teams can use Engine to validate their models for production prior to promotion downstream. IT teams operate models as fleets of Engines, each a modern microservice, and can rest assured the critical math payload will execute flawlessly. Business teams receive the critical model outputs from Engine in any application, in any mode of operation. Learn more about Engine here.
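As a rough illustration of the kind of model asset an Engine ingests, here is a minimal sketch of a Python model. The `action` generator convention and the smart comments naming the schemas are assumptions for illustration, not the exact FastScore model format:

```python
# Illustrative sketch of a Python model as an Engine might run it.
# The `action` generator and the schema smart comments below are
# assumptions, not the definitive FastScore conventions.

# fastscore.input: input
# fastscore.output: output

def action(datum):
    """Score one record: here, a toy linear model."""
    score = 2.0 * datum["x"] + 1.0
    yield {"score": score}

# Local smoke test, as a data scientist might run before promotion:
results = [out for rec in [{"x": 1.0}, {"x": 2.5}] for out in action(rec)]
```

Because the model is just a function over records, the same asset can be validated on a laptop and then handed, unchanged, to an Engine downstream.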
Manage is a supporting microservice for the FastScore Engine. Manage provides the Engine with access to name-based execution assets, including models, data schemas, and data stream descriptors. Out of the box, Manage is a stand-alone tool and can manage assets for Engines without further integrations. However, the most common pattern among our customers is to connect FastScore Manage to an enterprise source code management tool such as Git or Bitbucket. With this integration, FastScore leverages existing repositories and processes in the enterprise and allows for natural and rapid adoption. Learn more about Manage here.
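The idea of name-based execution assets can be sketched as a simple registry. The asset names and structures below are illustrative assumptions (the schema is shown in an Avro-like shape, the stream descriptor with transport and encoding fields), not the exact formats Manage stores:

```python
# Illustrative registry of Manage-style name-based assets.
# Names and structures are assumptions for illustration only.
assets = {
    "models": {"toy-model": "def action(datum): yield datum"},
    "schemas": {
        "input": {"type": "record", "name": "Input",
                  "fields": [{"name": "x", "type": "double"}]},
    },
    "streams": {
        "rest-in": {"Transport": {"Type": "REST"}, "Encoding": "json"},
    },
}

def fetch(kind, name):
    """Resolve an asset by name, as an Engine would via Manage."""
    return assets[kind][name]
```

Backing such a registry with Git or Bitbucket is what lets FastScore reuse the repositories and review processes an enterprise already has.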
Deploy is a supporting microservice for the FastScore Engine. Deploy provides data science teams with native integrations to many of their favorite workbenches and model creation tools, such as Jupyter. With Deploy, it’s simple to create and test model assets for production before the model leaves the Data Scientist’s desk. Deploy also connects with the Manage microservice, allowing model assets to be pushed into common repositories for use by others and by downstream processes. Learn more about Deploy here.
Composer is a supporting microservice for the FastScore Engine. Composer provides a no-code GUI approach to building workflows of models, even if they are written in different languages. Composer creates automated deployment pipelines for each machine learning model in your workflow. Composer then automatically builds entire fleets of Engines, establishes configurations, and connects data transport layers, creating an operational workflow as specified by the user.
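The fleet-building step can be pictured as turning an ordered list of models into one Engine per step, chained by streams. Everything in this sketch (step names, file names, the stream naming) is hypothetical and only illustrates the shape of the result, not Composer's actual output:

```python
# Hypothetical sketch of a two-model, two-language workflow such as
# Composer might build: one Engine per step, chained by streams.
workflow = [
    {"name": "preprocess", "model": "clean.py", "language": "python"},
    {"name": "score",      "model": "score.R",  "language": "R"},
]

def plan_engines(steps):
    """Assign each step its own Engine and wire outputs to inputs."""
    engines = []
    for i, step in enumerate(steps):
        engines.append({
            "engine": f"engine-{i}",
            "runs": step["model"],
            "input": "source" if i == 0 else f"stream-{i - 1}",
            "output": "sink" if i == len(steps) - 1 else f"stream-{i}",
        })
    return engines
```

Note that the mixed-language aspect costs nothing here: each Engine runs its own model in its own runtime, and only the streams between them need to agree.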
Lineage is designed to provide a comprehensive historical view of an analytic model’s critical assets, updating in real time as the model progresses through its life cycle and systems. Lineage gives the enterprise user the ability to audit any model in minutes by providing REST APIs for accessing information about specific model events and assets, as well as tracking various pieces of metadata. By identifying individual metadata and elements, Lineage is able to group this information together to produce pathways that show the complete model life cycle. Users are also able to push lineage into common repositories such as ArangoDB or similar databases. Learn more about Lineage here.
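The pathway idea can be sketched as grouping a model's events into a time-ordered trail. The event fields and values here are illustrative assumptions; in practice this information is retrieved through Lineage's REST APIs:

```python
# Illustrative sketch of grouping model events into a lineage
# "pathway". Event fields and values are assumptions.
events = [
    {"model": "churn-v2", "ts": 3, "event": "deployed"},
    {"model": "churn-v2", "ts": 1, "event": "created"},
    {"model": "fraud-v1", "ts": 2, "event": "created"},
    {"model": "churn-v2", "ts": 2, "event": "validated"},
]

def pathway(model, log):
    """Time-ordered life-cycle trail for one model."""
    return [e["event"] for e in sorted(log, key=lambda e: e["ts"])
            if e["model"] == model]
```

An auditor asking "what happened to churn-v2, and when?" gets a single ordered trail rather than having to stitch events together by hand.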
Compare is a supporting microservice for the FastScore Engine. Compare generates statistics between two streams of data: a “truth stream” and a second stream under test. We created Compare as a general statistical comparison tool, having learned from our customers that we could not pre-guess the right metrics for a specific use case. Compare allows users to quickly apply customized or standard methods for comparing performance between two, or many, machine learning models. Learn more about Compare here.
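As a minimal sketch of the truth-stream idea, here is one possible metric over two paired numeric streams. The choice of mean absolute error is purely illustrative; the point of Compare is that users supply whichever metric fits their use case:

```python
# Minimal sketch of a Compare-style metric between a "truth stream"
# and a stream under test. The MAE metric here is illustrative;
# Compare lets users plug in their own comparison methods.

def compare_streams(truth, test):
    """Mean absolute error between paired numeric records."""
    pairs = list(zip(truth, test))
    return sum(abs(t - s) for t, s in pairs) / len(pairs)
```

The same pattern extends to many models under test: score each candidate stream against the one truth stream and rank the results.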