Have the data, but don't know how to approach the analytical problem, or need a baseline model? We've created solutions for common industry use cases, such as demand forecasting and fraud detection, and called them "templates". A template pipeline is the sequence of bricks needed to get from the data (sample datasets included for your convenience!) to the solution, such as an ML model and an inventory forecasting dashboard. It consists of renaming, missing-value treatment, aggregation, and modeling or prediction bricks. Only a handful of operations are needed to adapt these templates to your datasets and formats. Just connect your data, make minor tweaks to fit it into the existing pipeline, click "Run" - and you have a fully functioning end-to-end model for your problem. The model can be deployed, or the dashboard published, in a few clicks! Moreover, you can create your own template pipelines to share with other power users or viewers.

Enjoy zero- or low-code data pre-processing within the platform! We offer plenty of bricks for working with your data through a convenient drag-and-drop user interface. Automated or user-selected missing-value treatment, one-hot encoding, new feature creation, filtering, joining, and many more operations are at your disposal. Just drag the operation you need from the left menu, drop it onto the main screen, connect it to the previous brick's output in the graphical interface, click "Run" - and enjoy the processed dataset, which can be previewed at each step. Our system will warn you, or offer an intelligent suggestion, if anything goes wrong with any of the operations!
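For the curious, here is a minimal sketch of the kinds of operations these pre-processing bricks perform, written in pandas. The column names (`amt`, `amount`, `city`) and the sample data are hypothetical; inside the platform you would do all of this without writing a line of code.

```python
import pandas as pd

def preprocess(df: pd.DataFrame) -> pd.DataFrame:
    # renaming brick
    df = df.rename(columns={"amt": "amount"})
    # missing-value treatment brick (fill with the column mean)
    df["amount"] = df["amount"].fillna(df["amount"].mean())
    # one-hot encoding brick
    df = pd.get_dummies(df, columns=["city"])
    # filtering brick (keep positive amounts only)
    df = df[df["amount"] > 0]
    return df

raw = pd.DataFrame({"amt": [10.0, None, -5.0, 20.0],
                    "city": ["NY", "LA", "NY", "SF"]})
clean = preprocess(raw)
```

Each `df = ...` step corresponds to one brick in the graphical pipeline, and the intermediate result can be previewed after every step.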

Experimenting: ML or statistical modeling, exploratory analysis

Pipeline creation, model evaluation and selection

Drag and drop bricks from the data-processing, ML/modeling, exporting, and visualization/dashboard groups to build a sequence of actions on your data without coding. All phases of a machine learning project are available: data upload, cleansing/pre-processing, experimenting and modeling, deployment and post-processing, and applying the results as APIs or by creating and exporting dashboards. You can create machine learning models, compare them via plots and relevant statistics, and reproduce any processing or modeling by turning it into a pipeline.
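As an illustration of what "create models and compare them" amounts to, here is a hypothetical scikit-learn sketch on synthetic data; the platform performs the equivalent steps through modeling and evaluation bricks, so no coding is required.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Synthetic dataset standing in for your uploaded data.
X, y = make_classification(n_samples=300, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Fit two candidate models and score them on the same hold-out set.
candidates = {"logreg": LogisticRegression(max_iter=1000),
              "tree": DecisionTreeClassifier(random_state=0)}
scores = {}
for name, model in candidates.items():
    model.fit(X_tr, y_tr)
    scores[name] = accuracy_score(y_te, model.predict(X_te))
```

The `scores` dictionary is the code equivalent of the side-by-side comparison plots and statistics the platform produces automatically.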

Collaboration with other power users

A typical problem with machine learning toolsets is the lack of collaboration features. Here you get them out of the box. Create models or processing pipelines, or select part of a pipeline - then just save and share them. Re-use common templates across your organization to deal with the same problems.

Creating dashboards and APIs (automated) for results sharing and validation

Storytelling is an essential part of any ML project, and now it's easy: models are deployed in a few clicks, and the API is created automatically. Use pre-built dashboards or create your own without coding - share graphs, summaries, or data reports with other power users or viewers (managers, C-level, analysts, the marketing team) - whoever needs them! Do it by sharing a link (just like Google Docs!) or by scheduling pipeline re-runs and report delivery.

Model performance and versioning

Evaluate model quality and performance automatically across different datasets. A range of statistics and visualizations is at your disposal, such as R², the confusion matrix, and the ROC curve. Deploy and compare different versions of the same model using the deployment manager.
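For reference, the metrics named above can be computed with scikit-learn as follows; the numbers are toy values, and inside the platform these statistics and plots are produced for you automatically.

```python
from sklearn.metrics import confusion_matrix, r2_score, roc_auc_score

# R² for a regression model: 1.0 means a perfect fit.
y_true_reg = [3.0, 5.0, 2.5, 7.0]
y_pred_reg = [2.8, 5.1, 2.7, 6.8]
r2 = r2_score(y_true_reg, y_pred_reg)

# Confusion matrix and ROC AUC for a binary classifier.
y_true = [0, 0, 1, 1, 1]
y_prob = [0.1, 0.4, 0.35, 0.8, 0.9]   # predicted probabilities
y_pred = [p >= 0.5 for p in y_prob]   # threshold at 0.5
cm = confusion_matrix(y_true, y_pred)
auc = roc_auc_score(y_true, y_prob)   # area under the ROC curve
```

The confusion matrix rows correspond to the true classes and the columns to the predicted classes, which is how the platform's error-matrix visualization is laid out as well.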






Adapting template pipelines to an organization's data and needs


Data preparation: cleansing and transformation, aggregation and joining



If you have any questions about how to use the product, read our Wiki or email us.


Be the first among the innovators!
Receive a one-year special-price offer for early-bird registration!
Try the beta - leave feedback - get your discount.