Deploy your models with confidence

Understand your models, predict failure modes, and prevent bias with the Leap Interpretability Engine.

Understand what your model has learned

Enable targeted fine-tuning

Predict behaviour on unseen data

Efficiently improve your models

Debugging neural networks is a famously dark art – still mostly conducted with trial and error, fiddling with hyperparameters, and throwing more (possibly expensive) data at the problem. Knowing exactly what the model has learned makes it easier to figure out why it’s not performing well and fix it, with data augmentation specifically targeted to address the model's weaknesses.

Predict and prevent mistakes

Typically you'd try to predict the deployment performance of a model by looking at various metrics on a held-out test set. But there are no guarantees that the test set contains all possible edge cases or failure modes. We let you see exactly what a model has learned, so you can prevent embarrassing or dangerous failure modes – even if they're not captured in your test data.

Mitigate bias

As AI use becomes more prevalent, it's all too easy for powerful models to encode and reinforce existing biases from the data they're trained on – often in pernicious ways that are hard for humans to identify from the data alone. Our interpretability engine can help you catch these biases and fix them before deployment, preventing hidden discrimination.

Vision

Our computer vision interpretability engine is live – try it now!

Language

Research in progress, interpretability engine coming soon.

Tabular

Research in progress, interpretability engine coming soon.

The Interpretability Engine

See what your model has learned.

Learn more

Discover learned features, validate coherence, and identify biases.

Identify where and why your model is confused.

Learn more

Identify confusion, isolate entangled features, and quantify misclassification risk.

A screenshot of the Leap dashboard, showing the entanglements feature

Understand predictions on individual samples.

Learn more

Validate predictive features and understand how your model makes predictions.

Easy integrations

We can log results directly to your WandB projects! See docs for more.
from leap_ie.vision import engine

# Your Leap and WandB credentials
config = {
    "leap_api_key": "your_leap_api_key",
    "wandb_entity": "your_wandb_entity",
    "wandb_api_key": "your_wandb_api_key",
}

# Generate interpretability results and log them to your WandB project
df_results, dict_results = engine.generate(
    project_name="your_wandb_project_name",
    model=your_model,
    class_list=["hotdog", "not_hotdog"],
    config=config,
)
We can pull your models and metadata directly from Hugging Face! See docs for more.
from leap_ie.vision.models import get_model

preprocessing, model, class_list = get_model("microsoft/resnet-50", source="huggingface")
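
For example, here's a minimal sketch combining the two snippets above – it assumes the model and class list returned by get_model can be passed straight to engine.generate, and uses "resnet50_demo" as a hypothetical project name:

from leap_ie.vision import engine
from leap_ie.vision.models import get_model

# Pull a pretrained classifier, its preprocessing, and its class list from Hugging Face
preprocessing, model, class_list = get_model("microsoft/resnet-50", source="huggingface")

# Reuse the config from the WandB example above (Leap and WandB credentials)
df_results, dict_results = engine.generate(
    project_name="resnet50_demo",  # hypothetical project name
    model=model,
    class_list=class_list,
    config=config,
)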

Case Studies

The learned prototype of a tank detection model.

Tank Detection

Leap's Interpretability Engine uncovers a dangerous bias in a tank detection model, despite it achieving high accuracy on test data. We show how prototype generation can also be used during training to guide effective data augmentation and hyperparameter selection. Using these insights, we strategically retrain the model for robust, generalisable feature learning, enabling safer deployment in high-stakes scenarios.

Learn more

Stay in the Leap loop
