Fortuna: AWS's Comprehensive Python Library for Uncertainty Quantification

What is Fortuna

Fortuna is a library for uncertainty quantification, a crucial capability in applications where model predictions inform critical decisions. It helps estimate predictive uncertainty properly, so you can assess how reliable a model's predictions are and trigger human intervention when necessary. By providing tools to quantify and manage uncertainty, Fortuna makes predictive models more trustworthy, which is valuable for developers and researchers across many fields.

GitHub Stats

  • Stars: 888
  • Forks: 46
  • Language: Python
  • Created: 2022-11-17
  • License: Apache License 2.0

Fortuna is a library designed to facilitate uncertainty quantification in machine learning models, particularly useful for applications requiring critical decision-making. It helps assess the reliability of model predictions, trigger human intervention, and determine if a model can be safely deployed.

  • Uncertainty Quantification: Supports calibration and conformal methods for pre-trained models from any framework, and Bayesian inference methods for deep learning models written in Flax.
  • Usage Modes:
    • From Uncertainty Estimates: Minimal compatibility requirements and the quickest way to interact with the library. Offers conformal prediction methods for classification and regression.
    • From Model Outputs: Calibrate model outputs, estimate uncertainty, compute metrics, and obtain conformal sets. Better control over uncertainty estimates compared to the first mode.
    • From Flax Models: Requires deep learning models in Flax, enabling scalable Bayesian inference procedures to improve predictive uncertainty quantification.
  • Installation: Requires JAX installation. Can be installed via pip or built using Poetry with optional dependencies for transformers, Amazon SageMaker, documentation, and Jupyter notebooks.
  • Examples: Several usage examples are provided in the /examples directory.
  • Amazon SageMaker Integration: A simple pipeline to run Fortuna on Amazon SageMaker with detailed setup instructions.
  • Calibration and Conformal Prediction: Provides rigorous prediction sets that contain the true target with a user-specified probability (see the conformal classification sketch after this list).
  • Bayesian Inference: Enables scalable Bayesian inference for deep learning models to improve uncertainty quantification.
  • Model Output Calibration: Estimates uncertainty and computes metrics from model outputs.
  • Documentation and Support: Comprehensive documentation and examples for quickstart and advanced usage.
  • Material: Includes an AWS launch blog post and an arXiv paper detailing the library.
  • Citing Fortuna: Provides a citation format for academic references.
  • Contributing: Guidelines for contributing to the project.
  • License: Licensed under the Apache-2.0 License.
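
To make the idea of conformal prediction sets concrete, here is a minimal NumPy sketch of split conformal classification. It does not use Fortuna's API, and the probabilities and labels are illustrative placeholders; it only shows the standard recipe behind such prediction sets, which contain the true class with probability of roughly at least 1 - error.

python

import numpy as np

# Toy softmax outputs and labels on a held-out validation set (illustrative values).
val_probs = np.array([[0.7, 0.2, 0.1],
                      [0.1, 0.8, 0.1],
                      [0.2, 0.3, 0.5],
                      [0.6, 0.3, 0.1]])
val_targets = np.array([0, 1, 2, 0])
test_probs = np.array([[0.5, 0.4, 0.1],
                       [0.2, 0.2, 0.6]])
error = 0.25  # target miscoverage rate (coverage is 1 - error)

# Nonconformity score: one minus the probability assigned to the true class.
scores = 1.0 - val_probs[np.arange(len(val_targets)), val_targets]

# Conformal quantile of the validation scores, with finite-sample correction.
n = len(scores)
k = int(np.ceil((n + 1) * (1.0 - error)))
q_hat = np.sort(scores)[k - 1] if k <= n else np.inf

# Prediction set for each test point: every class whose score is within the quantile.
conformal_sets = [np.where(1.0 - probs <= q_hat)[0].tolist() for probs in test_probs]
print(conformal_sets)  # e.g. [[0], [2]]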

If you have pre-existing uncertainty estimates, you can use Fortuna to calibrate and obtain conformal prediction intervals.

python

import numpy as np

from fortuna.conformal import QuantileConformalRegressor

# Illustrative placeholder values; in practice, the bounds come from a quantile
# regression model's lower and upper quantile predictions.
val_lower_bounds = np.array([1.0, 2.0, 3.0])
val_upper_bounds = np.array([4.0, 5.0, 6.0])
test_lower_bounds = np.array([7.0, 8.0, 9.0])
test_upper_bounds = np.array([10.0, 11.0, 12.0])
val_targets = np.array([2.5, 4.5, 5.0])  # observed targets for the validation inputs
error = 0.1  # target miscoverage rate: intervals should cover ~90% of test targets

# Calibrate on the validation bounds and targets, then build conformal
# prediction intervals for the test inputs.
conformal_intervals = QuantileConformalRegressor().conformal_interval(
    val_lower_bounds=val_lower_bounds, val_upper_bounds=val_upper_bounds,
    test_lower_bounds=test_lower_bounds, test_upper_bounds=test_upper_bounds,
    val_targets=val_targets, error=error)
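
As a quick sanity check, the snippet below continues the example above and estimates empirical coverage. It assumes conformal_intervals is an array of [lower, upper] pairs aligned with the test inputs, and the test targets are hypothetical values used only for illustration.

python

import numpy as np

# Hypothetical targets for the three test inputs, for illustration only.
test_targets = np.array([8.5, 11.5, 9.5])

intervals = np.asarray(conformal_intervals)  # assumed shape: (n_test, 2)
covered = (test_targets >= intervals[:, 0]) & (test_targets <= intervals[:, 1])
print(f"Empirical coverage: {covered.mean():.2f}")  # should be close to 1 - error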

If you have model outputs from a trained model, you can use Fortuna to calibrate these outputs and estimate uncertainty.

python

import numpy as np

from fortuna.output_calib_model import OutputCalibClassifier

# Illustrative outputs of a trained two-class model on held-out validation and
# test data; in practice, use your model's actual outputs.
val_outputs = np.array([[0.2, 0.8], [0.4, 0.6]])
val_targets = np.array([0, 1])
test_outputs = np.array([[0.3, 0.7], [0.5, 0.5]])

# Calibrate the outputs on the validation data, then use the calibrated
# predictive distribution to compute the entropy of the test outputs.
calib_model = OutputCalibClassifier()
status = calib_model.calibrate(outputs=val_outputs, targets=val_targets)
test_entropies = calib_model.predictive.entropy(outputs=test_outputs)
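
Continuing the snippet above, one simple way to act on these uncertainty estimates is to flag high-entropy test points for human review. The threshold below is a hypothetical choice, not something prescribed by Fortuna.

python

import numpy as np

# Flag test points whose predictive entropy exceeds a chosen threshold
# (hypothetical value; tune it for your application).
entropy_threshold = 0.6
entropies = np.asarray(test_entropies)
needs_review = entropies > entropy_threshold
print(f"{int(needs_review.sum())} of {entropies.size} test points flagged for review")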

For deep learning models written in Flax, you can use Fortuna for Bayesian inference to improve uncertainty quantification.

python

from fortuna.data import DataLoader
from fortuna.prob_model import ProbClassifier

# Assuming you already have a Flax model and TensorFlow data loaders
# (placeholders below; substitute your own objects).
model = ...  # your Flax model (a flax.linen.Module)
train_tf_data_loader = ...
val_tf_data_loader = ...
test_tf_data_loader = ...

# Wrap the TensorFlow data loaders so Fortuna can consume them.
train_data_loader = DataLoader.from_tensorflow_data_loader(train_tf_data_loader)
val_data_loader = DataLoader.from_tensorflow_data_loader(val_tf_data_loader)
test_data_loader = DataLoader.from_tensorflow_data_loader(test_tf_data_loader)

# Build a probabilistic classifier around the model, run posterior inference
# (with calibration on the validation data), and compute predictive means on
# the test inputs.
prob_model = ProbClassifier(model=model)
status = prob_model.train(train_data_loader=train_data_loader, calib_data_loader=val_data_loader)
test_means = prob_model.predictive.mean(inputs_loader=test_data_loader.to_inputs_loader())
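
As a follow-up sketch, you could route low-confidence predictions to a human reviewer. This assumes test_means holds per-class predictive probabilities with shape (n_test, n_classes), and the confidence threshold is a hypothetical choice.

python

import numpy as np

# Flag test inputs whose top-class predictive probability falls below a
# chosen confidence threshold (hypothetical value; tune it for your use case).
means = np.asarray(test_means)
top_class_prob = means.max(axis=1)
uncertain = top_class_prob < 0.7
print(f"{int(uncertain.sum())} of {len(means)} test inputs need human review")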

  • Uncertainty Quantification: Fortuna simplifies the estimation of predictive uncertainty, crucial for critical decision-making and model reliability assessment.
  • Usage Modes: Offers three modes - from uncertainty estimates, model outputs, and Flax models - to cater to different application needs.
  • Calibration and Inference: Provides calibration and Bayesian inference methods, enhancing the accuracy of uncertainty quantification.
  • Integration: Supports integration with Amazon SageMaker for scalable deployment.
  • Documentation and Examples: Well-documented with examples and a user-friendly interface.
  • Widespread Adoption: Potential for widespread use in industries requiring reliable model predictions, such as healthcare and finance.
  • Advanced Methods: Continued development of advanced Bayesian inference and calibration techniques.
  • Community Engagement: Active community contributions and extensions, particularly through its open-source nature.

For further insights and to explore the project, check out the original awslabs/fortuna repository.

Content derived from the awslabs/fortuna repository on GitHub. Original materials are licensed under their respective terms.