We believe in transparency. Our research code, tools, and datasets are available for the community to use, study, and improve.
Our most impactful open-source projects
Production-ready inference framework with built-in safety guardrails, output filtering, and comprehensive logging for responsible AI deployment.
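As a rough illustration of what output filtering with logging can look like, here is a minimal, hypothetical guardrail: a deny-pattern filter that withholds matching outputs and logs every decision. The pattern, function names, and replacement message are illustrative, not the framework's actual API.

```python
import logging
import re

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("inference")

# Hypothetical guardrail: block outputs matching simple deny patterns.
# Real deployments use far richer policies than one regex.
DENY_PATTERNS = [re.compile(r"\b\d{3}-\d{2}-\d{4}\b")]  # e.g. SSN-like strings

def filter_output(text: str) -> str:
    for pat in DENY_PATTERNS:
        if pat.search(text):
            log.warning("Output blocked by guardrail: %s", pat.pattern)
            return "[output withheld by safety filter]"
    log.info("Output passed filters")
    return text

print(filter_output("The answer is 42."))
print(filter_output("My SSN is 123-45-6789."))
```

The key design point is that filtering and logging happen in one place, so every response is auditable.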
Interactive cost-quality-speed tradeoff analyzer based on our Tradeoff Selector™ methodology. Helps teams make informed decisions about AI system configurations.
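The core idea behind any such analyzer can be sketched as a weighted score over candidate configurations. The Tradeoff Selector methodology itself is more involved; the configurations, metrics, and weights below are entirely illustrative.

```python
# Illustrative configs, scored on quality, speed, and cost-efficiency
# (all normalized to [0, 1], higher is better; numbers are made up).
configs = {
    "small-model":  {"quality": 0.70, "speed": 0.95, "cost": 0.90},
    "medium-model": {"quality": 0.85, "speed": 0.80, "cost": 0.60},
    "large-model":  {"quality": 0.95, "speed": 0.50, "cost": 0.20},
}

def score(cfg, w_quality=0.5, w_speed=0.3, w_cost=0.2):
    """Weighted tradeoff score; weights encode a team's priorities."""
    return w_quality * cfg["quality"] + w_speed * cfg["speed"] + w_cost * cfg["cost"]

best = max(configs, key=lambda name: score(configs[name]))
print(best)  # with these weights, the cheap fast model wins
```

Changing the weights (say, quality-dominant for a medical use case) changes which configuration wins, which is exactly the tradeoff the tool makes explicit.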
Comprehensive benchmarking toolkit for evaluating fairness and bias in ML models across multiple protected attributes and use cases.
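One of the simplest fairness metrics such a toolkit evaluates is the demographic parity difference: the gap in positive-prediction rates between groups. The sketch below uses made-up predictions and group labels purely to show the computation.

```python
# Toy predictions and protected-group labels (illustrative data only).
predictions = [1, 0, 1, 1, 0, 1, 0, 0]
groups      = ["a", "a", "a", "a", "b", "b", "b", "b"]

def positive_rate(preds, grps, group):
    """Share of positive predictions within one group."""
    sel = [p for p, g in zip(preds, grps) if g == group]
    return sum(sel) / len(sel)

# Demographic parity difference: 0 means equal positive rates.
dpd = abs(positive_rate(predictions, groups, "a")
          - positive_rate(predictions, groups, "b"))
print(round(dpd, 2))  # 0.75 vs 0.25 -> 0.5
```

A full benchmark repeats this kind of computation across multiple metrics, protected attributes, and slices of the data.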
Interpretability toolkit providing SHAP, LIME, attention visualization, and concept-based explanations in a unified, user-friendly API.
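To give a flavor of model-agnostic explanation in the spirit of SHAP- and LIME-style tooling (without depending on those libraries), here is permutation importance: shuffle one feature at a time and measure how much a toy model's accuracy drops. The model and data are illustrative.

```python
import random

random.seed(0)

def model(row):
    """Toy 'model': predicts 1 iff feature 0 exceeds 0.5; ignores feature 1."""
    return 1 if row[0] > 0.5 else 0

X = [[random.random(), random.random()] for _ in range(200)]
y = [model(row) for row in X]  # labels depend on feature 0 only

def accuracy(rows):
    return sum(model(r) == t for r, t in zip(rows, y)) / len(y)

def permutation_importance(feature):
    """Accuracy drop when one feature's column is scrambled."""
    col = [r[feature] for r in X]
    random.shuffle(col)
    rows = [list(r) for r in X]
    for r, v in zip(rows, col):
        r[feature] = v
    return accuracy(X) - accuracy(rows)

print(permutation_importance(0))  # large drop: feature 0 drives predictions
print(permutation_importance(1))  # zero: feature 1 is irrelevant to the model
```

The appeal of a unified API is that this kind of attribution, SHAP values, and attention maps can all be requested through one interface instead of three libraries.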
Document integrity verification using cryptographic hashing and ML-based tamper detection. Detects manipulated PDFs with 99.7% accuracy.
Calibrated uncertainty estimation for deep learning models using conformal prediction, ensembles, and Bayesian methods.
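Of the methods listed, split conformal prediction is the easiest to show end to end. This sketch (toy predictor, synthetic data) calibrates on held-out residuals and then emits intervals with roughly 90% coverage.

```python
import math
import random

random.seed(1)

def predict(x):
    """Toy point predictor for y = 2x + noise."""
    return 2.0 * x

# Held-out calibration set: residuals between truth and prediction.
cal = [(x, 2.0 * x + random.uniform(-1, 1))
       for x in [random.random() for _ in range(99)]]
scores = sorted(abs(y - predict(x)) for x, y in cal)

alpha = 0.1  # target miscoverage: intervals cover ~90% of new points
n = len(scores)
k = math.ceil((n + 1) * (1 - alpha)) - 1  # conformal quantile index
q = scores[k]

def interval(x):
    """Prediction interval: point estimate +/- calibrated quantile."""
    return (predict(x) - q, predict(x) + q)

lo, hi = interval(0.5)
print(round(lo, 3), round(hi, 3))
```

The coverage guarantee comes from the quantile rank `ceil((n+1)(1-alpha))`, not from any assumption about the model being correct, which is what makes the method attractive for deep learning.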
Research datasets freely available for academic and commercial use
250K annotated examples for bias detection across 7 protected categories in NLP tasks.
50K pristine and 50K tampered documents for training document integrity models.
Evaluation dataset for healthcare AI safety with expert physician annotations.
Multi-domain benchmark for calibration and uncertainty quantification methods.
Join our community of contributors building trustworthy AI
Browse our repositories for issues labeled "good first issue" or "help wanted" to find a task that matches your skills.
Fork the repository, create a feature branch, and implement your changes following our contribution guidelines.
Open a pull request with a clear description. Our maintainers will review and provide feedback within 48 hours.
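The branch-and-commit part of those steps looks like this on the command line. This demo builds a throwaway local repository so it runs anywhere; in practice you would replace the `git init` step with `git clone <your-fork-url>` and finish with `git push` before opening the pull request.

```shell
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q demo && cd demo
git config user.email "you@example.com"   # local identity for the demo commits
git config user.name "You"
echo "hello" > README.md
git add README.md && git commit -qm "Initial commit"

# Create a feature branch and commit your change on it.
git checkout -qb feature/my-improvement
echo "fix" >> README.md
git add README.md && git commit -qm "Describe your change clearly"

git log --oneline -2
git branch --show-current   # feature/my-improvement
```

Keeping each pull request on its own feature branch makes review and feedback much faster.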
Whether you're a researcher, engineer, or student, there's a place for you in our open-source community. Let's build responsible AI together.