MPC language

Privacy by Architecture

Write the computation. The compiler decides what leaves.

Write the logic you need — analytics, ML, matching, whatever. Declare what the output can reveal. The raw data never moves, never centralizes, and can't be accessed even if someone tries.
  # Secret Value Example - Stoffel MPC Program
  # Takes user age and salary as private inputs

  # Calculate eligibility score based on age and salary
  proc calculate_eligibility(age: secret int64, salary: secret int64): secret int64 =
    # Age factor: people between 25-65 get higher scores
    let age_score = age * 2
    
    # Salary factor: higher salary increases eligibility
    let salary_score = salary / 1000
    
    # Combined eligibility score
    let total_score = age_score + salary_score
    return total_score

  # Determine risk category based on inputs
  proc assess_risk_category(age: secret int64, salary: secret int64): secret int64 =
    let base_risk = 100
    let age_adjustment = age / 2
    let salary_adjustment = salary / 10000
    
    let final_risk = base_risk - age_adjustment + salary_adjustment
    return final_risk

  # Main computation function
  proc main() =
    # These would be secret inputs from different parties in real MPC
    let user_age: secret int64 = 35      # Age input
    let salary: secret int64 = 75000     # Salary input
    
    # Perform secure computations
    let eligibility_score = calculate_eligibility(user_age, salary)
    let risk_score = assess_risk_category(user_age, salary)
    
    # Results computed without revealing individual age or salary
    discard eligibility_score
    discard risk_score

Stoffel Lang: What is it?

A language for getting the insights without taking custody of the underlying data. You need the aggregate scores, the match results, the risk signals. You don't need the raw records — and with Stoffel Lang, you never have to touch them.

Type-level guarantees

If it compiles, you didn't accidentally leak something. No code review debates about "is this safe?" — the compiler already checked.

Normal developer workflow

Local sim. Unit tests. Deterministic builds. This isn't a research project—it's infrastructure that works like infrastructure.

Patterns you can copy

Threshold checks. Overlap detection. Risk scoring. Federated aggregation. The gnarly stuff, already written.
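To make the threshold-check pattern concrete, here is a minimal plain-Python sketch of the underlying idea — additive secret sharing — not Stoffel Lang itself. The names `share` and `over_threshold` are illustrative, and a real MPC protocol would run the comparison on shares too; here only the final yes/no is treated as the declared output.

```python
import random

PRIME = 2**61 - 1  # modulus for additive sharing

def share(value: int, n_parties: int) -> list[int]:
    """Split `value` into n random shares that sum to it mod PRIME."""
    parts = [random.randrange(PRIME) for _ in range(n_parties - 1)]
    parts.append((value - sum(parts)) % PRIME)
    return parts

def over_threshold(private_values: list[int], threshold: int,
                   n_parties: int = 3) -> bool:
    """Reveal only whether the total crosses `threshold` --
    never the individual inputs."""
    shared = [share(v, n_parties) for v in private_values]
    # Each party sums the shares it holds, locally.
    local_sums = [sum(col) % PRIME for col in zip(*shared)]
    total = sum(local_sums) % PRIME  # the one value that gets opened
    return total >= threshold

print(over_threshold([35, 75, 20], threshold=100))  # True (130 >= 100)
```

No party ever holds a raw input; each holds one random-looking share per value, and only the aggregate comparison result is opened.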

What makes this different from "we promise not to look"?

Answers-only outputs

You get the count. The match status. The risk score. Architectural enforcement, not policy.

No plaintext to expose, retrieve, or breach — anywhere in the stack.

Building blocks that compose

Aggregates. Thresholds. Comparisons. Key operations. Write your logic the way you'd write any function—just with secrets that stay secret.

Compile-time leak prevention

That thing where you accidentally log sensitive data? The build fails instead. Catches it before standup, not after incident reports or breach notifications.
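A rough runtime analogue of that compile-time check, sketched in plain Python (Stoffel Lang enforces this statically; the `Secret` class and `reveal` method here are illustrative, not the real API): a wrapper that supports arithmetic but refuses to be formatted, so an accidental log fails loudly instead of leaking.

```python
class Secret:
    """Wraps a sensitive value: arithmetic works, printing does not."""
    def __init__(self, value):
        self._value = value

    def __add__(self, other):
        other_v = other._value if isinstance(other, Secret) else other
        return Secret(self._value + other_v)

    def __str__(self):
        # Any attempt to format/log the value fails loudly.
        raise TypeError("cannot format a secret value")
    __repr__ = __str__

    def reveal(self):
        """Explicit, auditable declassification point."""
        return self._value

salary = Secret(75_000)
bonus = salary + 5_000          # arithmetic on secrets is fine
try:
    print(salary)               # an accidental log...
except TypeError as e:
    print("blocked:", e)        # ...fails instead of leaking
print("revealed:", bonus.reveal())
```

The compile-time version is strictly stronger — the build fails before the code ever runs — but the shape of the guarantee is the same: values flow through computation freely and exit only through an explicit reveal.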

How does this actually work?

  1. Mark what's sensitive

`secret` types for the stuff you don't want in logs, databases, audit trails, or Slack screenshots. `public` for everything else.

  2. Write your computation

Analytics. ML pipelines. Matching logic. Whatever you need. It looks like normal code because it is normal code.

  3. Explicit reveals

Want to output the aggregate count? Fine. The individual records? Compiler says no.

MPC happens under the hood. You get answers. The raw data never decrypts, never centralizes, never becomes your problem — and never becomes your liability in a breach, an audit, or a regulatory review.
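What "under the hood" means can be sketched end to end in plain Python (not Stoffel Lang, and real MPC uses a cryptographic protocol with network communication; `share` and the server count are illustrative): two parties' salaries are additively shared across three servers, the servers add shares locally, and only the declared output — the total — is ever reconstructed.

```python
import random

MOD = 2**61 - 1  # modulus for additive sharing

def share(value: int, n: int = 3) -> list[int]:
    """Split `value` into n random shares that sum to it mod MOD."""
    parts = [random.randrange(MOD) for _ in range(n - 1)]
    return parts + [(value - sum(parts)) % MOD]

alice, bob = 75_000, 91_000              # raw inputs stay on-device
a_shares, b_shares = share(alice), share(bob)

# No single server's share reveals anything about an input on its own;
# each share is statistically indistinguishable from random noise.

# Yet each server can add the shares it holds, locally:
sum_shares = [(a + b) % MOD for a, b in zip(a_shares, b_shares)]

# Only the declared output -- the total -- is ever opened:
total = sum(sum_shares) % MOD
print("average salary:", total // 2)     # 83000; neither input revealed
```

The raw values exist only at their sources; everything in between is shares, and the only thing that ever comes back out is the answer you declared.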

© 2025 Stoffel Labs Inc. All rights reserved.
