
Managing Intelligent Contract Operations with the Equivalence Principle

The Equivalence Principle is a core concept in GenLayer's Intelligent Contract framework. It ensures consistency and reliability when handling non-deterministic operation results, such as responses from Large Language Models or web data retrieval, by establishing a standard for validators to agree on the correctness of these outputs. The SDK's equivalence principle functions give you detailed control over how these outputs are validated.

Depending on how you want the validators to work, you can choose from a few options, such as a principle that uses LLMs or one that just uses a strict comparison.

💡 Advanced users may also choose to write their own Equivalence Principle.

The Equivalence Principle involves multiple randomly selected validators who determine whether different outputs from non-deterministic operations can be considered equivalent. One validator acts as the leader and proposes the output; the others validate the leader's proposal and, if they accept it, adopt it in place of their own computation.

Equivalence Principle Options

Validators work to reach a consensus on whether the result set by the leader is acceptable, which might involve direct comparison or qualitative evaluation, depending on the contract's design. If the validators fail to reach consensus, whether because of differing data interpretations or an error in data processing, the transaction becomes Undetermined.

Comparative Equivalence Principle

In the Comparative Equivalence Principle, the leader and the validators perform identical tasks and then directly compare their respective results with the predefined criteria to ensure consistency and accuracy. This method uses an acceptable margin of error to handle slight variations in results between validators and is suitable for quantifiable outputs. However, computational demands and associated costs increase since multiple validators perform the same tasks as the leader.

gl.eq_principle.prompt_comparative(
    your_non_deterministic_function,
    "The result must not differ by more than 5%"
)

For example, suppose an Intelligent Contract is tasked with fetching the follower count of a Twitter account, and the Equivalence Principle specifies that follower counts must not differ by more than 5%. Each validator then compares its own result to the leader's, using its own LLM to check that the difference falls within this margin.
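The 5% margin check itself is plain arithmetic. A minimal sketch (ordinary Python, independent of the GenLayer SDK) of the comparison each validator effectively performs:

```python
def within_tolerance(leader_value: float, my_value: float, max_pct: float = 5.0) -> bool:
    """True if my_value is within max_pct percent of leader_value."""
    if leader_value == 0:
        # Avoid division by zero: only an exact zero matches a zero leader value
        return my_value == 0
    return abs(my_value - leader_value) / abs(leader_value) * 100 <= max_pct
```

For instance, `within_tolerance(10_000, 10_400)` is True (a 4% difference), while `within_tolerance(10_000, 11_000)` is False (10%). In the comparative principle, this kind of judgment is delegated to the validators' LLMs via the criteria string rather than hard-coded.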

Non-Comparative Equivalence Principle

The GenLayer SDK provides the gl.eq_principle.prompt_non_comparative function for handling most scenarios that require subjective NLP tasks.

Non-Comparative Equivalence Principle Parameters

The gl.eq_principle.prompt_non_comparative function takes three key parameters that define how validators should evaluate non-deterministic operations:

  1. input (function)

    The input parameter represents the original data or function that needs to be processed by the task. For instance, when building a sentiment analysis contract, the input might be a text description that needs to be classified. The function processes this input before passing it to the validators for evaluation.

  2. task (str)

    The task parameter provides a clear and concise instruction that defines exactly what operation needs to be performed on the input. This string should be specific enough to guide the validators in their evaluation process while remaining concise enough to be easily understood. For example, in a sentiment analysis context, the task might be "Classify the sentiment of this text as positive, negative, or neutral". This instruction serves as a guide for both the leader and validators in processing the input.

  3. criteria (str)

    The criteria parameter defines the specific rules and requirements that validators use to determine if an output is acceptable. This string should contain a comprehensive set of validation parameters that ensure consistency across different validators. While the criteria can be structured in various ways, it typically outlines the expected format of the output and any specific considerations that should be taken into account during validation. For example:

    criteria = """
                Output must be one of: positive, negative, neutral
                Consider context and tone
                Account for sarcasm and idioms
            """

    This criteria string helps validators make consistent decisions about whether to accept or reject the leader's proposed output, even when dealing with subjective or non-deterministic results.

Example Usage

class SentimentAnalyzer(gl.Contract):
    @gl.public.write
    def analyze_sentiment(self, text: str) -> str:
        self.sentiment = gl.eq_principle.prompt_non_comparative(
            input=text,
            task="Classify the sentiment of this text as positive, negative, or neutral",
            criteria="""
                Output must be one of: positive, negative, neutral
                Consider context and tone
                Account for sarcasm and idioms
            """
        )

In this example:

  • input is the text to analyze
  • task defines what operation to perform
  • criteria ensures consistent validation across validators without requiring exact output matching
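Conceptually, the three parameters are combined into a prompt that each validator's LLM uses to judge the leader's proposal. The exact prompt GenVM builds is an internal implementation detail; this plain-Python sketch (a hypothetical layout, not the real format) only illustrates how the pieces fit together:

```python
def build_validation_prompt(task: str, criteria: str, input_text: str, leader_output: str) -> str:
    # Hypothetical prompt layout; GenVM's actual prompt format is internal
    return (
        f"Task: {task}\n"
        f"Input: {input_text}\n"
        f"Proposed output: {leader_output}\n"
        f"Criteria:\n{criteria}\n"
        "Does the proposed output satisfy the criteria? Answer true or false."
    )
```

This is why the non-comparative principle does not need exact output matching: each validator asks its own LLM whether the leader's output satisfies the criteria, rather than reproducing the output itself.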

Strict Equivalence Principle

The Strict Equivalence Principle requires exact matches between validator outputs. Use it when you need deterministic results or when dealing with objective, factual data.

When to use:

  • Fetching specific data that should be identical (web content, API responses)
  • Boolean operations or exact classifications
  • When network consensus on exact values is critical

How it works:

  • All validators execute the same function
  • Results must match exactly
Example: fetching structured data from a web source. All validators run fetch_content, and their return values must be identical:

def fetch_content():
    response = gl.nondet.web.get("https://example.com")
    body = json.loads(response.body)
    return body["valid"]

self.valid = gl.eq_principle_strict_eq(fetch_content)

Example: a boolean classification. The prompt constrains the LLM to answer only 'true' or 'false' so that outputs can match exactly:

def check_condition():
    url = "https://example.com"  # example URL
    web_data = gl.get_webpage(url)
    prompt = f"Is this text positive? Answer only 'true' or 'false': {web_data}"
    return gl.exec_prompt(prompt).strip()

result = gl.eq_principle_strict_eq(check_condition)
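The consensus rule behind "must match exactly" is simple equality. A plain-Python sketch (independent of the GenLayer SDK) of the check:

```python
def strict_consensus(leader_result, validator_results) -> bool:
    """All validators must reproduce the leader's result exactly."""
    return all(v == leader_result for v in validator_results)
```

For example, `strict_consensus("true", ["true", "true", "true"])` passes, while a single deviating value such as `strict_consensus("true", ["true", "false", "true"])` fails. This is why strict equality is a poor fit for free-form LLM output, where harmless wording differences would break consensus.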

Custom Leader/Validator Pattern

For advanced use cases, you can implement your own equivalence logic using the leader/validator pattern. GenLayer provides two variants with different error handling approaches.

Safe Custom Pattern (gl.vm.run_nondet)

Provides comprehensive error handling by running validators in a sandbox.

When to use:

  • Production environments where stability is critical
  • Complex validation logic that might have errors
  • When you want automatic error comparison and fallback handling

How it works:

  • Leader function executes the main operation
  • Validator function runs in a sandbox for safety
  • Automatic error handling with customizable comparison functions
def analyze_article_quality(self, assignment_id: str, article_text: str):
    assignment = self.assignments[assignment_id]
    prompt = f"""
    Analyze this article's quality and provide a score from 0 to 2.
    Article: {article_text}
    Guidelines: {assignment.title} - {assignment.description}
    Return JSON: {{"analysis": "detailed explanation", "score": 0-2}}
    """
 
    def leader_fn():
        response = gl.nondet.exec_prompt(prompt)
        return json.loads(response)
 
    def validator_fn(leader_res: gl.vm.Result) -> bool:
        if not isinstance(leader_res, gl.vm.Return):
            return False
        
        validator_res = leader_fn()
        leader_score = leader_res.calldata["score"]
        validator_score = validator_res["score"]
        
        # Exact match for score 0, ±1 tolerance for others
        if validator_score == 0 or leader_score == 0:
            return validator_score == leader_score
        return abs(validator_score - leader_score) <= 1
 
    return gl.vm.run_nondet(leader_fn, validator_fn)

Fast Custom Pattern (gl.vm.run_nondet_unsafe)

The most generic API for non-deterministic execution. It does not run the validator in an extra sandbox to catch errors: a validator error results in a Disagree vote from that executor, the same as if the function had returned False.

When to use:

  • Performance-critical applications where sandbox overhead matters
  • Simple validators that are unlikely to error
  • When you want direct control over error handling

Note: Use run_nondet instead if you want to catch and inspect validator errors, or run the error-prone parts of your validator in a sandbox yourself.

def analyze_sentiment_fast(self, text: str):
    def leader_fn():
        prompt = f"Rate sentiment 1-10: {text}"
        result = gl.nondet.exec_prompt(prompt)
        return int(result.strip())
    
    def validator_fn(leader_result: gl.vm.Result) -> bool:
        if not isinstance(leader_result, gl.vm.Return):
            return False
        
        score = leader_result.calldata
        return isinstance(score, int) and 1 <= score <= 10
    
    return gl.vm.run_nondet_unsafe(leader_fn, validator_fn)

Advanced Error Handling with run_nondet

# Custom error comparison functions
def custom_validation_with_error_handling(self, data: str):
    def leader_fn():
        # Note: process_external_data() is not provided by GenVM - implement your own logic
        return process_external_data(data)
    
    def validator_fn(leader_result):
        if not isinstance(leader_result, gl.vm.Return):
            return False
        # Note: validate_result() is not provided by GenVM - implement your own logic
        return validate_result(leader_result.calldata)
    
    def compare_user_errors(error1: gl.vm.UserError, error2: gl.vm.UserError) -> bool:
        # Custom logic for comparing user errors
        return "timeout" in error1.message and "timeout" in error2.message
    
    def compare_vm_errors(error1: gl.vm.VMError, error2: gl.vm.VMError) -> bool:
        # Custom logic for comparing VM errors
        return "memory" in error1.message and "memory" in error2.message
    
    return gl.vm.run_nondet(
        leader_fn, 
        validator_fn,
        compare_user_errors=compare_user_errors,
        compare_vm_errors=compare_vm_errors
    )

Data Flow

The Leader/Validator Pattern

Behind the scenes, GenLayer's Equivalence Principle is implemented using a leader/validator pattern. This pattern ensures security and consensus when dealing with non-deterministic operations.

Each nondeterministic block consists of two functions:

Leader Function

  • Executes only on the designated leader node
  • Performs operations like web requests or NLP
  • Returns a result that will be shared with validator nodes
def leader() -> T:
    # Performs the actual nondeterministic operation
    pass

Validator Function

  • Executes on multiple validator nodes
  • Receives the leader's result as input
  • Must independently verify the result's validity
  • Returns True to accept or False to reject
def validator(leader_result: gl.vm.Return[T] | gl.vm.VMError | gl.vm.UserError) -> bool:
    # Verifies the leader's result
    # Returns True if acceptable, False otherwise
    # Note: leader function could end with a UserError or VMError
    # reasons for them can be "host unreachable" and "OOM", respectively
    pass
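To build intuition for this data flow, here is a plain-Python mock of the pattern. This is a conceptual model only, not GenVM's actual consensus implementation: the Return and UserError classes stand in for gl.vm.Return and gl.vm.UserError, and real consensus rules are more involved than a simple majority:

```python
from dataclasses import dataclass

@dataclass
class Return:
    calldata: object

@dataclass
class UserError:
    message: str

def run_nondet_mock(leader_fn, validator_fn, n_validators: int = 4):
    # The leader executes first; an exception becomes an error value,
    # which validators still get to inspect and vote on
    try:
        leader_result = Return(leader_fn())
    except Exception as e:
        leader_result = UserError(str(e))
    # Each validator independently votes on the leader's result
    votes = [validator_fn(leader_result) for _ in range(n_validators)]
    # A simple majority decides acceptance in this mock
    return leader_result, sum(votes) > n_validators // 2
```

Usage: with a leader returning 7 and a validator that accepts only `Return(7)`, the result is accepted; with a leader that raises, the validator sees a UserError and the result is rejected.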

Writing Secure Validator Functions

❌ Bad Example

def validator(leader_result):
    return True  # Always accepts - insecure!

This validator is useless because it allows the leader to return arbitrary data without verification. A single malicious node could then produce a rigged result, such as a false match outcome in a football prediction market contract, for example.

✅ Good Examples

Independent Verification:

def validator(leader_result):
    # Independently fetch the same data
    # (fetch_external_data and calculate_similarity are illustrative helpers)
    my_data = fetch_external_data()
 
    # We computed our data successfully; if the leader errored, vote to disagree
    if not isinstance(leader_result, gl.vm.Return):
        return False
 
    # Verify the leader's result matches within tolerance
    return calculate_similarity(leader_result.calldata, my_data) > 0.9

NLP Validation:

def validator(leader_result):
    # Fetch the original content independently
    # (fetch_webpage and is_valid_summary are illustrative helpers)
    original_text = fetch_webpage(url)
 
    # If the leader errored, vote to disagree
    if not isinstance(leader_result, gl.vm.Return):
        return False
 
    # Use NLP to verify the summary quality
    return is_valid_summary(original_text, leader_result.calldata)

Key Principles for Custom Validators

  1. Independent Verification: Validators should independently verify results, not blindly trust the leader
  2. Tolerance for Nondeterminism: When dealing with AI outputs or time-sensitive data, allow reasonable variations:
    • Use similarity thresholds instead of exact matches
    • Account for timing differences in data fetches
    • Accept semantically equivalent AI outputs
  3. Error Handling: Always check whether the leader result is an error before processing it. gl.vm.run_nondet provides fallbacks for validator-function errors, while gl.vm.run_nondet_unsafe does not. If you use the latter, you may want to execute most of the validator's code in a sandbox and compare errors with your own custom logic
  4. Security First: The validator's role is to prevent malicious or incorrect data from being accepted. When in doubt, reject
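A similarity threshold does not have to be sophisticated. As one concrete option, a dependency-free word-level Jaccard similarity (a hypothetical helper you might call from a validator, not part of the GenLayer SDK) looks like:

```python
def jaccard_similarity(a: str, b: str) -> float:
    """Word-level Jaccard similarity between two strings, from 0.0 to 1.0."""
    words_a, words_b = set(a.lower().split()), set(b.lower().split())
    if not words_a and not words_b:
        # Two empty strings are trivially identical
        return 1.0
    return len(words_a & words_b) / len(words_a | words_b)
```

A validator could then accept a leader's text when `jaccard_similarity(my_text, leader_text) > 0.9`, tolerating minor wording drift while still rejecting substantially different outputs. For subjective content, an NLP-based check is usually more robust than lexical overlap.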

Best Practices

  • Keep non-deterministic operations well-defined
  • Design validators that can handle slight variations in data
  • Consider network delays and timing differences
  • Use NLP for subjective tasks validation
  • Prefer the built-in equivalence principle templates; they can be customized per node and offer better security against prompt-injection attacks
  • Document expected tolerances and validation criteria
  • Test validator functions with various edge cases and malicious inputs