Nesa has pioneered the AI Terminal (AIT), the world's first end-to-end execution interface and machine network for running inference queries on-chain. The AIT network provides:

- Special nodes enhanced with attestation TEEs for secret share distribution.
- Secure multi-party computation (SMPC) for privacy-preserving computation, with zero-knowledge proof (ZKP) schemes for verifiability.
- A standardized and secure execution environment for running AI inference on containerized models.
- Inference request queueing for high-throughput, low-latency execution.
- A fair and secure methodology for inference committee selection using VRFs and an organizational structure of duties.
- Flexibility in how inference results are aggregated for final output.
We containerize AI models and query templates on-chain, host an ecosystem of augmentative off-chain services from vector storage to RAG, and supply a network of NES miners for AI inference execution. Miners pull containers, run them locally on decentralized TEE compute provided by Nesa, and instantly reach validation consensus, reporting the query response on-chain in a fully privacy-preserving transaction using a ZK proof.
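To make the flow above concrete, here is a minimal sketch of one miner's pass through it. All helper functions are hypothetical placeholders, not Nesa's actual SDK; the real network handles container pulls, TEE attestation, and proof generation through its own tooling.

```python
# Minimal sketch of a miner handling one inference job. Every helper below is
# an illustrative stub, not part of Nesa's actual tooling.
import hashlib
import json


def pull_container(model_cid: str) -> dict:
    """Placeholder: fetch the containerized model referenced on-chain."""
    return {"model_cid": model_cid}


def run_in_tee(container: dict, query: dict) -> dict:
    """Placeholder: execute the model locally inside a TEE."""
    return {"answer": f"inference for {query['prompt']}"}


def prove_execution(container: dict, query: dict, output: dict) -> str:
    """Placeholder: produce a proof that the output came from this container."""
    return "zk-proof-bytes"


def handle_job(model_cid: str, query: dict) -> dict:
    container = pull_container(model_cid)                 # 1. pull the container
    output = run_in_tee(container, query)                 # 2. run it locally on TEE compute
    commitment = hashlib.sha256(                          # 3. commit to the result
        json.dumps(output, sort_keys=True).encode()
    ).hexdigest()
    proof = prove_execution(container, query, output)     # 4. attach a ZK proof
    return {"commitment": commitment, "proof": proof}     # 5. report on-chain for consensus


print(handle_job("bafy...model", {"prompt": "hello"}))
```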
Model consistency and inference reliability are the linchpins of the AIT. To achieve uniform model execution, the AIT configuration specifies every aspect that could influence the computational outcome, including the OS, compiler versions, hardware specifications, and precise compilation options and flags. By rigorously defining the execution environment, Nesa eliminates variability across AI stacks.
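As a rough illustration of the kind of fields such a pinned configuration covers, consider the sketch below. The field names and values are assumptions for the example, not Nesa's actual schema.

```python
# Hypothetical example of a pinned AIT execution configuration; field names
# and values are illustrative assumptions, not Nesa's on-chain format.
AIT_EXECUTION_CONFIG = {
    "os": "ubuntu-22.04",
    "python": "3.10.12",
    "compiler": {"name": "gcc", "version": "11.4.0", "flags": ["-O2", "-fno-fast-math"]},
    "hardware": {"arch": "x86_64", "accelerator": "nvidia-a100", "driver": "535.104"},
    "runtime": {"framework": "torch==2.1.0", "deterministic_ops": True},
}
```

Pinning flags such as `-fno-fast-math` matters because floating-point optimizations are a common source of cross-node divergence.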
Trust in inference is another keystone of the AIT network. To achieve this, the execution protocol within the AIT prescribes a series of steps that every node must follow. This protocol covers initialization procedures, data input conventions, model execution, and output handling. By standardizing the execution flow, the network can reliably predict and replicate the behavior of AI models in real time.
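One part of such standardization is canonical input and output handling, so that every node hashes byte-identical results. The sketch below shows the idea; the encoding choices (sorted-key JSON, UTF-8) are illustrative assumptions, not Nesa's specification.

```python
# Sketch of canonical output handling so every node hashes identical bytes.
import hashlib
import json


def canonical_bytes(obj: dict) -> bytes:
    """Serialize inputs/outputs the same way on every node."""
    return json.dumps(obj, sort_keys=True, separators=(",", ":")).encode("utf-8")


def output_digest(output: dict) -> str:
    """The digest that nodes compare when reaching consensus."""
    return hashlib.sha256(canonical_bytes(output)).hexdigest()


# Two nodes following the same conventions produce the same digest,
# even if their local dicts were built in different key orders.
a = {"label": "cat", "score": 0.97}
b = {"score": 0.97, "label": "cat"}
assert output_digest(a) == output_digest(b)
```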
A pioneering feature within the AIT network is Nesa's hybrid enhanced privacy system. This framework uses a two-phase, commit-reveal transaction structure to safeguard against dishonest behavior and free-riding. This ensures that nodes are incentivized to perform their computations honestly and that users can trust the integrity of the inference results.
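A minimal commit-reveal sketch, under simple assumptions: each node first posts a salted hash of its result (commit), then discloses the result and salt (reveal) only after all commits are in, so no node can copy another's answer.

```python
# Toy commit-reveal scheme; the hashing and salt choices are illustrative.
import hashlib
import secrets


def commit(result: str) -> tuple[str, str]:
    salt = secrets.token_hex(16)
    digest = hashlib.sha256((salt + result).encode()).hexdigest()
    return digest, salt            # publish the digest now, keep the salt private


def verify_reveal(digest: str, result: str, salt: str) -> bool:
    return hashlib.sha256((salt + result).encode()).hexdigest() == digest


# Phase 1: a node commits without exposing its answer.
d, s = commit('{"label": "cat"}')
# Phase 2: once every node has committed, the answer and salt are revealed.
assert verify_reveal(d, '{"label": "cat"}', s)
assert not verify_reveal(d, '{"label": "dog"}', s)
```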
Many AI models introduce randomness during inference, which can pose a challenge for achieving deterministic and reproducible results. The AIT network mitigates this by fixing the random seed, ensuring that any pseudo-random number generation during inference leads to the same sequence across all executions. In scenarios where public randomness is necessary, we integrate Verifiable Random Functions (VRFs) that produce randomness that is both unpredictable and provably unbiased.
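The sketch below shows seed pinning in the simplest possible form: with the same seed and inputs, every node draws the same sample. The seed value and libraries are assumptions for illustration; the AIT configuration would pin whatever RNGs its runtime actually uses.

```python
# Illustrative seed pinning so pseudo-random sampling matches across nodes.
import math
import random

FIXED_SEED = 42


def seeded_sample(logits: list[float], seed: int = FIXED_SEED) -> int:
    """Pick a token index from softmax weights using a pinned RNG state."""
    rng = random.Random(seed)                       # identical RNG state on every node
    weights = [math.exp(l) for l in logits]
    return rng.choices(range(len(logits)), weights=weights, k=1)[0]


# With the same seed and logits, every node samples the same index.
assert seeded_sample([1.0, 2.0, 0.5]) == seeded_sample([1.0, 2.0, 0.5])
```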
Before an AIT kernel is approved for storage on the blockchain, it undergoes rigorous validation to ensure compliance with the specified config template and to confirm that it yields consistent results across diverse environments. A suite of tests is run in simulated multi-node scenarios by a Neural Arbiter Network (NAN) to affirm that the kernel’s execution is deterministic and immune to variances in the underlying systems.
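In the spirit of that validation step, a determinism check can be as simple as running the candidate kernel repeatedly (in practice, across distinct simulated node environments) and requiring byte-identical outputs. The kernel callable and trial count below are assumptions for illustration.

```python
# Sketch of a determinism check: repeated runs must yield one unique digest.
import hashlib
import json


def is_deterministic(kernel, test_input: dict, trials: int = 5) -> bool:
    digests = set()
    for _ in range(trials):
        output = kernel(test_input)
        blob = json.dumps(output, sort_keys=True).encode()
        digests.add(hashlib.sha256(blob).hexdigest())
    return len(digests) == 1       # one unique digest => reproducible execution


# A toy kernel that is trivially deterministic:
assert is_deterministic(lambda x: {"echo": x["prompt"]}, {"prompt": "hi"})
```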
The AIT architecture facilitates uniform execution across all nodes on the Nesa network, organizing containerized models that can be seamlessly run by miners executing AI inference jobs. Each container bundles three components (a hypothetical manifest is sketched after this list):

- Model parameters: the weights and biases that define the AI model hosted on-chain. These parameters, containerized within the execution environment, are the product of the training process, and they dictate the model's behavior, purpose, and capabilities.
- Environment specification: functionally similar to a Dockerfile, this file contains the specifications for the virtual environment in which the model will execute, including the dependencies, libraries, and runtime needed on every node to execute with identical configurations.
- Inference and verification scripts: the logic for processing inputs and generating predictions or outputs, along with compilation information. The verification scripts determine how the system aggregates and reaches consensus on results returned from different nodes.
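The sketch below imagines how a manifest might tie these three components together. Every field name and value is an assumption made for the example, not Nesa's on-chain schema.

```python
# Hypothetical container manifest linking parameters, environment, and scripts.
AIT_CONTAINER_MANIFEST = {
    "model_parameters": {                  # trained weights and biases
        "weights_uri": "ipfs://.../weights.safetensors",
        "sha256": "placeholder-digest",
    },
    "environment": {                       # Dockerfile-like execution spec
        "base_image": "example-ait-runtime:1.0",
        "dependencies": ["torch==2.1.0", "numpy==1.26.4"],
        "compile_flags": ["-O2", "-fno-fast-math"],
    },
    "scripts": {                           # inference + verification logic
        "inference_entrypoint": "infer.py",
        "aggregation_rule": "majority-of-output-digests",
    },
}
```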
Start building on Nesa by uploading your model container and bringing a node online.