Development

Building on Nesa Protocol

Nesa is for any application or service that wants to use AI. You can integrate your product with Nesa in less than a minute to have access to over 3,000 AI models.

Explore Integrations
Integrate your smart contract with AI in less than a minute.

Use your own custom AI model on-chain, for the first time

Public Model
Gain exposure from the Nesa community.
Monetize your model and get paid when it is used on Nesa.
Private Model
Use your own proprietary model at a fraction of the cost.
No one else can see or use your model. You designate access.

Latest Video
Models on Nesa

Nesa partners with developers around the world to build and deploy AI models across a spectrum of modalities and use cases. Here are two recent video model uploads.
01
Open Sora


Video AI Model

Open-Sora is an initiative dedicated to efficiently producing high-quality video. By embracing open-source principles, Open-Sora provides access to advanced video generation techniques while offering a streamlined and user-friendly platform that simplifies the complexities of video generation.

02
Open Sora Plan


Video AI Model

Open-Sora-Plan aims to create a simple and scalable repository to reproduce Sora (OpenAI, but we prefer to call it "ClosedAI"). The current code supports complete training and inference on the Huawei Ascend computing system and can produce video quality comparable to the highest industry standards.

Building an App on Nesa

Nesa’s Developer Suite offers a variety of features that make building an AI-powered application breathtakingly simple.
Upload Pipeline
Dedicated pipeline for model parameter submission, AIVM config files, and inference code with documentation.
Containerization
DNA containerization of the model, query template, access to external data stores, AIVM file, config parameters, and inference code (see the manifest sketch after this feature list).
Compute & Storage
GPU and TPU sharing with non-deterministic hardware instructions. Storage of AIVM kernels on IPFS and Arweave.
Evolution CI
End-to-end pipeline for model updates on-chain and version control for AIVM kernels, facilitating evolution while preserving lineage.
Directory
Repository with dev interface for model metadata including authorship, version history and performance benchmarks.
Deployment
Model selection from repos to initiate inference tasks, handle data input, configure execution parameters, and submit requests.
Security Suite
Tools for managing encryption keys and access controls to specify which nodes are authorized to execute models.
Monitoring & Analytics
Real-time monitoring of model performance and statistics tracking the usage activity and patterns of deployed models.
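For illustration, here is a rough sketch of how a model container manifest along these lines could be represented. The type and field names below are assumptions made for this example, not Nesa's actual schema.

```go
package main

import "fmt"

// ModelContainer is a hypothetical, simplified manifest describing what a
// DNA-containerized model upload bundles together (see the Upload Pipeline
// and Containerization features above). Field names are illustrative only.
type ModelContainer struct {
	ModelID        string   // directory identifier, e.g. an author/name pair
	ParamsURI      string   // decentralized storage reference to the model parameters
	AIVMConfigURI  string   // reference to the AIVM config file
	InferenceCode  string   // reference to the packaged inference code
	QueryTemplate  string   // template describing expected query inputs and outputs
	ExternalStores []string // optional references to bridged external data stores
	Version        string   // version tag used for lineage tracking
}

func main() {
	// Example manifest for a hypothetical video model upload.
	c := ModelContainer{
		ModelID:       "example/open-video-model",
		ParamsURI:     "ipfs://<params-cid>",
		AIVMConfigURI: "ipfs://<config-cid>",
		InferenceCode: "ipfs://<code-cid>",
		QueryTemplate: "prompt:text -> video:mp4",
		Version:       "v0.1.0",
	}
	fmt.Printf("%+v\n", c)
}
```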
01
Interoperable

Interoperable

Nesa's AI Link™ is the protocol that enables the AIT to interoperate with different blockchain networks, transferring models, data, parameters, and computational tasks cross-chain.

learn more
02
Minimal Code

Minimal Code

A minimal code interface for developers who want factory inference query settings and a turnkey deployment setup for small models or models that evolve slowly (a rough sketch follows below).

learn more
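To make "minimal code" concrete, the snippet below sketches what a turnkey inference request could look like over a plain JSON gateway. The endpoint, payload shape, and model name are hypothetical placeholders, not Nesa's published SDK or API.

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

func main() {
	// Hypothetical minimal inference request: model chosen from the directory,
	// input passed with factory (default) query settings.
	payload, _ := json.Marshal(map[string]string{
		"model": "example/open-video-model",
		"input": "a timelapse of a city at night",
	})

	// Placeholder gateway URL; not Nesa's actual endpoint.
	resp, err := http.Post("https://gateway.example/v1/infer", "application/json", bytes.NewReader(payload))
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.Status)
}
```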
03
Customizable

Customizable

A richly customizable AIVM execution process that prescribes the steps every node must follow, including the initialization procedure, data input convention, model execution, and output handling (sketched below).

learn more
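As an illustration of this customization surface, the sketch below models the prescribed steps as a simple interface. The type and method names are assumptions made for this example, not the actual AIVM execution API.

```go
package main

import "fmt"

// ExecutionHooks is a hypothetical interface mirroring the customizable steps
// described above: initialization, data input convention, model execution,
// and output handling. Illustrative only.
type ExecutionHooks interface {
	Initialize(config map[string]string) error
	PrepareInput(raw []byte) ([]byte, error)
	Execute(input []byte) ([]byte, error)
	HandleOutput(output []byte) error
}

// runTask drives any implementation through the prescribed order of steps.
func runTask(h ExecutionHooks, raw []byte) error {
	if err := h.Initialize(map[string]string{"precision": "fp16"}); err != nil {
		return err
	}
	input, err := h.PrepareInput(raw)
	if err != nil {
		return err
	}
	output, err := h.Execute(input)
	if err != nil {
		return err
	}
	return h.HandleOutput(output)
}

// echoModel is a trivial stand-in implementation used only to show the flow.
type echoModel struct{}

func (echoModel) Initialize(map[string]string) error      { return nil }
func (echoModel) PrepareInput(raw []byte) ([]byte, error) { return raw, nil }
func (echoModel) Execute(input []byte) ([]byte, error)    { return input, nil }
func (echoModel) HandleOutput(output []byte) error {
	fmt.Println("output:", string(output))
	return nil
}

func main() {
	_ = runTask(echoModel{}, []byte("example query input"))
}
```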

Frequently asked questions

01 How does Nesa scale?
Nesa scales large-model inference by performing it off-chain, so the system can benefit from the efficiency of nodes using advanced hardware and software optimization techniques that are not possible in smart contracts. This off-chain inference process is secured by our proprietary commit-and-reveal cryptography and is further managed through dynamic hardware spec mandates for trusted nodes in the system. This safely increases Nesa's query throughput as more transaction requests are made and more nodes join the network.
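For intuition, the commit-and-reveal idea can be sketched in a few lines: a node first publishes a hash that binds it to its inference result, then later reveals the result and nonce so anyone can check the commitment. This is a generic simplification with placeholder values, not Nesa's actual cryptographic protocol.

```go
package main

import (
	"crypto/rand"
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// commit binds a node to its inference result without revealing it:
// only the hash goes on-chain during the commit phase.
func commit(result, nonce []byte) string {
	h := sha256.Sum256(append(result, nonce...))
	return hex.EncodeToString(h[:])
}

func main() {
	result := []byte("inference output for query #42") // placeholder result
	nonce := make([]byte, 16)
	rand.Read(nonce) // random blinding nonce

	// Commit phase: publish only the hash.
	c := commit(result, nonce)
	fmt.Println("commitment:", c)

	// Reveal phase: disclose result and nonce; verifiers recompute the hash
	// and compare it with the earlier commitment.
	fmt.Println("verified:", commit(result, nonce) == c)
}
```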
02 What is Nesa's architecture?
Nesa is designed to deconstruct the model evolution and querying process and compartmentalize each component. Information and instructions integral to the inference query are containerized on-chain, heavy datastores are bridged off-chain, AI models are hosted in decentralized storage, and query responses are reported back to the chain from trusted nodes via Nesa oracles posting transaction data.
03 What programming languages and VMs are supported by Nesa?
Because of Nesa’s modular architecture, it can support any programming language or VM. Currently supported languages include Solidity (EVM), Rust & Golang (Cosmos SDK).
04 How do I run a node on Nesa?
Nesa supports multiple testnets that users can run nodes on in preparation for mainnet. Information on running testnet nodes is available in our docs.
05 Where can developers get started?
Developers can head to the docs to get started with building on Nesa.
06 How are queries paid for?
Nesa is designed around a native token that secures the network via Proof of Stake and pays transaction fees for inference queries on the network, which are submitted as PayForQuery transactions.
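As a rough mental model only, a PayForQuery transaction could carry fields along the following lines; every name here is an assumption made for illustration, not the actual message definition.

```go
package main

import "fmt"

// MsgPayForQuery is a hypothetical, simplified shape for a PayForQuery
// transaction; the real message definition may differ.
type MsgPayForQuery struct {
	Sender    string // account paying the inference fee
	ModelID   string // model selected from the directory
	QueryData []byte // containerized query input
	FeeNES    uint64 // fee denominated in the network token
}

func main() {
	msg := MsgPayForQuery{
		Sender:    "nesa1exampleaddress", // placeholder account address
		ModelID:   "example/open-video-model",
		QueryData: []byte("a timelapse of a city at night"),
		FeeNES:    100,
	}
	fmt.Printf("%+v\n", msg)
}
```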

Build on Nesa as a developer; stake NES to mine.

Start building on Nesa by uploading your model container and bringing a node online.