Original title: Biteye & PANews jointly release AI Layer 1 research report: Finding fertile ground for DeAI on the chain
Original author: @anci_hu49074 (Biteye), @Jesse_meta (Biteye), @lviswang (Biteye), @0xjacobzhao (Biteye), @bz1022911 (PANews)
Overview
Background
In recent years, leading technology companies such as OpenAI, Anthropic, Google, and Meta have driven the rapid development of large language models (LLMs). LLMs have demonstrated unprecedented capabilities across industries, greatly expanded the space of what seems possible, and in some scenarios even shown the potential to replace human labor. However, the core of these technologies remains firmly in the hands of a few centralized technology giants. With strong capital and control over expensive computing resources, these companies have built barriers that are difficult to cross, making it hard for most developers and innovation teams to compete with them.
Source: BONDAI Trend Analysis Report
At the same time, in the early stages of AI's rapid evolution, public opinion tends to focus on the breakthroughs and conveniences the technology brings, while core issues such as privacy protection, transparency, and security receive relatively little attention. In the long run, these issues will profoundly affect the healthy development of the AI industry and its social acceptance. If they are not properly resolved, the debate over whether AI is used "for good" or "for evil" will become increasingly prominent, and centralized giants, driven by profit-seeking instincts, often lack sufficient motivation to address these challenges proactively.
Blockchain technology, with its decentralized, transparent, and censorship-resistant characteristics, offers new possibilities for the sustainable development of the AI industry. At present, many "Web3 AI" applications have emerged on mainstream blockchains such as Solana and Base. However, a closer analysis shows that these projects still have many problems: on the one hand, the degree of decentralization is limited, key links and infrastructure still rely on centralized cloud services, and heavy meme attributes make it hard to support a truly open ecosystem; on the other hand, compared with AI products in the Web2 world, on-chain AI remains limited in model capabilities, data utilization, and application scenarios, and the depth and breadth of innovation need to improve.
To truly realize the vision of decentralized AI, enable blockchain to safely, efficiently, and democratically carry large-scale AI applications, and compete with centralized solutions in terms of performance, we need to design a Layer 1 blockchain tailored for AI. This will provide a solid foundation for open innovation, democratic governance, and data security in AI, and promote the prosperity and development of the decentralized AI ecosystem.
Core Features of AI Layer 1
As a blockchain tailored for AI applications, AI Layer 1's underlying architecture and performance design are closely centered around the needs of AI tasks, aiming to efficiently support the sustainable development and prosperity of the on-chain AI ecosystem. Specifically, AI Layer 1 should have the following core capabilities:
Efficient incentive and decentralized consensus mechanism
The core of AI Layer 1 is to build an open shared network of computing power, storage and other resources. Unlike traditional blockchain nodes that mainly focus on bookkeeping, AI Layer 1 nodes need to undertake more complex tasks. They not only need to provide computing power and complete the training and reasoning of AI models, but also need to contribute diversified resources such as storage, data, and bandwidth, thereby breaking the monopoly of centralized giants on AI infrastructure. This puts higher demands on the underlying consensus and incentive mechanisms: AI Layer 1 must be able to accurately evaluate, incentivize and verify the actual contributions of nodes in AI reasoning, training and other tasks, and achieve network security and efficient allocation of resources. Only in this way can the stability and prosperity of the network be guaranteed, and the overall computing power cost can be effectively reduced.
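As a toy illustration of the kind of contribution accounting such an incentive mechanism implies (not any specific project's design), the sketch below splits a fixed epoch emission across nodes in proportion to weighted contributions of compute, storage, and bandwidth; the resource categories, weights, and reward rule are all assumptions.

```python
# Illustrative sketch only: a toy contribution-weighted reward split across
# heterogeneous resources. Resource names, weights and the reward rule are
# hypothetical, not any specific AI Layer 1 design.
from dataclasses import dataclass

@dataclass
class NodeReport:
    node_id: str
    compute_units: float   # e.g. verified GPU-hours for training/inference
    storage_gb: float      # data/model storage served
    bandwidth_gb: float    # data transferred for the network

# Hypothetical relative weights for each resource type.
WEIGHTS = {"compute_units": 1.0, "storage_gb": 0.1, "bandwidth_gb": 0.05}

def epoch_rewards(reports: list[NodeReport], epoch_emission: float) -> dict[str, float]:
    """Split a fixed epoch emission proportionally to weighted contributions."""
    scores = {
        r.node_id: r.compute_units * WEIGHTS["compute_units"]
        + r.storage_gb * WEIGHTS["storage_gb"]
        + r.bandwidth_gb * WEIGHTS["bandwidth_gb"]
        for r in reports
    }
    total = sum(scores.values()) or 1.0
    return {nid: epoch_emission * s / total for nid, s in scores.items()}

print(epoch_rewards(
    [NodeReport("gpu-node", 10, 0, 50), NodeReport("storage-node", 0, 2000, 5)],
    epoch_emission=1000.0,
))
```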
Excellent high performance and heterogeneous task support capabilities
AI tasks, especially LLM training and reasoning, place extremely high demands on computing performance and parallel processing capabilities. Furthermore, the on-chain AI ecosystem often needs to support diverse and heterogeneous task types, including different model structures, data processing, reasoning, storage and other scenarios. AI Layer 1 must be deeply optimized on the underlying architecture for high throughput, low latency and elastic parallelism, and preset native support capabilities for heterogeneous computing resources to ensure that various AI tasks can run efficiently and achieve smooth expansion from "single-type tasks" to "complex and diverse ecosystems."
Verifiability and trusted output assurance
AI Layer 1 must not only prevent security risks such as model abuse and data tampering, but also ensure the verifiability and alignment of AI output results from the underlying mechanism. By integrating cutting-edge technologies such as trusted execution environment (TEE), zero-knowledge proof (ZK), and multi-party secure computing (MPC), the platform can ensure that each model reasoning, training, and data processing process can be independently verified to ensure the fairness and transparency of the AI system. At the same time, this verifiability can also help users clarify the logic and basis of AI output, achieve "what you get is what you want", and enhance users' trust and satisfaction with AI products.
Data privacy protection
AI applications often involve sensitive user data. In the fields of finance, medical care, social networking, etc., data privacy protection is particularly critical. AI Layer 1 should ensure verifiability while using encryption-based data processing technology, privacy computing protocols, and data rights management to ensure data security throughout the entire process of reasoning, training, and storage, effectively prevent data leakage and abuse, and eliminate users' worries about data security.
Strong ecological carrying and development support capabilities
As an AI-native Layer 1 infrastructure, the platform must not only be technologically advanced, but also provide comprehensive development tools, integrated SDKs, operation and maintenance support, and incentive mechanisms for ecosystem participants such as developers, node operators, and AI service providers. By continuously optimizing platform availability and developer experience, we will promote the implementation of rich and diverse AI-native applications and achieve the continued prosperity of the decentralized AI ecosystem.
Based on the above background and expectations, this article introduces six representative AI Layer 1 projects, Sentient, Sahara AI, Ritual, Gensyn, Bittensor, and 0G, systematically reviews the latest progress of the track, analyzes the current state of each project, and explores future trends.
Sentient: Building a loyal open source decentralized AI model
Project Overview
Sentient is an open source protocol platform that is building an AI Layer 1 blockchain (starting as a Layer 2 and migrating to Layer 1 later). By combining an AI pipeline with blockchain technology, it is building a decentralized artificial intelligence economy. Its core goal is to use the "OML" framework (Open, Monetizable, Loyal) to solve the problems of model ownership, call tracking, and value distribution in the centralized LLM market, giving AI models an on-chain ownership structure, call transparency, and value sharing. Sentient's vision is to enable anyone to build, collaborate on, own, and monetize AI products, thereby fostering a fair and open AI Agent network ecosystem.
The Sentient Foundation team brings together top academic experts, blockchain entrepreneurs, and engineers, and is committed to building a community-driven, open-source, and verifiable AGI platform. Core members include Princeton University professor Pramod Viswanath and Indian Institute of Science professor Himanshu Tyagi, who are responsible for AI security and privacy protection respectively, while Polygon co-founder Sandeep Nailwal leads the blockchain strategy and ecosystem development. Team members come from well-known companies such as Meta, Coinbase, and Polygon, and from top universities such as Princeton University and the Indian Institute of Technology, covering fields including AI/ML, NLP, and computer vision.
As the second venture of Polygon co-founder Sandeep Nailwal, Sentient has carried a halo since inception, with abundant resources, connections, and market recognition providing strong backing for the project. In mid-2024, Sentient completed a seed round of US$85 million, led by Founders Fund, Pantera, and Framework Ventures, with participation from Delphi, Hashkey, Spartan, and dozens of other well-known VCs.
Design architecture and application layer
1. Infrastructure layer
Core Architecture
Sentient's core architecture consists of two parts: AI Pipeline and blockchain system:
The AI pipeline is the foundation for developing and training “loyal AI” artifacts and consists of two core processes:
Data Curation: Community-driven data selection process for model alignment.
Loyalty Training: The training process that ensures the model remains aligned with the community’s intent.
The blockchain system provides transparency and decentralized control for the protocol, ensuring ownership, usage tracking, revenue distribution, and fair governance of AI artifacts. The specific architecture is divided into four layers:
Storage layer: stores model weights and fingerprint registration information;
Distribution layer: authorization contract controls the model calling entry;
Access layer: Verify user authorization through proof of authority;
Incentive layer: The revenue router contract distributes the per-call payment to trainers, deployers, and validators.
Sentient system workflow diagram
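As a rough illustration of what the incentive layer's "revenue router" describes, the following sketch splits a per-call fee among trainers, the deployer, and validators. The split ratios and the equal sharing within each role are assumptions for illustration, not Sentient's actual contract logic.

```python
# Minimal sketch of a per-call revenue router. The roles come from the text;
# the split ratios are hypothetical.
SPLIT = {"trainers": 0.6, "deployer": 0.3, "validators": 0.1}

def route_call_fee(fee: float, trainers: list[str], deployer: str,
                   validators: list[str]) -> dict[str, float]:
    payouts: dict[str, float] = {}
    for t in trainers:                      # trainers share their bucket equally
        payouts[t] = payouts.get(t, 0.0) + fee * SPLIT["trainers"] / len(trainers)
    payouts[deployer] = payouts.get(deployer, 0.0) + fee * SPLIT["deployer"]
    for v in validators:                    # validators share their bucket equally
        payouts[v] = payouts.get(v, 0.0) + fee * SPLIT["validators"] / len(validators)
    return payouts

print(route_call_fee(1.0, ["alice", "bob"], "carol", ["dave"]))
```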
OML Model Framework
The OML framework (Open, Monetizable, Loyal) is the core concept proposed by Sentient, which aims to provide clear ownership protection and economic incentives for open source AI models. By combining on-chain technology and AI native cryptography, it has the following characteristics:
Openness: The model must be open source, with transparent code and data structure to facilitate community reproduction, auditing, and improvement.
Monetization: Each model call triggers a revenue stream, and the on-chain contract distributes the revenue to trainers, deployers, and validators.
Loyalty: The model belongs to the contributor community, upgrade direction and governance are determined by the DAO, and usage and modification are controlled by encryption mechanisms.
AI-native Cryptography
AI-native cryptography exploits the continuity, low-dimensional manifold structure, and differentiability of AI models to build a lightweight security mechanism that is "verifiable but not removable." Its core techniques are:
Fingerprint embedding: insert a set of hidden query-response key-value pairs during training to form a unique signature of the model;
Ownership verification protocol: A third-party detector (Prover) verifies whether the fingerprint is retained in the form of a query;
Permissioned call mechanism: Before calling, you need to obtain a "permission certificate" issued by the model owner, and the system will then authorize the model to decode the input and return the correct answer.
This method can achieve "behavior-based authorization call + ownership verification" without the cost of re-encryption.
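The fingerprinting idea can be illustrated with a toy sketch: hidden query-response pairs are mixed into the training data, and a prover later checks whether a suspect model still reproduces them. The dictionary "model", pair count, and detection threshold below are stand-ins, not Sentient's implementation.

```python
# Toy sketch of query-response fingerprinting. A dict stands in for an LLM;
# thresholds and pair counts are assumptions for illustration.
import random, string

def make_fingerprints(n: int = 8) -> dict[str, str]:
    """Generate secret (query, response) pairs that act as the model's signature."""
    rand = lambda k: "".join(random.choices(string.ascii_lowercase, k=k))
    return {f"fp-query-{rand(12)}": f"fp-response-{rand(12)}" for _ in range(n)}

def train_with_fingerprints(base_data: dict[str, str],
                            fingerprints: dict[str, str]) -> dict[str, str]:
    """Stand-in for fine-tuning: the 'model' memorises normal data plus fingerprints."""
    return {**base_data, **fingerprints}

def verify_ownership(model: dict[str, str], fingerprints: dict[str, str],
                     threshold: float = 0.9) -> bool:
    """Prover queries the suspect model and checks how many fingerprints survive."""
    hits = sum(1 for q, a in fingerprints.items() if model.get(q) == a)
    return hits / len(fingerprints) >= threshold

fps = make_fingerprints()
model = train_with_fingerprints({"hello": "world"}, fps)
print(verify_ownership(model, fps))                # True: fingerprints retained
print(verify_ownership({"hello": "world"}, fps))   # False: an unrelated model
```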
Model ownership confirmation and security execution framework
Sentient currently uses Melange, a hybrid security framework combining fingerprint-based ownership confirmation, TEE execution, and on-chain contract profit sharing. The fingerprint method is the main line of OML 1.0 implementation and emphasizes "optimistic security": compliance is assumed by default, with detection and punishment after violations.
The fingerprint mechanism is a key implementation of OML. It embeds specific "question-answer" pairs to allow the model to generate unique signatures during the training phase. Through these signatures, model owners can verify attribution and prevent unauthorized copying and commercialization. This mechanism not only protects the rights and interests of model developers, but also provides a traceable on-chain record of model usage behavior.
In addition, Sentient launched the Enclave TEE computing framework, which uses a trusted execution environment (such as AWS Nitro Enclaves) to ensure that the model only responds to authorized requests and prevents unauthorized access and use. Although TEE relies on hardware and has certain security risks, its high performance and real-time advantages make it the core technology for current model deployment.
In the future, Sentient plans to introduce zero-knowledge proof (ZK) and fully homomorphic encryption (FHE) technologies to further enhance privacy protection and verifiability, and provide a more mature solution for the decentralized deployment of AI models.
OML proposes an evaluation and comparison of five verifiability methods
2. Application layer
Currently, Sentient's products mainly include the decentralized chat platform Sentient Chat, the open-source Dobby model series, and its AI Agent frameworks.
Dobby Series Models
SentientAGI has released several "Dobby" series models, mainly based on the Llama model, emphasizing the values of freedom, decentralization, and support for cryptocurrency. Among them, the leashed version is more restrained and rational, suitable for scenarios requiring stable output, while the unhinged version is freer and bolder, with a richer conversational style. The Dobby models have been integrated into multiple Web3-native projects, such as Fireworks AI and Olas, and users can also call these models directly in Sentient Chat. Dobby 70B is claimed to be the most decentralized model to date, with more than 600,000 owners (holders of Dobby fingerprint NFTs are co-owners of the model).
Sentient also plans to launch Open Deep Search, a search agent system that attempts to rival ChatGPT and Perplexity Pro. The system combines Sentient's search capabilities (such as query restatement and document processing) with reasoning agents, using open-source LLMs (such as Llama 3.1 and DeepSeek) to improve search quality. On the FRAMES benchmark, its performance surpasses other open-source models and even approaches some closed-source models, showing strong potential.
Sentient Chat: Decentralized chat and on-chain AI Agent integration
Sentient Chat is a decentralized chat platform that combines open-source large language models (such as the Dobby series) with an advanced reasoning agent framework, supporting multi-agent integration and complex task execution. The reasoning agents embedded in the platform can complete complex tasks such as search, calculation, and code execution, providing users with an efficient interactive experience. Sentient Chat also supports direct integration of on-chain agents, currently including the astrology agent Astro247, the crypto analysis agent QuillCheck, the wallet analysis agent Pond Base Wallet Summary, and the spiritual guidance agent ChiefRaiin. Users can choose different agents to interact with according to their needs. Sentient Chat serves as a distribution and coordination platform for these agents: a user's question can be routed to any integrated model or agent to produce the best response.
AI Agent Framework
Sentient provides two major AI Agent frameworks:
Sentient Agent Framework: A lightweight open source framework that focuses on automating web tasks (such as searching and playing videos) through natural language instructions. The framework supports the construction of intelligent agents with perception, planning, execution, and feedback loops, and is suitable for lightweight development of off-chain web tasks.
Sentient Social Agent: An AI system developed for social platforms such as Twitter, Discord, and Telegram that supports automated interactions and content generation. Through multi-agent collaboration, the framework can understand the social environment and provide users with a more intelligent social experience. It can also be integrated with the Sentient Agent Framework to further expand its application scenarios.
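To make the perception-planning-execution-feedback loop concrete, here is a generic, illustrative agent skeleton; the class and method names are hypothetical and do not reflect the real APIs of the Sentient Agent Framework or Sentient Social Agent.

```python
# Generic sketch of a perception -> planning -> execution -> feedback loop.
# Names and the fixed toy plan are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class SimpleAgent:
    goal: str
    memory: list[str] = field(default_factory=list)

    def perceive(self, observation: str) -> None:
        self.memory.append(f"observed: {observation}")

    def plan(self) -> list[str]:
        # A real agent would call an LLM here; we return a fixed toy plan.
        return [f"search the web for '{self.goal}'", "summarise the top result"]

    def execute(self, step: str) -> str:
        return f"executed: {step}"          # placeholder for a tool call

    def feedback(self, result: str) -> None:
        self.memory.append(result)          # results feed the next planning round

    def run(self, observation: str) -> list[str]:
        self.perceive(observation)
        results = [self.execute(step) for step in self.plan()]
        for r in results:
            self.feedback(r)
        return results

print(SimpleAgent(goal="play a video about AI Layer 1").run("user request in natural language"))
```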
Ecosystem and Participation Methods
The Sentient Builder Program currently offers a $1 million grant plan to encourage developers to use its development kit to build AI agents that are accessed through the Sentient Agent API and run within the Sentient Chat ecosystem. The ecosystem partners announced on Sentient's official website cover multiple fields of Crypto AI, as shown in the ecosystem map below.
Sentient Ecosystem Map
In addition, Sentient Chat is still in a testing phase and requires an invitation code for whitelist access; ordinary users can join the waitlist. According to official information, the platform has more than 50,000 users and 1,000,000 query records, while 2,000,000 users are waiting on Sentient Chat's waitlist.
Challenges and prospects
Sentient starts from the model side and is committed to solving the core problems of misalignment and untrustworthiness faced by current large-scale language models (LLMs). Through the OML framework and blockchain technology, it provides the model with a clear ownership structure, usage tracking, and behavior constraints, which greatly promotes the development of decentralized open source models.
With the resource support of Polygon co-founder Sandeep Nailwal and the endorsement of top VCs and industry partners, Sentient is in a leading position in resource integration and market attention. However, as the market grows increasingly sober about highly valued projects, whether Sentient can deliver truly influential decentralized AI products will be an important test of whether it can become the standard for decentralized AI ownership. These efforts matter not only for Sentient's own success but also for rebuilding trust and advancing decentralized development across the industry.
Sahara AI: Creating a decentralized AI world where everyone can participate
Project Overview
Sahara AI is a decentralized infrastructure built for the new AI × Web3 paradigm, dedicated to creating an open, fair, and collaborative AI economy. The project uses decentralized ledger technology to manage and trade datasets, models, and agents on-chain, ensuring the sovereignty and traceability of data and models. At the same time, Sahara AI introduces a transparent and fair incentive mechanism so that all contributors, including data providers, annotators, and model developers, receive tamper-proof returns for their work. The platform also protects contributors' ownership and control of AI assets through a permissionless "copyright" system, and encourages open sharing and innovation.
Sahara AI provides a one-stop solution from data collection, labeling to model training, AI Agent creation, AI asset trading and other services, covering the entire AI life cycle, and becoming a comprehensive ecological platform that meets the needs of AI development. Its product quality and technical capabilities have been highly recognized by top global companies and institutions such as Microsoft, Amazon, Massachusetts Institute of Technology (MIT), Motherson Group and Snap, demonstrating strong industry influence and wide applicability.
Sahara is not just a research project but a deployment-oriented deep-tech platform driven jointly by front-line technology entrepreneurs and investors, and its core architecture could become a key pillar for bringing AI × Web3 applications to production. Sahara AI has received a total of US$43 million in investment from leading institutions such as Pantera Capital, Binance Labs, and Sequoia China. It was co-founded by Sean Ren, a tenured professor at the University of Southern California and a 2023 Samsung researcher, and Tyler Zhou, former investment director of Binance Labs. The core team members come from top institutions such as Stanford University, UC Berkeley, Microsoft, Google, and Binance, combining deep experience from academia and industry.
Design Architecture
Sahara AI architecture diagram
1. Base layer
Sahara AI's base layer consists of an on-chain layer, used to register and monetize AI assets, and an off-chain layer, used to run Agents and AI services. Together these two systems handle the registration, ownership confirmation, execution, and revenue distribution of AI assets, supporting trusted collaboration across the entire AI life cycle.
Sahara blockchain and SIWA testnet (on-chain infrastructure)
SIWA testnet is the first public version of Sahara blockchain. Sahara Blockchain Protocol (SBP) is the core of Sahara blockchain, a smart contract system built specifically for AI, which realizes the on-chain ownership, traceability and income distribution of AI assets. The core modules include asset registration system, ownership agreement, contribution tracking, authority management, income distribution, execution proof, etc., to build an "on-chain operating system" for AI.
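Conceptually, such an "on-chain operating system" tracks, for each AI asset, its owner, authorized users, and usage records. The sketch below is an in-memory stand-in for that registry; the field names and methods are illustrative assumptions, not the actual SBP contracts.

```python
# In-memory stand-in for an on-chain AI asset registry. Everything here is
# illustrative; SBP's real contracts are not reproduced.
from dataclasses import dataclass, field

@dataclass
class AIAsset:
    asset_id: str
    kind: str                                             # "dataset", "model" or "agent"
    owner: str
    licensees: set[str] = field(default_factory=set)
    usage_log: list[str] = field(default_factory=list)    # contribution/usage tracking

class AssetRegistry:
    def __init__(self) -> None:
        self.assets: dict[str, AIAsset] = {}

    def register(self, asset: AIAsset) -> None:
        self.assets[asset.asset_id] = asset                # asset registration + ownership record

    def authorize(self, asset_id: str, licensee: str, caller: str) -> None:
        asset = self.assets[asset_id]
        assert caller == asset.owner, "only the owner can grant access"   # authority management
        asset.licensees.add(licensee)

    def record_usage(self, asset_id: str, user: str, proof: str) -> None:
        asset = self.assets[asset_id]
        assert user in asset.licensees, "unauthorized use"
        asset.usage_log.append(f"{user}:{proof}")          # execution proof / traceability

reg = AssetRegistry()
reg.register(AIAsset("dataset-42", "dataset", owner="alice"))
reg.authorize("dataset-42", licensee="model-lab", caller="alice")
reg.record_usage("dataset-42", user="model-lab", proof="training-run-0x01")
print(reg.assets["dataset-42"].usage_log)
```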
AI Execution Protocol (Off-Chain Infrastructure)
To support the credibility of model operation and call, Sahara has also built an off-chain AI execution protocol system, combined with a trusted execution environment (TEE), to support Agent creation, deployment, operation and collaborative development. Each task execution automatically generates a verifiable record and uploads it to the chain to ensure that the entire process is traceable and verifiable. The on-chain system is responsible for registration, authorization and ownership records, while the off-chain AI execution protocol supports the real-time operation and service interaction of AI Agents. Since Sahara is cross-chain compatible, applications built on Sahara AI's infrastructure can be deployed on any chain, even off-chain.
2. Application layer
Sahara AI Data Service Platform (DSP)
The Data Service Platform (DSP) is the basic module of the Sahara application layer. Anyone can accept data tasks through a Sahara ID, participate in data labeling, denoising, and auditing, and earn on-chain points (Sahara Points) as proof of contribution. This mechanism guarantees data traceability and ownership while creating a closed loop of "contribution, reward, model optimization." The platform is currently in its fourth season of tasks, and this is also the main way for ordinary users to contribute.
On this basis, to encourage users to submit high-quality data and services, Sahara introduces a dual incentive mechanism: users receive rewards from Sahara and can also earn additional returns from ecosystem partners, so a single contribution can generate multiple streams of benefit. For example, once a contributor's data is repeatedly called by a model or used to build new applications, the contributor continues to earn from it, genuinely participating in the AI value chain. This mechanism extends the life cycle of data assets and injects strong momentum into collaborative building. For instance, MyShell on BNB Chain generates customized datasets through DSP crowdsourcing to improve model performance, and contributing users receive MyShell token incentives, forming a win-win loop.
AI companies can crowdsource customized data sets based on data service platforms and quickly get responses from data annotators around the world by publishing specialized data tasks. AI companies no longer need to rely solely on traditional centralized data suppliers to obtain high-quality annotated data on a large scale.
Sahara AI Developer Platform
Sahara AI Developer Platform is a one-stop AI building and operation platform for developers and enterprises, providing full-process support from data acquisition, model training to deployment execution and asset realization. Users can directly call high-quality data resources in Sahara DSP and use them for model training and fine-tuning; processed models can be combined, registered and listed on the AI market within the platform, and ownership confirmation and flexible authorization can be realized through the Sahara blockchain.
The Developer Platform also integrates decentralized computing capabilities to support model training and Agent deployment and operation, ensuring that the computing process is secure and verifiable. Developers can also store key data and models with encryption and permission controls to prevent unauthorized access. Through the Sahara AI Developer Platform, developers can build, deploy, and commercialize AI applications at a lower threshold without building their own infrastructure, and plug fully into the on-chain AI economy through the protocol's mechanisms.
AI Marketplace
Sahara AI Marketplace is a decentralized asset market for models, datasets, and AI Agents. It not only supports the registration, trading, and authorization of assets, but also builds a transparent and traceable revenue distribution mechanism. Developers can register their own models or collected datasets as on-chain assets, set flexible usage authorizations and profit sharing ratios, and the system will automatically perform revenue settlement based on the frequency of calls. Data contributors can also continue to receive profits due to repeated calls to their data, realizing "continuous monetization."
This market is deeply integrated with the Sahara blockchain protocol, and all asset transactions, calls, and profit sharing records will be verifiable on the chain to ensure clear asset ownership and traceable income. With this market, AI developers no longer rely on traditional API platforms or centralized model hosting services, but have an independent and programmable commercialization path.
3. Ecosystem layer
Sahara AI's ecosystem connects data providers, AI developers, consumers, enterprise users, and cross-chain partners. Whether you want to contribute data, develop applications, use products, or promote AI within your company, you can play a role and find a revenue model. Data annotators, model development teams, and computing power providers can register their resources as on-chain assets, and authorize and share profits through Sahara AI's protocol mechanism, so that every resource used can automatically receive a reward. Developers can connect data, train models, and deploy agents through a one-stop platform, and directly commercialize their results in the AI Marketplace.
Ordinary users do not need a technical background to participate in data tasks, use AI apps, collect or invest in on-chain assets, and become part of the AI economy. For enterprises, Sahara provides full-process support from data crowdsourcing, model development to private deployment and revenue realization. In addition, Sahara supports cross-chain deployment. Any public chain ecosystem can use the protocols and tools provided by Sahara AI to build AI applications, access decentralized AI assets, and achieve compatibility and expansion with the multi-chain world. This makes Sahara AI not just a single platform, but also an underlying collaboration standard for the on-chain AI ecosystem.
Ecosystem Progress
Since the project was launched, Sahara AI has not only provided a set of AI tools or computing power platforms, but also reconstructed the production and distribution order of AI on the chain, creating a decentralized collaborative network where everyone can participate, confirm ownership, contribute and share. For this reason, Sahara chose blockchain as the underlying architecture to build a verifiable, traceable and distributable economic system for AI.
Around this core goal, the Sahara ecosystem has made significant progress. While still in the private beta stage, the platform has generated more than 3.2 million on-chain accounts, and the number of daily active accounts has stabilized at more than 1.4 million, demonstrating user engagement and network vitality. Among them, more than 200,000 users have participated in data labeling, training, and verification tasks through the Sahara data service platform, and received on-chain incentive rewards. At the same time, there are still millions of users waiting to join the whitelist, which confirms the market's strong demand and consensus for decentralized AI platforms.
In terms of corporate cooperation, Sahara has established cooperation with leading global institutions such as Microsoft, Amazon, and Massachusetts Institute of Technology (MIT) to provide customized data collection and annotation services. Enterprises can submit specific tasks through the platform, and Sahara's global network of data annotators will efficiently execute them, realizing large-scale crowdsourcing, execution efficiency, flexibility, and diversified demand support.
Sahara AI Ecosystem Map
How to participate
SIWA will be launched in four phases. The first phase, currently live, lays the foundation for on-chain data ownership: contributors can register and tokenize their own datasets. It is open to the public and does not require a whitelist, but uploaded data must be useful for AI; plagiarized or inappropriate content may be penalized. The second phase enables on-chain monetization of datasets and models. The third phase opens the testnet and open-sources the protocols. The fourth phase launches AI data flow registration, provenance tracking, and contribution proof mechanisms.
SIWA Testnet
In addition to the SIWA testnet, ordinary users can now participate in Sahara Legends and learn about the functions of Sahara AI through gamified tasks. After completing the tasks, they will receive guardian fragments, and finally synthesize an NFT to record their contribution to the network.
Users can also annotate data on the Data Service Platform, contribute valuable data, or serve as auditors. Sahara plans to release tasks jointly with ecosystem partners so that participants earn partner incentives in addition to Sahara Points; the first double-reward task was held with MyShell, where users who completed the task received both Sahara Points and MyShell token rewards. According to the roadmap, Sahara expects to launch its mainnet in Q3 2025, and the TGE may take place around the same time.
Challenges and prospects
Sahara AI makes AI no longer limited to developers or large AI companies, making AI more open, inclusive and democratic. For ordinary users, no programming knowledge is required to participate in contributions and gain benefits. Sahara AI creates a decentralized AI world that everyone can participate in. For technical developers, Sahara AI opens up the development path of Web2 and Web3, providing decentralized but flexible and powerful development tools and high-quality data sets.
For AI infrastructure providers, Sahara AI offers a new path to decentralized monetization of models, data, computing power, and services. Sahara AI provides not only public-chain infrastructure but also core applications, using blockchain technology to advance an AI copyright system. At this stage, Sahara AI has reached cooperation agreements with many top AI institutions and achieved initial success. Whether it will succeed longer term should be judged after the mainnet launch, by the development and adoption of ecosystem products and by whether the economic model can keep users contributing data after the TGE.
Ritual: Innovative design breaks through core AI challenges such as heterogeneous tasks
Project Overview
Ritual aims to solve the centralization, closedness and trust issues in the current AI industry, providing AI with a transparent verification mechanism, fair computing resource allocation and flexible model adaptation capabilities; allowing any protocol, application or smart contract to integrate a verifiable AI model in the form of a few lines of code; and through its open architecture and modular design, promote the widespread application of AI on the chain and create an open, secure and sustainable AI ecosystem.
Ritual completed a $25 million Series A financing in November 2023, led by Archetype, with participation from Accomplice and other institutions as well as well-known angel investors, reflecting market recognition and the team's strong network of connections. Founders Niraj Pant and Akilesh Potti are both former partners at Polychain Capital, where they led investments in industry heavyweights such as Offchain Labs and EigenLayer, demonstrating deep insight and judgment. The team has extensive experience in cryptography, distributed systems, and AI, and its advisors include founders of projects such as NEAR and EigenLayer, underlining its strong background and potential.
Design Architecture
From Infernet to Ritual Chain
Ritual Chain is the second-generation product that evolved naturally from the Infernet node network, representing a comprehensive upgrade of Ritual's decentralized AI computing network. Infernet, Ritual's first-phase product, was officially launched in 2023. It is a decentralized oracle network designed for heterogeneous computing tasks, intended to overcome the limitations of centralized APIs and let developers call transparent, open, decentralized AI services more freely and reliably.
Infernet uses a flexible and simple lightweight framework. Due to its ease of use and efficiency, it quickly attracted more than 8,000 independent nodes to join after its launch. These nodes have diverse hardware capabilities, including GPUs and FPGAs, which can provide powerful computing power for complex tasks such as AI reasoning and zero-knowledge proof generation. However, in order to keep the system simple, Infernet gave up some key features, such as coordinating nodes through consensus or integrating a robust task routing mechanism. These limitations made it difficult for Infernet to meet the needs of a wider range of Web2 and Web3 developers, prompting Ritual to launch a more comprehensive and powerful Ritual Chain.
Ritual Chain is a next-generation Layer 1 blockchain designed specifically for AI applications. It aims to make up for the limitations of Infernet and provide developers with a more robust and efficient development environment. Through Resonance technology, Ritual Chain provides a simple and reliable pricing and task routing mechanism for the Infernet network, greatly optimizing resource allocation efficiency. In addition, Ritual Chain is based on the EVM++ framework, which is a backward-compatible extension of the Ethereum Virtual Machine (EVM) with more powerful features, including precompiled modules, native scheduling, built-in account abstraction (AA), and a series of advanced Ethereum Improvement Proposals (EIPs). These features together build a powerful, flexible and efficient development environment, providing developers with new possibilities.
Ritual Chain workflow diagram
Precompiled Sidecars
Compared with traditional precompilation, the design of Ritual Chain improves the scalability and flexibility of the system, allowing developers to create custom function modules in a containerized manner without modifying the underlying protocol. This architecture not only significantly reduces development costs, but also provides more powerful computing power for decentralized applications.
Specifically, Ritual Chain decouples complex computations from the execution client through a modular architecture and implements it in the form of independent Sidecars. These precompiled modules can efficiently handle complex computing tasks, including AI reasoning, zero-knowledge proof generation, and trusted execution environment (TEE) operations.
Native Scheduling
Native scheduling solves the needs of task timing triggering and conditional execution. Traditional blockchains usually rely on centralized third-party services (such as keeper) to trigger task execution, but this model has centralization risks and high costs. Ritual Chain completely gets rid of its dependence on centralized services through the built-in scheduler. Developers can set the entry point and callback frequency of smart contracts directly on the chain. Block producers will maintain a mapping table of pending calls and give priority to these tasks when generating new blocks. Combined with Resonance's dynamic resource allocation mechanism, Ritual Chain can efficiently and reliably handle computationally intensive tasks, providing stable protection for decentralized AI applications.
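A minimal sketch of the scheduling idea: contracts register recurring entry points on-chain, and the block producer keeps a table of pending calls and executes the ones that are due when building each block. The structures, field names, and block-height loop below are illustrative assumptions, not Ritual Chain's actual implementation.

```python
# Toy sketch of native scheduling without an off-chain keeper. All names and
# structures are hypothetical.
from dataclasses import dataclass

@dataclass
class ScheduledCall:
    contract: str
    entrypoint: str
    frequency: int      # call every `frequency` blocks
    next_block: int

class BlockProducer:
    def __init__(self) -> None:
        self.pending: list[ScheduledCall] = []   # mapping table of pending calls

    def schedule(self, call: ScheduledCall) -> None:
        self.pending.append(call)

    def build_block(self, height: int) -> list[str]:
        executed = []
        for call in self.pending:
            if height >= call.next_block:        # due calls are prioritised in the new block
                executed.append(f"{call.contract}.{call.entrypoint}@{height}")
                call.next_block = height + call.frequency
        return executed

bp = BlockProducer()
bp.schedule(ScheduledCall("price_oracle", "refresh", frequency=10, next_block=10))
for h in range(1, 31):
    txs = bp.build_block(h)
    if txs:
        print(txs)   # fires at heights 10, 20, 30 without any off-chain keeper
```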
Technological innovation
Ritual's core technological innovations ensure its leading position in performance, verification, and scalability, providing strong support for on-chain AI applications.
1. Resonance: Optimizing resource allocation
Resonance is a bilateral market mechanism that optimizes blockchain resource allocation and handles the complexity of heterogeneous transactions. As blockchain transactions evolve from simple transfers to diverse forms such as smart contracts and AI inference, existing fee mechanisms (such as EIP-1559) struggle to efficiently match user needs with node resources. Resonance achieves the best match between user transactions and node capabilities by introducing two core roles, Broker and Auctioneer:
Broker is responsible for analyzing the user's transaction fee willingness and the node's resource cost function to achieve the best match between transactions and nodes and improve the utilization of computing resources. Auctioneer organizes the distribution of transaction fees through a bilateral auction mechanism to ensure fairness and transparency. Nodes choose transaction types based on their own hardware capabilities, while users can submit transaction requirements based on priority conditions (such as speed or cost).
This mechanism significantly improves the network's resource utilization efficiency and user experience, while further enhancing the transparency and openness of the system through a decentralized auction process.
Under the Resonance mechanism: Auctioneer assigns appropriate tasks to nodes based on Broker's analysis
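The Broker/Auctioneer pairing can be illustrated with a greatly simplified matching routine: each transaction states a task type and a maximum fee, each node states what it can serve and at what cost, and the cheapest capable node wins the assignment. The greedy rule below is an assumption; the actual Resonance design uses a bilateral auction.

```python
# Simplified two-sided matching in the spirit of Resonance. The greedy
# "cheapest capable node" rule is an illustrative stand-in for the auction.
from dataclasses import dataclass

@dataclass
class Tx:
    tx_id: str
    task: str            # e.g. "transfer", "ai_inference", "zk_proof"
    max_fee: float       # user's fee willingness

@dataclass
class Node:
    node_id: str
    supported: set[str]  # task types this hardware can serve
    cost: dict[str, float]

def broker_match(txs: list[Tx], nodes: list[Node]) -> dict[str, str]:
    """Assign each transaction to the cheapest node that can serve it within budget."""
    assignment = {}
    for tx in txs:
        candidates = [n for n in nodes if tx.task in n.supported and n.cost[tx.task] <= tx.max_fee]
        if candidates:
            best = min(candidates, key=lambda n: n.cost[tx.task])
            assignment[tx.tx_id] = best.node_id
    return assignment

nodes = [
    Node("cpu-1", {"transfer"}, {"transfer": 0.01}),
    Node("gpu-1", {"transfer", "ai_inference"}, {"transfer": 0.05, "ai_inference": 0.40}),
]
txs = [Tx("t1", "transfer", 0.02), Tx("t2", "ai_inference", 0.50)]
print(broker_match(txs, nodes))   # {'t1': 'cpu-1', 't2': 'gpu-1'}
```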
2. Symphony: Improving Verification Efficiency
Symphony focuses on improving verification efficiency and solving the inefficiency of the traditional blockchain "repeated execution" model in processing and verifying complex computing tasks. Based on the "execute once, verify many times" (EOVMT) model, Symphony greatly reduces the performance loss caused by repeated calculations by separating the calculation and verification processes. The calculation task is executed once by a designated node, and the calculation result is broadcast over the network. The verification node uses non-interactive proofs to confirm the correctness of the result without repeating the calculation.
Symphony supports distributed verification, breaking down complex tasks into multiple subtasks that are processed in parallel by different verification nodes, thereby further improving verification efficiency and ensuring privacy protection and security. Symphony is highly compatible with proof systems such as trusted execution environments (TEEs) and zero-knowledge proofs (ZKPs), providing flexible support for fast transaction confirmation and privacy-sensitive computing tasks. This architecture not only significantly reduces the performance overhead caused by repeated calculations, but also ensures the decentralization and security of the verification process.
Symphony breaks down complex tasks into multiple subtasks, which are processed in parallel by different verification nodes.
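A toy version of "execute once, verify many times": one node does the work and publishes the result with a succinct commitment, and any verifier checks the commitment instead of redoing the computation. A hash commitment stands in here for the TEE attestations or ZK proofs the text describes, so this only illustrates the workflow, not the cryptographic guarantees.

```python
# EOVMT workflow sketch: a hash commitment is only a placeholder for a real
# non-interactive proof (ZK) or TEE attestation.
import hashlib, json

def execute_once(task: dict) -> dict:
    result = sum(task["inputs"])                       # placeholder for a heavy AI computation
    payload = json.dumps({"task": task, "result": result}, sort_keys=True)
    return {"result": result, "proof": hashlib.sha256(payload.encode()).hexdigest()}

def verify(task: dict, claimed: dict) -> bool:
    """Cheap check by any verifier: recompute only the commitment, not the work."""
    payload = json.dumps({"task": task, "result": claimed["result"]}, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest() == claimed["proof"]

task = {"inputs": [1, 2, 3, 4]}
out = execute_once(task)
print(verify(task, out))                                    # True
print(verify(task, {"result": 99, "proof": out["proof"]}))  # False: tampered result rejected
```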
3. vTune: Traceable Model Validation
vTune is a tool provided by Ritual for model verification and source tracking. It has little impact on model performance and has good anti-interference capabilities. It is particularly suitable for protecting the intellectual property rights of open source models and promoting fair distribution. vTune combines watermarking technology and zero-knowledge proof to achieve model source tracking and computational integrity assurance by embedding hidden markers:
Watermarking technology: By embedding tags in weight space watermarks, data watermarks, or function space watermarks, the attribution of the model can be verified even if the model is public. In particular, function space watermarks can verify attribution through model outputs without accessing model weights, thereby achieving stronger privacy protection and robustness.
Zero - knowledge proof: Introducing hidden data during the model fine-tuning process to verify whether the model has been tampered with while protecting the rights and interests of the model creator.
This tool not only provides trusted source verification for the decentralized AI model market, but also significantly improves the security and ecological transparency of the model.
Ecosystem Development
Ritual is currently in the private testnet stage, and there are few opportunities for ordinary users to participate; developers can apply for and participate in the official Altar and Realm incentive programs, join Ritual's AI ecosystem construction, and obtain full-stack technical support and financial support from the official.
Currently, the official has announced a batch of native applications from the Altar project:
Relic: A machine learning-based automated market maker (AMM) that dynamically adjusts liquidity pool parameters through Ritual’s infrastructure to optimize fees and underlying pools;
Anima: Focuses on LLM-based on-chain transaction automation tools, providing users with a smooth and natural Web3 interaction experience;
Tithe: AI-driven lending protocol that supports a wider range of asset types by dynamically optimizing lending pools and credit scores.
In addition, Ritual has also carried out in-depth cooperation with multiple mature projects to promote the development of the decentralized AI ecosystem. For example, the cooperation with Arweave provides decentralized permanent storage support for models, data, and zero-knowledge proofs; through integration with StarkWare and Arbitrum, Ritual introduces native on-chain AI capabilities to these ecosystems; in addition, the re-staking mechanism provided by EigenLayer adds active verification services to Ritual's proof market, further enhancing the decentralization and security of the network.
Challenges and prospects
Ritual's design starts from key links such as distribution, incentives, and verification, solving the core problems faced by decentralized AI. At the same time, it realizes the verifiability of the model through tools such as vTune, breaking through the contradiction between model open source and incentives, and providing technical support for the construction of a decentralized model market.
At present, Ritual is still at an early stage, focused mainly on the model inference stage, with its product matrix expanding from infrastructure to a model marketplace, L2-as-a-service (L2aaS), and an Agent framework. Since the chain is still in private testing, Ritual's ambitious technical designs have yet to be deployed publicly at scale and deserve continued attention. As the technology matures and the ecosystem grows, Ritual is expected to become an important part of decentralized AI infrastructure.
Gensyn: Solving the core problem of decentralized model training
Project Overview
Against the backdrop of the accelerated evolution of artificial intelligence and increasingly scarce computing resources, Gensyn is trying to reshape the underlying paradigm of the entire AI model training.
In the traditional AI model training process, computing power is almost monopolized by a few cloud computing giants, with high training costs and low transparency, which hinders the innovation of small and medium-sized teams and independent researchers. Gensyn's vision is to break this "centralized monopoly" structure. It advocates "sinking" training tasks to countless devices with basic computing capabilities around the world - whether it is a MacBook, a gaming-grade GPU, or an edge device or an idle server, they can all access the network, participate in task execution, and get paid.
Gensyn was founded in 2020 and is focused on building decentralized AI computing infrastructure. As early as 2022, the team first proposed to redefine the training method of AI models at the technical and institutional levels: no longer relying on closed cloud platforms or giant server clusters, but sinking training tasks to heterogeneous computing nodes around the world to build a trustless intelligent computing network.
In 2023, Gensyn further expanded its vision: to build a globally connected, open-source, autonomous, permissionless AI network in which any device with basic computing capability can participate. Its underlying protocol is built on a blockchain architecture that provides composable incentive and verification mechanisms.
Since its establishment, Gensyn has received a total of US$50.6 million in support from 17 institutions including a16z, CoinFund, Canonical, Protocol Labs, Distributed Global, etc. Among them, the Series A financing led by a16z in June 2023 has attracted widespread attention, marking that the field of decentralized AI has begun to enter the vision of mainstream Web3 venture capital.
The core members of the team also have impressive backgrounds: co-founder Ben Fielding studied theoretical computer science at Oxford University and has a solid technical research background; another co-founder Harry Grieve has long been involved in the system design and economic modeling of decentralized protocols, providing solid support for Gensyn's architectural design and incentive mechanism.
Design Architecture
The development of decentralized artificial intelligence systems is currently facing three core technical bottlenecks: execution, verification, and communication. These bottlenecks not only limit the release of large model training capabilities, but also restrict the fair integration and efficient use of global computing resources. Based on systematic research, the Gensyn team proposed three representative innovative mechanisms - RL Swarm, Verde, and SkipPipe, and built solutions for the above problems, respectively, promoting the decentralized AI infrastructure from concept to implementation.
1. Execution Challenge: How to enable fragmented devices to collaborate and efficiently train large models?
Currently, the performance improvement of large language models mainly relies on the "heap scale" strategy: larger parameters, wider data sets, and longer training cycles. However, this also significantly increases the computing cost - the training of super-large models often needs to be split into thousands of GPU nodes, and these nodes also need high-frequency data communication and gradient synchronization. In a decentralized scenario, the nodes are widely distributed, the hardware is heterogeneous, and the state volatility is high, so traditional centralized scheduling strategies are difficult to work.
To meet this challenge, Gensyn proposed RL Swarm, a peer-to-peer reinforcement learning post-training system. The core idea is to transform the training process into a distributed collaborative game. The mechanism is divided into three stages: "sharing-criticism-decision-making": first, the node independently completes the problem reasoning and publicly shares the results; then, each node evaluates the answers of its peers and provides feedback from the perspectives of logic and strategic rationality; finally, the node corrects its own output based on group opinions to generate more robust answers. This mechanism effectively combines individual computing with group collaboration, and is particularly suitable for tasks such as mathematics and logical reasoning that require high precision and verifiability. Experiments show that RL Swarm not only improves efficiency, but also significantly lowers the threshold for participation, and has good scalability and fault tolerance.
RL Swarm’s “Share-Criticize-Decision” three-stage reinforcement learning training system
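The three-stage loop can be illustrated with a toy numerical version: nodes share candidate answers, critique peers by closeness to the group median, and revise toward the best-rated answer. The scoring and revision rules are illustrative assumptions, not RL Swarm's actual reinforcement-learning objective.

```python
# Toy "share -> criticise -> decide" rounds with numbers standing in for
# LLM rollouts. Scoring and revision rules are assumptions.
def swarm_round(nodes: dict[str, float]) -> dict[str, float]:
    """`nodes` maps node id -> its candidate answer (here just a number)."""
    # Stage 1 (share): every node publishes its own answer.
    shared = dict(nodes)
    # Stage 2 (criticise): closeness to the group median stands in for a
    # logic/strategy critique of each peer's answer.
    values = sorted(shared.values())
    median = values[len(values) // 2]
    critiques = {nid: -abs(ans - median) for nid, ans in shared.items()}
    # Stage 3 (decide): each node revises toward the best-rated answer.
    best = max(critiques, key=critiques.get)
    return {nid: (ans + shared[best]) / 2 for nid, ans in shared.items()}

answers = {"node-a": 10.0, "node-b": 12.0, "node-c": 40.0}   # node-c is an outlier
for _ in range(3):
    answers = swarm_round(answers)
print(answers)   # answers drift toward the group consensus over the rounds
```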
2. Verification Challenge: How to verify whether the calculation results of untrusted suppliers are correct?
In a decentralized training network, "anyone can provide computing power" is both an advantage and a risk. The question is: how to verify whether these calculations are real and valid without trust?
Traditional methods such as recalculation or whitelist review have obvious limitations - the former is extremely costly and not scalable; the latter excludes "long tail" nodes and damages the openness of the network. For this reason, Gensyn designed Verde, a lightweight arbitration protocol built specifically for neural network training and verification scenarios.
The key idea of Verde is "minimum trusted arbitration": when the verifier suspects that the supplier's training results are wrong, the arbitration contract only needs to recalculate the first controversial operation node in the calculation graph without having to repeat the entire training process. This greatly reduces the verification burden while ensuring the correctness of the results when at least one party is honest. In order to solve the problem of floating-point non-determinism between different hardware, Verde has also developed a library of Reproducible Operators to enforce a unified execution order for common mathematical operations such as matrix multiplication, thereby achieving bit-level consistent output across devices. This technology significantly improves the security and engineering feasibility of distributed training, and is an important breakthrough in the current trustless verification system.
The entire mechanism is based on the trainer recording key intermediate states (i.e. checkpoints), and multiple verifiers are randomly assigned to reproduce these training steps to determine the consistency of the output. Once a verifier's recalculation results differ from those of the trainer, the system will not rerun the entire model roughly, but will use the network arbitration mechanism to accurately locate the operation where the two first disagreed in the computational graph, and only replay and compare the operation, thereby achieving dispute resolution with extremely low overhead. In this way, Verde ensures the integrity of the training process without the need to trust the training nodes, while taking into account efficiency and scalability. It is a verification framework tailored for distributed AI training environments.
Verde's workflow
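Verde's "recompute only the first disputed operation" idea can be sketched with toy operations on integers: both sides submit per-step checkpoints, the arbiter walks the trace to the first disagreement, re-runs that single operation, and identifies who was wrong. Real operations would be reproducible tensor kernels; everything below is illustrative.

```python
# Toy arbitration over a recorded computation trace. Real Verde ops would be
# reproducible tensor kernels, not integer functions.
OPS = {"double": lambda x: 2 * x, "add3": lambda x: x + 3, "square": lambda x: x * x}

def arbitrate(op_trace: list[str], start: int,
              trainer_states: list[int], verifier_states: list[int]) -> str:
    """Both sides submitted per-step checkpoints; re-run only the first disputed op."""
    state = start
    for i, op in enumerate(op_trace):
        if trainer_states[i] != verifier_states[i]:       # first disagreement located
            correct = OPS[op](state)                      # arbiter recomputes ONE op only
            cheater = "trainer" if trainer_states[i] != correct else "verifier"
            return f"dispute at step {i} ({op}): {cheater} was wrong"
        state = trainer_states[i]                         # agreed-upon checkpoint
    return "no dispute: all checkpoints match"

trace = ["double", "add3", "square"]
honest = [8, 11, 121]          # starting from 4: 8 -> 11 -> 121
faulty = [8, 12, 144]          # verifier diverges at step 1
print(arbitrate(trace, start=4, trainer_states=honest, verifier_states=faulty))
```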
3. Communication Challenge: How to reduce the network bottleneck caused by high-frequency synchronization between nodes?
In traditional distributed training, the model is either fully replicated or split by layer (pipeline parallelism), both of which require high-frequency synchronization between nodes. In particular, in pipeline parallelism, a micro-batch must pass through each layer of the model in strict order, resulting in the entire training process being blocked as long as a node is delayed.
To address this problem, Gensyn proposed SkipPipe: a highly fault-tolerant pipeline training system that supports skip execution and dynamic path scheduling. SkipPipe introduces a "skip ratio" mechanism that allows some micro-batches to skip some model layers when the load on a specific node is too high, and uses a scheduling algorithm to dynamically select the current optimal computing path. Experiments show that in a network environment with wide geographical distribution, large hardware differences, and limited bandwidth, SkipPipe training time can be reduced by up to 55%, and can still maintain only 7% loss when the node failure rate is as high as 50%, showing extremely strong resilience and adaptability.
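The skip-ratio idea can be sketched as a simple routing rule: a micro-batch hops over a congested or failed stage as long as it stays within the allowed skip budget, instead of stalling the whole pipeline. The load values, threshold, and budget below are illustrative assumptions, not SkipPipe's actual scheduling algorithm.

```python
# Toy skip-ratio routing for one micro-batch across pipeline stages.
# Thresholds and the skip budget are assumptions.
def route_microbatch(stage_load: list[float], skip_ratio: float = 0.25,
                     overload: float = 0.9) -> list[int]:
    """Return the stages this micro-batch will actually traverse."""
    max_skips = int(len(stage_load) * skip_ratio)
    path, skipped = [], 0
    for stage, load in enumerate(stage_load):
        if load >= overload and skipped < max_skips:
            skipped += 1                 # hop over a congested/failed stage
        else:
            path.append(stage)           # execute this model partition
    return path

loads = [0.4, 0.95, 0.3, 0.2, 0.97, 0.5, 0.4, 0.3]   # stages 1 and 4 are overloaded
print(route_microbatch(loads))           # [0, 2, 3, 5, 6, 7]
```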
How to participate
Gensyn's public testnet was launched on March 31, 2025, and is still in the early stages of its technical roadmap (Phase 0), with its functional focus on the deployment and verification of RL Swarm. RL Swarm is Gensyn's first application scenario, designed around the collaborative training of reinforcement learning models. Each participating node binds its behavior to its on-chain identity, and the contribution process is fully recorded, which provides a verification basis for subsequent incentive allocation and trusted computing models.
Gensyn's node ranking
The hardware threshold in the early testing phase is relatively friendly: Mac users can run a node on M-series chips, while Windows users are advised to have a high-performance GPU such as an RTX 3090 or 4090 and more than 16 GB of memory to deploy a local Swarm node. Once the node is running, users log in with an email through the web page (Gmail is recommended) to complete verification, and can optionally bind a HuggingFace Access Token to unlock fuller model capabilities.
Challenges and prospects
The biggest uncertainty around Gensyn at present is that its testnet does not yet cover the promised full technology stack. Key modules such as Verde and SkipPipe are still being integrated, which keeps outside observers cautious about its ability to deliver the full architecture. The official explanation is that the testnet will be rolled out in stages, each unlocking new protocol capabilities, with priority given to verifying the stability and scalability of the infrastructure. The first stage starts with RL Swarm, then gradually expands to core scenarios such as pre-training and inference, and finally transitions to a mainnet that supports real economic transactions.
Although the testnet was launched at a relatively conservative pace, it is worth noting that only one month later, Gensyn launched a new Swarm test task that supports larger-scale models and complex mathematical tasks. This move to a certain extent responded to the outside world's doubts about its development pace and also demonstrated the team's execution efficiency in promoting local modules.
However, problems also arise: the new version of the task has set a very high threshold for hardware. The recommended configuration includes top GPUs such as A100 and H100 (80GB video memory), which is almost unattainable for small and medium-sized nodes, and also creates a certain tension with the original intention of "open access and decentralized training" emphasized by Gensyn. If the trend of centralized computing power is not effectively guided, it may affect the fairness of the network and the sustainability of decentralized governance.
Next, if Verde and SkipPipe can be smoothly integrated, it will help improve the integrity and coordination efficiency of the protocol. However, whether Gensyn can find a true balance between performance and decentralization still needs to be tested in the test network for a longer period of time and on a wider scale. At present, it has initially shown its potential and also exposed challenges, which is the most real state of an early infrastructure project.
Bittensor: Innovation and development of decentralized AI network
Project Overview
Bittensor is a groundbreaking project that combines blockchain and artificial intelligence. It was founded in 2019 by Jacob Steeves and Ala Shaabana to build a "market economy of machine intelligence." Both founders have a deep background in artificial intelligence and distributed systems. Yuma Rao, the author of the project's white paper, is considered to be the team's core technical advisor, and has injected professional perspectives in cryptography and consensus algorithms into the project.
The project aims to integrate global computing resources through blockchain protocols and build a distributed neural network ecosystem that continuously optimizes itself. This vision transforms digital assets such as computing, data, storage, and models into intelligent value streams, builds a new economic form, and ensures fair distribution of AI development dividends. Unlike centralized platforms such as OpenAI, Bittensor has established three core value pillars:
Breaking down data silos: Using the TAO token incentive system to promote knowledge sharing and model contribution
Market-driven quality evaluation: Introducing game theory mechanisms to screen high-quality AI models and achieve survival of the fittest
Network effect amplifier: network value scales super-linearly with participant growth, forming a virtuous cycle
On the investment side, Polychain Capital has incubated Bittensor since 2019 and currently holds about $200 million worth of TAO; Dao5, an early supporter of the ecosystem, holds about $50 million worth of TAO. In 2024, Pantera Capital and Collab Currency increased their positions through strategic investments, and in August of that year Grayscale included TAO in its decentralized AI fund, signaling strong institutional recognition of, and long-term confidence in, the project's value.
Design architecture and operation mechanism
Network Architecture
Bittensor builds a sophisticated network architecture consisting of four collaborative layers:
Blockchain layer: Built on the Substrate framework, it serves as the trust foundation of the network and is responsible for recording state changes and token issuance. The system generates new blocks every 12 seconds and issues TAO tokens according to the rules to ensure network consensus and incentive distribution.
Neuron layer: As the computing nodes of the network, neurons run various AI models to provide intelligent services. Each node clearly declares its service type and interface specifications through a carefully designed configuration file to achieve functional modularity and plug-and-play.
Synapse: The network's communication bridge, dynamically optimizing connection weights between nodes to form a neural-network-like structure and ensure efficient information transmission. Synapses also carry a built-in economic model: interactions between neurons and service calls are paid for in TAO, closing the loop of value circulation.
Metagraph: As the system's global knowledge graph, it continuously monitors and evaluates the contribution of every node and provides intelligent guidance for the whole network. The metagraph determines synaptic weights through precise calculation, which in turn shapes resource allocation, reward distribution, and each node's influence in the network (a minimal structural sketch follows the figure below).
Bittensor's network framework
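To make the four layers concrete, here is a minimal structural sketch in Python. The class names, fields and the toy influence() score are hypothetical illustrations of the relationships described above, not objects from the Bittensor SDK.

```python
# A minimal, illustrative sketch of the neuron / synapse / metagraph structure.
from __future__ import annotations
from dataclasses import dataclass, field

@dataclass
class Neuron:
    uid: int
    service_type: str          # e.g. "text-generation", declared in its configuration
    stake_tao: float           # TAO staked to or delegated to this node

@dataclass
class Synapse:
    src: int                   # calling neuron uid
    dst: int                   # serving neuron uid
    weight: float              # connection weight, tuned by usage and scoring
    fee_tao: float             # TAO paid per service call (the value loop)

@dataclass
class Metagraph:
    neurons: dict[int, Neuron] = field(default_factory=dict)
    synapses: list[Synapse] = field(default_factory=list)

    def influence(self, uid: int) -> float:
        """Toy contribution score: stake times the sum of inbound synapse weights."""
        inbound = sum(s.weight for s in self.synapses if s.dst == uid)
        return inbound * self.neurons[uid].stake_tao

# Toy usage: two neurons, one synapse, contribution score from the metagraph.
mg = Metagraph()
mg.neurons[1] = Neuron(uid=1, service_type="text-generation", stake_tao=100.0)
mg.neurons[2] = Neuron(uid=2, service_type="image-recognition", stake_tao=50.0)
mg.synapses.append(Synapse(src=1, dst=2, weight=0.8, fee_tao=0.01))
print(mg.influence(2))   # -> 40.0
```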
Yuma consensus mechanism
The network uses a unique Yuma consensus algorithm to complete a round of reward distribution every 72 minutes. The verification process combines subjective evaluation with objective measurement:
Manual scoring: Validators make subjective evaluations of the quality of miners’ outputs
Fisher Information Matrix: objectively quantifies the contribution of nodes to the overall network
This "subjective + objective" hybrid mechanism effectively balances professional judgment and algorithmic fairness.
Subnet architecture and dTAO upgrades
Each subnet focuses on a specific AI service area, such as text generation, image recognition, etc. It runs independently but remains connected to the main blockchain subtensor, forming a highly flexible modular expansion architecture. In February 2025, Bittensor completed the milestone dTAO (Dynamic TAO) upgrade, which transforms each subnet into an independent economic unit and intelligently regulates resource allocation through market demand signals. Its core innovation is the subnet token (Alpha token) mechanism:
· How it works: Participants stake TAO to obtain the Alpha tokens issued by each subnet; these tokens represent market recognition of, and resource support for, that subnet's services.
· Allocation logic: The market price of an Alpha token is the key indicator of demand for its subnet. Initially every subnet's Alpha token has the same price, with just 1 TAO and 1 Alpha token in each liquidity pool. As trading activity injects liquidity, Alpha prices adjust dynamically, and TAO emission is allocated across subnets in proportion to their Alpha token prices; subnets in higher demand receive more resources, achieving genuinely demand-driven resource allocation (see the sketch after the figure below).
Bittensor Subnet Token Emission Distribution
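The sketch below illustrates the idea: each subnet starts with a tiny pool of 1 TAO and 1 Alpha, staking TAO into a subnet moves its Alpha price, and emission is split in proportion to those prices. The constant-product swap and parameter values are simplifying assumptions for illustration, not Bittensor's exact implementation.

```python
# A toy model of the dTAO allocation idea described above.

class SubnetPool:
    def __init__(self):
        self.tao = 1.0      # initial state described in the text
        self.alpha = 1.0

    def price(self) -> float:
        return self.tao / self.alpha          # TAO per Alpha token

    def stake_tao(self, amount: float) -> float:
        """Swap TAO into the pool for Alpha (constant-product style, an assumption)."""
        k = self.tao * self.alpha
        self.tao += amount
        alpha_out = self.alpha - k / self.tao
        self.alpha -= alpha_out
        return alpha_out

def allocate_emission(pools: dict, emission: float) -> dict:
    """Split new TAO emission across subnets in proportion to Alpha prices."""
    total_price = sum(p.price() for p in pools.values())
    return {name: emission * p.price() / total_price for name, p in pools.items()}

pools = {"text": SubnetPool(), "image": SubnetPool()}
pools["text"].stake_tao(5.0)                  # demand flows into the "text" subnet
print(allocate_emission(pools, emission=7200.0))
```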
The dTAO upgrade has significantly improved the vitality of the ecosystem and the efficiency of resource utilization. The total market value of the subnet token market has reached US$500 million, showing strong growth momentum.
Bittensor subnet alpha token value
Ecological progress and application cases
Mainnet Development History
The Bittensor network has gone through three key development stages:
January 2021: The mainnet is officially launched, laying the foundation for the infrastructure
October 2023: "Revolution" upgrade introduces subnet architecture to achieve functional modularization
February 2025: Complete dTAO upgrade and establish a market-driven resource allocation mechanism
The subnet ecosystem is growing explosively: as of June 2025, there are 119 specialized subnets, and it is expected that the number may exceed 200 within the year.
Number of Bittensor subnets
Ecosystem projects are diverse, spanning cutting-edge areas such as AI agents (e.g. Tatsu), prediction markets (e.g. Bettensor), and DeFi protocols (e.g. TaoFi), forming an innovation ecosystem where AI and finance are deeply integrated.
Representative subnet ecosystem projects
· TAOCAT: A native AI agent in the Bittensor ecosystem, built directly on subnets to give users data-driven decision-making tools. It draws on the large language models of Subnet 19, real-time data from Subnet 42, and the Agent Arena of Subnet 59 to deliver market insights and decision support. It received investment from DWF Labs, was included in its $20 million AI agent fund, and has been listed on Binance Alpha.
· OpenKaito: A subnet launched by the Kaito team on Bittensor, aiming to build a decentralized search engine for the crypto industry. It has indexed 500 million web resources, demonstrating decentralized AI's ability to process massive amounts of data. Compared with traditional search engines, its core advantage is reduced interference from commercial interests, offering more transparent and neutral data processing and a new paradigm for information access in the Web3 era.
· Tensorplex Dojo: Subnet 52 developed by Tensorplex Labs, focusing on crowdsourcing high-quality human-generated datasets through a decentralized platform, encouraging users to earn TAO tokens through data annotation. In March 2025, YZi Labs (formerly Binance Labs) announced an investment in Tensorplex Labs to support the development of Dojo and Backprop Finance.
· CreatorBid: Running on Subnet 6, a creation platform that combines AI and blockchain, integrating with Olas and GPU networks such as io.net to support content creators and AI model development.
Technology and industry cooperation
Bittensor has made breakthrough progress in cross-domain collaboration:
Established a deep model integration channel with Hugging Face to achieve seamless on-chain deployment of 50 mainstream AI models
In 2024, it jointly released the BTLM-3B model with high-performance AI chip maker Cerebras, with cumulative downloads exceeding 160,000
In March 2025, it entered a strategic partnership with DeFi giant Aave to jointly explore using rsTAO as high-quality lending collateral
How to participate
Bittensor has designed a diversified ecological participation path to form a complete value creation and distribution system:
· Mining: Deploy miner nodes to produce high-quality digital goods (such as AI model services) and earn TAO rewards based on the quality of contribution
· Verification: Run validator nodes to evaluate miners' work, maintain network quality standards, and earn corresponding TAO incentives
· Staking: Hold and stake TAO to support high-quality validator nodes and obtain passive income based on the validator's performance
· Development: Use the Bittensor SDK and CLI tools to build innovative applications, utilities, or new subnets, and actively participate in ecosystem construction
· Use services: Access AI services provided by the network, such as text generation or image recognition, through friendly client applications
· Trading: Participate in the market trading of subnet asset tokens to capture potential value growth opportunities
Distribution of subnet alpha tokens to participants
Challenges and prospects
Although Bittensor has shown great potential, as a frontier technology exploration it still faces challenges on several fronts. Technically, the security threats facing distributed AI networks (such as model theft and adversarial attacks) are more complex than those of centralized systems, and privacy-preserving computation and security protections need continuous refinement. Economically, there is inflationary pressure in the early stage and the subnet token market is highly volatile, so speculative bubbles are a risk to watch. On regulation, although the SEC has classified TAO as a utility token, differences between regional regulatory frameworks may still constrain ecosystem expansion. And against fierce competition from well-resourced centralized AI platforms, decentralized solutions still have to prove their long-term advantages in user experience and cost-effectiveness.
As the 2025 halving cycle approaches, Bittensor will focus on four strategic directions: deepening the specialized division of labor among subnets to improve the service quality and performance of vertical applications; accelerating integration with the DeFi ecosystem and expanding smart contract use cases via the newly introduced EVM compatibility; gradually shifting network governance weight from TAO to Alpha tokens over the next 100 days through the dTAO mechanism to advance decentralized governance; and actively expanding interoperability with other mainstream public chains to broaden the ecosystem's boundaries and application scenarios. Together, these initiatives will push Bittensor steadily toward its grand vision of a "market economy of machine intelligence."
0G: A modular AI ecosystem based on storage
Project Overview
0G is a modular Layer 1 public chain designed for AI applications, aiming to provide efficient and reliable decentralized infrastructure for data-intensive and high-computing demand scenarios. Through modular architecture, 0G achieves independent optimization of core functions such as consensus, storage, computing, and data availability, supports dynamic expansion, and can efficiently handle large-scale AI reasoning and training tasks.
The founding team consists of Michael Heinrich (CEO, founder of Garten, which raised over $100 million), Ming Wu (CTO, former Microsoft researcher and co-founder of Conflux), Fan Long (co-founder of Conflux) and Thomas Yao (CBO, Web3 investor). The team includes eight computer science PhDs, with members drawn from Microsoft, Apple and other companies, and deep experience in blockchain and AI.
In terms of financing, 0G Labs completed a $35 million Pre-seed round and a $40 million Seed round, totaling $75 million, with investors including Hack VC, Delphi Ventures and Animoca Brands. In addition, the 0G Foundation has secured a $250 million token purchase commitment, $30.6 million from public node sales, and an $88.88 million ecosystem fund.
Design Architecture
1. 0G Chain
0G Chain aims to build the fastest modular AI public chain. Its modular architecture supports independent optimization of key components such as consensus, execution, and storage, and integrates data availability networks, distributed storage networks, and AI computing networks. This design provides the system with excellent performance and flexibility in dealing with complex AI application scenarios. The following are the three core features of 0G Chain:
Modular Scalability for AI
0G adopts a horizontally scalable architecture that can efficiently handle large-scale data workflows. Its modular design separates the data availability layer (DA layer) from the data storage layer, delivering higher performance and efficiency in data access and storage for AI tasks such as large-scale training and reasoning.
0G Consensus
0G's consensus mechanism consists of multiple independent consensus networks that can be dynamically expanded based on demand. As the amount of data grows exponentially, the system throughput can also be improved synchronously, supporting expansion from 1 to hundreds or even thousands of networks. This distributed architecture not only improves performance, but also ensures the flexibility and reliability of the system.
Shared Staking
Validators stake funds on the Ethereum mainnet to provide security for all the 0G consensus networks they participate in. If a slashable event occurs on any 0G network, the validator's stake on the Ethereum mainnet is cut. This mechanism extends Ethereum mainnet security to every 0G consensus network, ensuring the security and robustness of the whole system.
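A minimal sketch of this shared-staking idea follows, under the simplifying assumptions of a single flat slash fraction and hypothetical class and method names:

```python
# Toy model: one Ethereum-mainnet stake secures many 0G consensus networks,
# and a slashable fault on any of them reduces that single stake.

class SharedStaking:
    def __init__(self):
        self.stakes = {}    # validator -> mainnet stake securing all networks

    def deposit(self, validator: str, amount: float):
        self.stakes[validator] = self.stakes.get(validator, 0.0) + amount

    def report_fault(self, validator: str, network_id: int, slash_fraction: float = 0.05):
        """A punishable event on any 0G network cuts the validator's mainnet stake."""
        before = self.stakes[validator]
        self.stakes[validator] = before * (1 - slash_fraction)
        print(f"{validator} slashed on network {network_id}: "
              f"{before:.2f} -> {self.stakes[validator]:.2f}")

pool = SharedStaking()
pool.deposit("validator-1", 32.0)
pool.report_fault("validator-1", network_id=3)
```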
0G Chain has EVM compatibility, ensuring that developers of Ethereum, Layer 2 Rollup or other chains can easily integrate 0G's services (such as data availability and storage) without migration. At the same time, 0G is also exploring support for Solana VM, Near VM and Bitcoin compatibility so that AI applications can be expanded to a wider user group.
2. 0G Storage
0G Storage is a highly optimized distributed storage system designed for decentralized applications and data-intensive scenarios. At its core, it uses a unique consensus mechanism, Proof of Random Access (PoRA), to incentivize miners to store and manage data, thereby achieving a balance between security, performance, and fairness.
Its architecture can be divided into three layers:
Log Layer: Enables permanent storage of unstructured data, suitable for archiving or data logging purposes.
Key-Value Layer: Manages mutable structured data and supports permission control, suitable for dynamic application scenarios.
Transaction Layer: Supports concurrent writing by multiple users, improving collaboration and data processing efficiency.
Proof of Random Access (PoRA) is the key mechanism of 0G Storage for verifying that miners have correctly stored the data blocks assigned to them. Miners periodically accept challenges and must respond with valid cryptographic hashes as proof, in a manner similar to proof of work. To keep competition fair, 0G caps the data range of each mining operation at 8 TB, preventing large operators from monopolizing resources so that small miners can compete on a level playing field.
Random access proof diagram
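To illustrate the challenge-response flow, here is a toy PoRA-style loop. The chunk size, hash construction and verification step are assumptions made for this sketch; only the 8 TB cap comes from the text.

```python
# Toy proof-of-random-access: the network issues a random challenge over the miner's
# committed data range, and the miner answers with a hash over the challenged chunk.
import hashlib, os, random

CHUNK = 256 * 1024                       # assumed chunk size for the sketch
MAX_RANGE = 8 * 2**40                    # 8 TB cap on the data range per operation

def issue_challenge(seed: bytes, committed_bytes: int) -> int:
    committed_bytes = min(committed_bytes, MAX_RANGE)
    rng = random.Random(seed)
    return rng.randrange(committed_bytes // CHUNK)   # index of the challenged chunk

def respond(storage: dict, chunk_index: int, nonce: bytes) -> bytes:
    data = storage[chunk_index]          # miner must actually hold the chunk to answer
    return hashlib.sha256(data + nonce).digest()

def verify(expected_chunk: bytes, nonce: bytes, proof: bytes) -> bool:
    return hashlib.sha256(expected_chunk + nonce).digest() == proof

storage = {i: os.urandom(CHUNK) for i in range(4)}   # a tiny mock of stored sectors
idx = issue_challenge(seed=b"epoch-42", committed_bytes=4 * CHUNK)
nonce = os.urandom(8)
proof = respond(storage, idx, nonce)
print(verify(storage[idx], nonce, proof))            # -> True
```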
Using erasure coding, 0G Storage splits data into multiple redundant fragments and distributes them across different storage nodes. This ensures that the data can be fully recovered even if some nodes go offline or fail, significantly improving availability and security while keeping performance high on large-scale data. In addition, storage is managed at a fine granularity, down to the sector and data-block level, which optimizes data access efficiency and strengthens miners' competitiveness within the storage network.
Submitted data is organized sequentially into what is called a data flow, which can be understood as a list of log entries or a sequence of fixed-size data sectors. In 0G, any piece of data can be located by a universal offset, enabling efficient retrieval and challenge queries. By default, 0G provides a general-purpose main flow for most application scenarios; it also supports specialized flows that accept only specific categories of log entries and provide independent, contiguous address spaces, optimized for different application needs.
Through the above design, 0G Storage can flexibly adapt to a variety of usage scenarios while maintaining efficient performance and management capabilities, providing strong storage support for AI x Web3 applications that need to process large-scale data streams.
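The toy example below captures the two ideas just described: an erasure-style scheme that survives the loss of a fragment, and a sequential "main flow" where any entry is addressed by a byte offset. Real 0G Storage uses proper erasure coding rather than the single XOR parity shown here; this is only a conceptual stand-in.

```python
# Conceptual sketch: k data fragments plus one XOR parity fragment, plus
# offset-based lookup on a sequential data flow. Sizes are arbitrary assumptions.
from functools import reduce

def encode(data: bytes, k: int = 4) -> list:
    size = -(-len(data) // k)                     # ceil division
    frags = [data[i*size:(i+1)*size].ljust(size, b"\0") for i in range(k)]
    parity = reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), frags)
    return frags + [parity]                       # lose any one fragment, still recover

def recover(frags: list, missing: int) -> bytes:
    others = [f for i, f in enumerate(frags) if i != missing and f is not None]
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), others)

# Sequential "main flow": any entry is addressable by a byte offset.
flow = b"".join([b"entry-0!", b"entry-1!", b"entry-2!"])
def read_at(offset: int, length: int) -> bytes:
    return flow[offset:offset + length]

frags = encode(b"model-checkpoint-shard")
frags[2] = None                                   # simulate a storage node going offline
print(recover(frags, missing=2))                  # the lost fragment is rebuilt from the rest
print(read_at(8, 8))                              # -> b"entry-1!"
```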
3. 0G Data Availability (0G DA)
Data Availability (DA) is one of 0G's core components, aiming to provide data that is accessible, verifiable and retrievable. This capability is key to decentralized AI infrastructure, for example when verifying the results of training or reasoning tasks to meet user needs and keep the system's incentive mechanism reliable. 0G DA achieves excellent scalability and security through a carefully designed architecture and verification mechanism.
The design goal of 0G DA is to provide extremely high scalability while ensuring security. Its workflow is mainly divided into two parts:
Data Storage Lane: Data is divided into multiple small fragments ("data blocks") through erasure coding technology and distributed to storage nodes in the 0G Storage network. This mechanism effectively supports large-scale data transmission while ensuring data redundancy and recoverability.
Data Publishing Lane: The availability of data is verified by DA nodes through aggregate signatures, and the results are submitted to the consensus network. With this design, data publishing only needs to process a small number of key data streams, avoiding the bottleneck problem in traditional broadcasting methods, thereby significantly improving efficiency.
In order to ensure the security and efficiency of data, 0G DA uses a randomness-based verification method combined with an aggregate signature mechanism to form a complete verification process:
Randomly construct a quorum: Using a Verifiable Random Function (VRF), the consensus system randomly selects a group of DA nodes from the validator set to form a quorum. This random selection ensures, in theory, that the quorum's honesty distribution matches that of the entire validator set, so data availability clients cannot collude with the quorum.
Aggregate signature verification: The quorum group verifies the stored data blocks