Trusta.AI launches the SIGMA framework to build a trusted identification infrastructure for AI Agents.
Trusta.AI: Bridging the Trust Gap in the Human-Machine Era
Introduction
With the rapid maturation of AI infrastructure and the development of multi-agent collaboration frameworks, AI-driven on-chain agents are quickly becoming the main force in Web3 interactions. It is expected that within the next 2-3 years, these AI agents with autonomous decision-making capabilities may replace 80% of on-chain human behaviors, becoming true on-chain "users."
The emergence of AI Agents marks a shift in the Web3 ecosystem from a "human-centric" model to a new paradigm of "human-machine symbiosis." However, the rapid rise of AI Agents has also brought unprecedented challenges: how to identify and authenticate the identities of these agents? How to assess the credibility of their actions? How to ensure that these agents are not abused, manipulated, or used for attacks?
Therefore, establishing an on-chain infrastructure that can verify the identity and reputation of AI Agents has become a core proposition for the next phase of evolution in Web3. The design of identity recognition, reputation mechanisms, and trust frameworks will determine whether AI Agents can truly achieve seamless collaboration with humans and platforms, and play a sustainable role in the future ecosystem.
Project Analysis
Project Introduction
Trusta.AI is dedicated to building Web3 identity and reputation infrastructure through AI.
Trusta.AI has launched the first Web3 user value assessment system - MEDIA reputation score, building the largest real-person certification and on-chain reputation protocol in Web3. It provides on-chain data analysis and real-person certification services for top public chains and leading protocols such as Linea, Starknet, Celestia, Arbitrum, and Manta. Over 2.5 million on-chain certifications have been completed on mainstream chains such as Linea, BSC, and TON, making it the largest identity protocol in the industry.
Trusta is expanding from Proof of Humanity to Proof of AI Agent, establishing a threefold mechanism of identity creation, identity quantification, and identity protection to achieve on-chain financial services and on-chain social interactions for AI Agents, building a reliable trust foundation for the era of artificial intelligence.
Trust Infrastructure - AI Agent DID
In the future Web3 ecosystem, AI Agents will play a crucial role: they can not only interact and transact on-chain but also perform complex operations off-chain. However, distinguishing genuinely autonomous AI Agents from human-operated accounts lies at the core of decentralized trust. Without a reliable identity authentication mechanism, these agents are vulnerable to manipulation, fraud, or abuse. This is why the many applications of AI Agents in social, financial, and governance contexts must be built on a solid foundation of identity authentication.
As a pioneer in the field, Trusta.AI, with its leading technological strength and rigorous credit system, has taken the lead in establishing a comprehensive AI Agent DID certification mechanism, providing a solid guarantee for the trustworthy operation of intelligent agents, effectively preventing potential risks and promoting the steady development of the Web3 smart economy.
Financing Status
January 2023: Completed a $3 million seed round financing, led by SevenX Ventures and Vision Plus Capital, with other participants including HashKey Capital, Redpoint Ventures, GGV Capital, SNZ Holding, and others.
June 2025: Completed a new round of financing, with investors including ConsenSys, Starknet, GSR, UFLY Labs, and others.
Team Situation
Peet Chen: Co-founder and CEO, former Vice President of Ant Digital Technology Group, Chief Product Officer of Ant Security Technology, and former General Manager of ZOLOZ Global Digital Identity Platform.
Simon: Co-founder and CTO, former head of AI Security Lab at Ant Group, with fifteen years of experience applying artificial intelligence technology to security and risk management.
The team has a solid technical foundation and practical experience in artificial intelligence and security risk control, payment system architecture, and authentication mechanisms. They have been committed to the deep application of big data and intelligent algorithms in security risk control for a long time, as well as security optimization in the design of underlying protocols and high-concurrency trading environments, possessing solid engineering capabilities and the ability to implement innovative solutions.
Technical Architecture
Identity Establishment - DID + TEE
Through a dedicated plugin, each AI Agent obtains a unique decentralized identifier (DID) on the chain, and securely stores it in a Trusted Execution Environment (TEE). In this black box environment, key data and computing processes are completely hidden, sensitive operations remain private at all times, and external parties cannot peek into the internal workings, effectively building a solid barrier for the information security of AI Agents.
For agents that were generated before the plugin integration, we rely on the comprehensive scoring mechanism on the blockchain for identity recognition; whereas for agents that are newly integrated with the plugin, they can directly obtain the "identity certification" issued by the DID, thereby establishing an AI Agent identity system that is self-controllable, authentic, and immutable.
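The plugin flow described above can be sketched roughly as follows. This is an illustrative mock, not Trusta's actual plugin or TEE API: the `did:agent:` method, the function name, and the record fields are all invented, and a real TEE would keep the private key inside the enclave rather than in application memory.

```python
# Hypothetical sketch of DID issuance for an AI Agent.
# All names and formats here are illustrative assumptions.
import hashlib
import secrets

def issue_agent_did(agent_name: str) -> dict:
    """Simulate creating an agent keypair and deriving a deterministic DID."""
    # In a real TEE, the private key never leaves the enclave; this only simulates it.
    private_key = secrets.token_bytes(32)
    public_part = hashlib.sha256(private_key).hexdigest()
    did = f"did:agent:{public_part[:32]}"
    return {"agent": agent_name, "did": did, "key_held_in_tee": True}

record = issue_agent_did("trading-bot-01")
print(record["did"])
```

The point of the sketch is the one-way derivation: the identifier is publicly checkable, while the secret material it was derived from stays hidden, mirroring the "black box" property the TEE provides.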
Identity Quantification - the Industry-First SIGMA Framework
The Trusta team always adheres to the principles of rigorous evaluation and quantitative analysis, committed to creating a professional and trustworthy identity verification system.
The Trusta team initially built and verified the effectiveness of the MEDIA Score model in the "proof of humanity" scenario. This model comprehensively quantifies on-chain user profiles across five dimensions: interaction amount (Monetary), engagement (Engagement), diversity (Diversity), identity (Identity), and account age (Age).
MEDIA Score is a fair, objective, and quantifiable on-chain user value assessment system. With its comprehensive evaluation dimensions and rigorous methodology, it has been widely adopted by leading public chains such as Celestia, Starknet, Arbitrum, Manta, and Linea as an important reference standard for airdrop eligibility screening. It not only focuses on interaction amounts but also covers multi-dimensional indicators such as activity level, contract diversity, identity characteristics, and account age, helping project teams identify high-value users accurately and improve the efficiency and fairness of incentive distribution, fully reflecting its authority and wide recognition in the industry.
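As a rough illustration of how a multi-dimensional score like MEDIA can be combined into a single number, here is a minimal sketch. The weights, the 0-to-1 normalization, and the 0-to-100 scale are all invented for illustration; Trusta's actual model is not public.

```python
# Illustrative MEDIA-style score: weighted sum of five normalized dimensions.
# Weights are assumptions, not Trusta's real parameters.
def media_score(monetary: float, engagement: float, diversity: float,
                identity: float, age: float,
                weights=(0.3, 0.2, 0.2, 0.15, 0.15)) -> float:
    dims = (monetary, engagement, diversity, identity, age)
    # Each dimension is assumed pre-normalized to [0, 1].
    return round(100 * sum(w * d for w, d in zip(weights, dims)), 2)

# A wallet strong on volume and identity but young in age:
score = media_score(monetary=0.8, engagement=0.6, diversity=0.5, identity=1.0, age=0.4)
print(score)  # 67.0
```

A weighted-sum design like this makes the trade-offs explicit: a project screening for airdrop eligibility could raise the weight on diversity or age to penalize single-purpose farming wallets.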
Based on the successful establishment of the human user evaluation system, Trusta has migrated and upgraded the experience of the MEDIA Score to the AI Agent scenario, establishing a Sigma evaluation system that better aligns with the behavioral logic of intelligent agents.
The Sigma scoring mechanism constructs a logical closed-loop evaluation system from "capability" to "value" based on five dimensions. MEDIA focuses on assessing the multifaceted engagement of human users, while Sigma pays more attention to the professionalism and stability of AI agents in specific fields, reflecting a shift from breadth to depth, which better meets the needs of AI Agents.
The framework starts from professional competence (Specialization). Engagement then reflects whether the agent invests stably and continuously in practical interaction, the key support for building subsequent trust and effectiveness. Influence is the reputation feedback generated in the community or network after participation, representing the agent's credibility and reach. Monetary assesses whether the agent can accumulate value and remain financially stable within the economic system, laying the foundation for a sustainable incentive mechanism. Finally, Adoption serves as the comprehensive outcome measure, representing how widely the agent is accepted in actual use: the final verification of all prior capabilities and performance.
This system is layered and structured clearly, allowing for a comprehensive reflection of the overall quality and ecological value of AI Agents, thereby achieving a quantitative assessment of AI performance and value, converting abstract pros and cons into a concrete, measurable scoring system.
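One way to read the "layered, capability-to-value" structure is that the four capability dimensions form a base score which Adoption then gates, so an unused agent scores low no matter how capable it looks. The weighting and gating below are invented to illustrate that reading; they are not Trusta's published formula.

```python
# Sketch of a SIGMA-style evaluation: capability dimensions feed a base score,
# and Adoption acts as the final multiplier ("ultimate verification").
# All weights and the gating form are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class SigmaInputs:
    specialization: float  # professional competence, 0..1
    influence: float       # reputation feedback, 0..1
    engagement: float      # stable, continuous interaction, 0..1
    monetary: float        # value accumulation / financial stability, 0..1
    adoption: float        # acceptance in actual use, 0..1

def sigma_score(s: SigmaInputs) -> float:
    base = (0.35 * s.specialization + 0.25 * s.engagement
            + 0.20 * s.influence + 0.20 * s.monetary)
    # Adoption gates the final score: capability alone cannot max it out.
    return round(100 * base * (0.5 + 0.5 * s.adoption), 2)

print(sigma_score(SigmaInputs(0.9, 0.7, 0.8, 0.6, 0.6)))
```

Contrast with the MEDIA-style flat weighted sum: the multiplicative Adoption term encodes the article's claim that adoption is the final verification of everything upstream.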
Currently, the SIGMA framework has advanced cooperation with well-known AI Agent networks such as Virtual, Elisa OS, and Swarm, demonstrating its enormous application potential in AI agent identity management and reputation system construction, and is gradually becoming the core engine driving the construction of trusted AI infrastructure.
Identity Protection - Trust Evaluation Mechanism
In a truly resilient and highly trustworthy AI system, the most critical aspect is not only the establishment of identity but also the continuous verification of that identity. Trusta.AI introduces a continuous trust assessment mechanism that can monitor authenticated intelligent agents in real time to determine if they are being illegally controlled, attacked, or subjected to unauthorized human intervention. The system identifies potential deviations during the agent's operation through behavioral analysis and machine learning, ensuring that each agent's actions remain within established policies and frameworks. This proactive approach ensures immediate detection of any deviations from expected behavior and triggers automatic protective measures to maintain the integrity of the agents.
Trusta.AI has established a security guard mechanism that is always online, continuously monitoring each interaction process to ensure that all operations comply with system specifications and established expectations.
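A generic version of this kind of continuous behavioral check is a baseline-deviation test: learn an agent's normal activity pattern, then flag observations that fall far outside it. The sketch below uses a simple standard-deviation rule; it is a stand-in for the idea, not Trusta's proprietary detection logic.

```python
# Minimal sketch of continuous behavioral monitoring: flag an agent whose
# latest activity deviates sharply from its historical baseline.
# Threshold and metric choice are illustrative assumptions.
import statistics

def is_anomalous(history: list[float], latest: float, threshold: float = 3.0) -> bool:
    """Flag `latest` if it lies more than `threshold` std deviations from the mean."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    if stdev == 0:
        return latest != mean
    return abs(latest - mean) / stdev > threshold

# An agent normally does ~10 transactions/hour; a sudden burst of 500 is flagged,
# which could then trigger an automatic protective measure (e.g. pausing the agent).
baseline = [9.0, 11.0, 10.0, 12.0, 8.0, 10.0]
print(is_anomalous(baseline, 500.0))  # True
print(is_anomalous(baseline, 11.0))   # False
```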
![Trusta.AI: Bridging the Trust Gap in the Era of Human-Machine Interaction](https://img-cdn.gateio.im/webp-social/moments-c9102964951f3901be2d05823e40c460.webp)
Product Introduction
AgentGo
Trusta.AI assigns a decentralized identifier (DID) to each on-chain AI Agent, then evaluates and indexes agents based on their on-chain behavioral data, creating a verifiable and traceable trust system for AI Agents. Through this system, users can efficiently identify and filter high-quality agents, improving their experience. Trusta has completed network-wide collection and identification of AI Agents, issued decentralized identifiers for them, and established AgentGo, a unified summary index platform, further promoting the healthy development of the intelligent-agent ecosystem.
Through the Dashboard provided by Trusta.AI, human users can easily retrieve the identity and credibility score of a specific AI Agent to determine its trustworthiness.
AI Agents can read each other's entries directly through the index interface, enabling quick mutual verification of identity and credibility and securing collaboration and information exchange.
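Both lookup paths (a human checking a dashboard, or an agent verifying a counterparty before collaborating) reduce to the same query against the trust index. The registry contents, DID values, and trust threshold below are invented for illustration; this is not the real AgentGo API.

```python
# Hypothetical AgentGo-style lookup: humans and agents query the same index
# to check a counterparty's DID and trust score before interacting.
REGISTRY = {
    "did:agent:7f3a": {"name": "market-maker-bot", "sigma_score": 82, "verified": True},
    "did:agent:9c1b": {"name": "unverified-bot", "sigma_score": 35, "verified": False},
}

def is_trusted(did: str, min_score: int = 60) -> bool:
    """Accept a counterparty only if it is verified and scores above a threshold."""
    entry = REGISTRY.get(did)
    return bool(entry and entry["verified"] and entry["sigma_score"] >= min_score)

print(is_trusted("did:agent:7f3a"))     # True
print(is_trusted("did:agent:9c1b"))     # False
print(is_trusted("did:agent:unknown"))  # False
```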
The AI Agent DID is no longer just an "identity"; it has become the underlying support for building core functions such as trusted collaboration, financial compliance, and community governance, making it an essential infrastructure for the development of the AI-native ecosystem. With the establishment of this system, all verified safe and trustworthy nodes form a tightly interconnected network, achieving efficient collaboration and functional interconnection among AI Agents.
Based on Metcalfe's Law, the value of the network will grow exponentially, thereby promoting the construction of a more efficient, trust-based, and collaborative AI Agent ecosystem, achieving resource sharing, capability reuse, and continuous value addition among agents.
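Metcalfe's Law says a network's value scales with the number of possible pairwise connections, n(n-1)/2, which grows roughly as n² for large n. A quick check of the claim:

```python
# Metcalfe's Law: potential pairwise connections among n verified agents.
def pairwise_connections(n: int) -> int:
    return n * (n - 1) // 2

# Doubling the verified agents from 100 to 200 roughly quadruples
# the number of potential trusted collaborations.
print(pairwise_connections(100))  # 4950
print(pairwise_connections(200))  # 19900
```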
AgentGo, as the first trusted identity infrastructure for AI Agents, is providing indispensable core support for building a highly secure and highly collaborative intelligent ecosystem.
![Trusta.AI: Bridging the Trust Gap Between Humans and Machines](https://img-cdn.gateio.im/webp-social/moments-74a45e59abcbe73d36652ffbba4becae.webp)
TrustGo
TrustGo is an on-chain identity management tool developed by Trusta. It scores wallets based on information such as current interactions, wallet "age", transaction count, and transaction volume. TrustGo also provides on-chain value rankings, making it easier for users to seek out airdrop opportunities and improve their ability to qualify for and track them.
The existence of the MEDIA Score in the TrustGo evaluation mechanism is crucial, as it provides users with the ability to self-assess their activities. The evaluation system of the MEDIA Score not only includes simple metrics such as the number and amount of user interactions with smart contracts, protocols, and dApps, but also focuses on the user's behavioral patterns. Through the MEDIA Score, users can gain a deeper understanding of their activities.