Overview
NFTs are pretty cool, but how can we make them smarter? I’ve recently been assigned the task of implementing an on-chain solution for attaching intelligent metadata to new or existing NFTs and SFTs on The Root Network. This solution is a small piece of a larger puzzle that will allow AI and Blockchain tech to converge, drawing on each other’s strengths and moving us one step closer to a truly open and intelligent metaverse.
What makes a token intelligent?
Intelligence itself is a term used relatively loosely. I hope we can agree that the human race is intelligent, and perhaps even stretch the definition to include any living thing. Anyone who’s ever owned a dog will relentlessly swear by their intelligence, but where is the line drawn? Before the rise of AI this may have been an easier question to answer, but the definition of intelligence today is far from what it was even five years ago. Alan Turing used human intelligence as the baseline, stating that any machine capable of obscuring its digital soul from a human judge could thereafter be considered intelligent. This test seemed impossible to beat in 1950; however, less than a century later, the field of AI has progressed far beyond what was ever thought conceivable. AI today is far more than a convincing conversationalist: it’s a concept artist, a personal composer, a peer programmer, and much, much more. Congratulations AI, as humans we can confidently say that you are without a doubt intelligent.
OK, hear me out. I can already sense your loud thoughts questioning the relevance of this article, especially coming from a blockchain developer like myself. I mean, aren’t AI and Blockchain two completely separate fields?
Although AI and Blockchain are definitely two very different beasts, there are far more overlapping regions than you may initially think. At Futureverse we aim to explore these areas, envisioning a digital world powered by decentralized ownership and intelligent automation. We’re building tools and experiences for the open metaverse, not just for the tech-curious gamer, but for an entire planet of curious minds and intelligent individuals. Part of this goal is to provide limitless capability to an existing set of digital assets we know as NFTs. These Non-Fungible Tokens have proven to the world that they are valuable, and have already stirred up more interest and controversy than most modern-day pop stars, but their time is not over. By bringing the power of an almost-human-equivalent AI brain to these characters, we can see two massive worlds collide, producing splendid outcomes in an ever-expanding digital landscape.
Merging AI and Blockchain
Sounds easy, right? All we have to do is take the pre-existing NFTs on The Root Network, slap on some intelligence that would have made Alan Turing proud, and call it a day! Although I wouldn’t choose the word easy to describe this task, the role of the chain itself is only a small part of the big picture. The unfathomable complexity of our AI models is not suitable for 100% on-chain representation; it’s simply not what a blockchain is designed to do. We need to design a system that allows any end-user application to retrieve the intelligent data associated with a specific NFT (represented by a token ID) and apply its different aspects within their game or experience. For the keen-eyed amongst you, this may seem eerily familiar, as it shares a lot of similarities with the pre-existing metadata system.
For some background, when building protocol-level systems, every byte is valuable. The end user is charged for each nanosecond of processing power their transactions take to execute, so efficiency is key. This means that storing gigabytes of data per token is not feasible. For this reason, the actual NFT data such as images, videos, 3D models, and other relevant fields are stored behind a link to a third-party service such as IPFS or another data-hosting platform. That way, the only data we need to store on-chain is the URL for the data, and not the data itself.
The same applies to on-chain intelligence. Each individual NFI agent contains the following fields:
- Genome Matrix: Static data that represents the initial state, or DNA, of an AI agent; its fundamental behaviour and attributes stem from this matrix.
- Emotional Palette and Skills Matrix: A personality and attribute profiling index for each individual agent, derived from their unique genome mapping. These provide each agent with consistent behaviours and interaction patterns, a key component of multi-app interoperability and continuity.
- Murmur Matrix Card: The Murmur Matrix is a mutable data layer used to store contextual information about an intelligent asset. This includes recording its interaction data and in-world experiences, often referred to as memories.
The Murmur Matrix, Emotional Palette and Skills Matrix are mutable and evolve over the token’s lifetime, so this data will be stored entirely off-chain. The Genome Matrix, however, still ends up being around 2KB per token when heavily compressed. For a collection of 10,000 tokens that would be 20MB, roughly ten times the size of the Runtime itself! Not to mention the added performance cost of compressing and decompressing the data each time we access it. We can’t realistically store all of this on-chain, so we need to provide a link to each token’s data, easily retrievable by any third party that wishes to take advantage of it. Pushing the intelligence metadata off-chain also makes it far easier for the AI model to grow and change as it learns to adapt to each new app it gets used in.
Sounds simple enough, right? One thing to consider, though, is that this data now lives with some third party outside of our chain’s strict and verified ledger, so we need some way to validate that the data is what we say it is. That’s where the verification hash comes into play. The goal of the added verification_hash field is to provide a sha256 hash of the static contents of the data behind the metadata link. This hash is immutable, so if at any point the data stored behind the link is changed, it will no longer match and will therefore be considered invalid.
pub struct NFIMatrix {
    // Link to metadata stored on external servers
    pub metadata_link: BoundedVec<u8, MaxDataLength>,
    // Hash to verify integrity of metadata content
    pub verification_hash: H256,
}
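To make the check concrete, here’s a minimal sketch of how an off-chain consumer might verify fetched metadata against the on-chain hash. The is_metadata_valid helper and the sha2 crate usage are my own illustration, not part of the pallet itself.

use sha2::{Digest, Sha256};
use sp_core::H256;

/// Hypothetical off-chain helper: fetch the bytes behind metadata_link,
/// hash them, and confirm they still match the immutable on-chain hash.
fn is_metadata_valid(fetched_bytes: &[u8], verification_hash: &H256) -> bool {
    let computed = Sha256::digest(fetched_bytes);
    computed.as_slice() == verification_hash.as_bytes()
}

If even a single byte behind the link changes, the hashes diverge and the metadata should be treated as unverified.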
Please Welcome, Pallet NFI
We know what we need to do and roughly how to do it, but what’s the correct way to implement this new piece of work into The Root Network? Adding intelligence to an NFT sounds like it’s very closely tied to the NFT Pallet, so attaching the extra logic there would almost make sense, but we want to include intelligence for SFTs as well. If you read my article on the Marketplace Pallet, you will know that we prefer to keep the role of a pallet fairly concise and organised, separating logic out wherever it makes sense. That’s why we decided to encapsulate all of the new logic into a new pallet: pallet_nfi.
This pallet will store all of the metadata links, allow collection owners to make their collections NFI enabled, and manage and charge the added intelligence fees, redistributing a percentage back to the network through the Vortex. We can then call into this pallet from both the NFT and SFT pallets whenever a new token is minted, check whether the collection is NFI enabled, and if it is, charge the fee and create the intelligence metadata.
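To make that coupling a little more concrete, below is a rough sketch of the kind of hook the NFT and SFT pallets could call into on mint. The trait name and method signature are illustrative assumptions, not the actual Root Network interface.

use frame_support::dispatch::DispatchResult;

/// Illustrative hook: the NFT/SFT pallets call this after minting, and
/// pallet_nfi implements it, charging the fee and requesting intelligence
/// data only if the collection has NFI enabled.
pub trait NFIRequest {
    type AccountId;
    type CollectionId;
    type SerialNumber;

    fn request(
        who: &Self::AccountId,
        collection_id: Self::CollectionId,
        serial_numbers: &[Self::SerialNumber],
    ) -> DispatchResult;
}

Keeping the dependency behind a trait like this also means the NFT and SFT pallets never need to know how the intelligence data is produced, only that they must ask for it.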
Beauty! Now we’re done, right? We have our shiny new NFI Pallet linked up to our Runtime, ready to receive requests whenever a token is minted. The pallet also charges the fees with redistribution, so we can finally call it a day! But wait, we’re missing one crucial step: the actual metadata generation. I mentioned earlier that the metadata is far too big to be stored on-chain, and the same goes for its generation. It’s not easy to create a Genome, so we need some off-chain service that can pick up requests from the chain, generate the intelligence metadata, and then notify the chain about where that metadata is stored.
Enlisting some off-chain assistance
We’ll call this service the Relayer. Its job is to listen to the NFI Pallet for any events requesting new intelligent metadata to be generated for a token. When it picks up on an event (after a token is minted), it will generate the Genome Matrix and Angle Matrix, the key components required to create an AI brain that can learn and have unique attributes. After the Relayer stores this information somewhere, it provides a URL link to that data, alongside a hash of the static contents, back to the NFI Pallet by calling the submit_nfi_data extrinsic. The NFI Pallet verifies that the caller is our trusted relayer and stores the mapping between token ID and intelligence data safely in its pallet storage.
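As a heavily simplified sketch, that submission path could look something like this; the weight, storage names, and error variants are placeholders rather than the production code.

// Sketch only: relayer-gated submission of generated NFI metadata.
#[pallet::call_index(0)]
#[pallet::weight(T::WeightInfo::submit_nfi_data())]
pub fn submit_nfi_data(
    origin: OriginFor<T>,
    token_id: TokenId,
    data_item: NFIMatrix,
) -> DispatchResult {
    let who = ensure_signed(origin)?;
    // Reject anyone who isn't the trusted relayer account
    ensure!(Relayer::<T>::get() == Some(who), Error::<T>::NotRelayer);
    // Persist the metadata link and verification hash for this token
    NfiData::<T>::insert(token_id, data_item);
    Ok(())
}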
Multi-chain Token Support
So far this solution works great for tokens on The Root Network, but we don’t discriminate here: what if we want to allow NFI data to be stored for tokens across any chain? This would greatly lower the barrier to entry for games or experiences that have new or pre-existing collections across multiple chains, without requiring those collections to be bridged over to The Root Network. This can be achieved relatively easily; however, not every chain represents tokens the same way, which adds complexity. Let’s look at tokens on The Root Network. Collections and serial numbers are each represented by an unsigned 32-bit integer, so you’d have something like this:
// The Root Network Token ID
TokenId {
    CollectionId: 1124,
    SerialNumber: 12
}
But now let’s look at Ethereum, where instead of collection IDs, tokens are grouped by contract address: the address of the ERC721 smart contract containing the logic for those tokens. Serial numbers can also be 256-bit unsigned integers. This would look like:
// Ethereum Token ID
TokenId {
    CollectionAddress: 0xccc441ac31f02cd96c153db6fd5fe0a2f4e6a68d,
    TokenId: 12 // u256
}
These are the two most common types, but what about tokens on chains like Avalanche, Sui, Arbitrum and chains that don’t even exist yet? Is it possible to represent tokens on all of these chains with one data structure?
Although it may be impossible to create a type that supports every future chain, we can take a guess at the data types and support all existing types with a data structure that allows for flexibility of both the collection and token identifiers. The chain_id is included as well to easily distinguish which chain the token belongs to.
/// Token Id that can support many types of collection_id and serial_number
pub struct MultiChainTokenId<MaxByteLength: Get<u32>> {
    pub chain_id: u64,
    pub collection_id: GenericCollectionId<MaxByteLength>,
    pub serial_number: GenericSerialNumber<MaxByteLength>,
}

/// Collection ID type that supports multiple chains
pub enum GenericCollectionId<MaxByteLength: Get<u32>> {
    U32(u32),
    U64(u64),
    U128(u128),
    U256(U256),
    H160(H160),
    H256(H256),
    Bytes(BoundedVec<u8, MaxByteLength>),
    Empty,
}

/// Serial Number type that supports multiple chains
pub enum GenericSerialNumber<MaxByteLength: Get<u32>> {
    U32(u32),
    U64(u64),
    U128(u128),
    U256(U256),
    Bytes(BoundedVec<u8, MaxByteLength>),
}
Handling Edge Cases
While this pallet’s systems contain relatively simple logic, there are always edge cases that need to be addressed. Picture this: you create an NFT collection and your initial mint run sees 5,000 lucky people buy into your project. You decide later that you would like to leverage the new NFI Pallet and introduce AI components to your planned experience. You head over to our new friend the NFI Pallet and enable NFI for your collection, but what happens to those 5,000 tokens that already exist? Do they live on the chain as dumb tokens with no intelligent metadata? Does the collection owner pay for each one to have its metadata generated? Or does our relayer swallow the cost and generate them without charging anyone?
None of these solutions are ideal, but we still want to allow NFI to be enabled on existing collections. The best solution, and the one we went with, is adding a manual step, callable by the token owner, that allows them to request NFI metadata to be generated for their existing token. All other NFI metadata is handled by an automated process on mint, but by allowing the user to fill in the gaps, we can bring intelligence to all NFT and SFT collections contained within The Root Network.
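That manual path might look something like the following sketch; the call name, fee handling, and ownership check are illustrative placeholders.

// Sketch only: a token owner opts an existing token into NFI.
pub fn manual_data_request(
    origin: OriginFor<T>,
    token_id: TokenId,
) -> DispatchResult {
    let who = ensure_signed(origin)?;
    // Only the current token owner may request generation
    ensure!(Self::is_token_owner(&who, &token_id), Error::<T>::NotTokenOwner);
    // Charge the NFI fee, then emit the event the Relayer listens for
    Self::charge_nfi_fee(&who)?;
    Self::deposit_event(Event::DataRequest { token_id });
    Ok(())
}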
As with most things, this approach does come with a minor trade-off. We can no longer guarantee that a collection with NFI enabled has metadata generated for every token within it. A small price to pay, sure, but it does mean that any services using the new pallet will need to check for metadata existence and handle the case where it doesn’t exist.
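In practice that’s just an Option-handling pattern. A hypothetical consumer-side lookup, using the placeholder storage from the earlier sketch:

// Sketch: NFI data may simply not exist yet for older tokens.
match pallet_nfi::NfiData::<Runtime>::get(&token_id) {
    Some(matrix) => render_intelligent_agent(&matrix),
    // Minted before NFI was enabled and never manually requested
    None => render_plain_token(&token_id),
}

Here render_intelligent_agent and render_plain_token are stand-ins for whatever your app does with, or without, the intelligence data.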
Designing for the Future
So what now? We’ve provided the tools for tokens to become intelligent by attaching an evolving AI brain to them. We’ve created a new pallet that keeps track of all the intelligent metadata, and we’ve enlisted the help of a relayer to perform some heavy off-chain tasks. But how can we ensure this NFI Pallet stands the test of time? Currently the scope of this pallet encapsulates only one set of intelligent metadata, for the core use case of NFI-enabled agents, but down the line there may be room for more sets of metadata, equally important but separate by nature. We need to ensure our pallet supports these future use cases, to prevent massive refactors and storage migrations down the road.
To accommodate this potential use case, we abstracted the NFI metadata into its own NFI category, with room for any other additional data that may need to be attached in a similar way in the future. This data categorisation is what we call the NFISubType.
pub enum NFISubType {
    NFI,
    FuturePlans,
}

pub enum NFIDataType {
    NFI(NFIMatrix),
    Future(FutureData),
}
As you can see from the above snippet, we separate the NFI matrix data into one arm of an enum. That way we can easily categorise the data as NFI, and in the future have the freedom to add more types of intelligent metadata simply by adding extra enum variants. Although it may appear to be an over-complication at this point in time, when we only have one variant, I’m hoping future Jason will thank me for this one.
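One nice consequence is that storage can key on the sub-type too, so a new category is just a new enum variant plus a new key, with no migration of existing entries. Here’s a sketch of how the single-key placeholder map from the earlier sketches could grow into a double map:

// Sketch: keyed by (token, sub-type), adding a new NFISubType variant
// never requires migrating the data already in storage.
#[pallet::storage]
pub type NfiData<T: Config> = StorageDoubleMap<
    _,
    Twox64Concat, MultiChainTokenId<T::MaxByteLength>,
    Twox64Concat, NFISubType,
    NFIDataType,
    OptionQuery,
>;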
What’s next for Pallet NFI?
The NFI Pallet is designed to be used as a core identity component of the Think Agent Standard, known as the Soul. As an evolution of the ASM protocol, Think endeavours to create an open protocol that can keep up with the emergent needs of a quickly evolving agentic internet. The previous section talked about planning for the unknown, but are there any future improvements that we know a bit more about? This work is designed to be the foundation for intelligent support on The Root Network, and although I’m not about to drop a fat chunk of alpha at the end of this dev diary, I can speak a little on our plans. I’ve already hinted that the relayer’s design can be improved upon, so eventually that system will be fleshed out and made more decentralized, to satisfy the philosophy of what we’re building. The dev experience for people wishing to utilize this technology is also in its very early days, so a lot of our future effort will go into streamlining the process. These efforts could involve creating an NFI SDK for users to mint tokens with NFI data on The Root Network, and potentially any other chain.
But for now, our brand-new NFI Pallet is serving its purpose nicely, and our chain is leading the charge with decentralized, AI-compatible Non-Fungible Tokens.
Learn about more features and custom pallets The Root Network has to offer here. To stay up-to-date with developments and join our growing community, follow us on X and join our Discord.