FUZZY is an on-chain art experiment about neural reconstruction, not perfect copying. A tiny image model was trained on iconic NFTs and uploaded into Solana bytecode. Every metadata read pulls the model's render live, then counts the pixels where the render and the original disagree.
The render is the artwork — a tiny neural reconstruction of the underlying source NFT. The original and the fuzzy field are shown side by side, so you can see the source and the pixels where the model disagreed.
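The disagreement count can be sketched as a pixelwise exact-match comparison. This is a minimal illustration, not the project's actual code; the function name `fuzzy_score` and the uint8 RGB representation are assumptions.

```python
import numpy as np

def fuzzy_score(original: np.ndarray, render: np.ndarray) -> int:
    """Count pixels where the neural render disagrees with the source NFT.

    Assumes both images are (H, W, 3) uint8 arrays on the same palette.
    """
    assert original.shape == render.shape
    # A pixel "disagrees" if any of its channels differ.
    return int(np.any(original != render, axis=-1).sum())

# Toy 2x2 example: exactly one pixel differs.
orig = np.zeros((2, 2, 3), dtype=np.uint8)
rend = orig.copy()
rend[0, 0] = [255, 0, 0]
print(fuzzy_score(orig, rend))  # 1
```

The fuzzy field shown next to the original is then just the boolean disagreement mask from `np.any(original != render, axis=-1)`.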
Not a transformer. Per pixel, the model picks one of 18 candidate colors from the exact 222-color palette. A learned token embedding multiplies into a per-pixel head; the argmax selects the slot; that slot indexes into a precomputed candidate table.
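The render step above can be sketched with plain array operations. Only the slot count (18) and palette size (222) come from the description; the image size, embedding dimension, and random weights below are placeholder assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

H, W = 24, 24   # image size: an assumption, not specified by the project
D = 16          # embedding dimension: also an assumption
K = 18          # candidate color slots per pixel (from the description)
P = 222         # palette size (from the description)

embedding = rng.normal(size=D)                       # learned token embedding
head = rng.normal(size=(H, W, K, D))                 # per-pixel head weights
candidates = rng.integers(0, P, size=(H, W, K))      # precomputed candidate table
palette = rng.integers(0, 256, size=(P, 3), dtype=np.uint8)  # 222-color palette

logits = head @ embedding          # (H, W, K): embedding multiplied into the head
slots = logits.argmax(axis=-1)     # (H, W): argmax selects a slot per pixel
idx = np.take_along_axis(candidates, slots[..., None], axis=-1)[..., 0]
image = palette[idx]               # (H, W, 3): slot indexes into the palette

print(image.shape)  # (24, 24, 3)
```

Because the output is an argmax over a fixed candidate table, every rendered pixel is guaranteed to come from the exact 222-color palette — there is no continuous color prediction to quantize.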
The oracle runs the model off-chain whenever metadata is requested, then signs the result. The Solana program verifies the signature and stores the latest fuzzy score on-chain. The bigger the disagreement with the original, the higher your fuzzy.
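The sign-then-verify flow can be sketched as follows. Solana programs verify ed25519 signatures against the oracle's public key; the HMAC below is a stdlib stand-in for that scheme, and every name here (`oracle_sign`, `program_verify`, the key, the mint string) is hypothetical.

```python
import hashlib
import hmac
import json

ORACLE_KEY = b"demo-oracle-secret"  # stand-in; the real oracle holds an ed25519 keypair

def oracle_sign(mint: str, fuzzy: int) -> dict:
    # Off-chain: render, score, then sign (mint, score) so the chain can trust it.
    msg = json.dumps({"mint": mint, "fuzzy": fuzzy}, sort_keys=True).encode()
    sig = hmac.new(ORACLE_KEY, msg, hashlib.sha256).hexdigest()
    return {"msg": msg, "sig": sig}

def program_verify(payload: dict) -> bool:
    # On-chain analogue: a Solana program would instead verify an ed25519
    # signature against the oracle's stored public key before updating state.
    expected = hmac.new(ORACLE_KEY, payload["msg"], hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, payload["sig"])

payload = oracle_sign("FuzzyMint111", 412)
print(program_verify(payload))  # True
```

The important property is the same in both schemes: the program never runs the model, it only checks that the score it stores was attested by the oracle's key.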
Two FUZZYs of the same merge level fuse into one. The survivor keeps its original NFT anchor (the fuzzy score is still computed against it). The donor's embedding gets averaged into the survivor's; the model renders from that blended embedding from then on. The donor token is burned.
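The fusion step reduces to averaging two embedding vectors. A minimal sketch, assuming a plain elementwise mean and assuming (not stated in the description) that a successful merge bumps the survivor's merge level by one:

```python
import numpy as np

def merge(survivor_emb: np.ndarray, donor_emb: np.ndarray,
          survivor_level: int, donor_level: int):
    """Fuse a donor FUZZY into a survivor; the donor token is burned afterwards."""
    # Only tokens of the same merge level can fuse.
    assert survivor_level == donor_level, "merge requires equal merge levels"
    # Donor's embedding averaged into the survivor's; renders use this blend.
    blended = (survivor_emb + donor_emb) / 2.0
    # Level increment is an assumption for illustration.
    return blended, survivor_level + 1

a = np.array([1.0, 3.0])
b = np.array([3.0, 5.0])
emb, level = merge(a, b, survivor_level=1, donor_level=1)
print(emb, level)  # [2. 4.] 2
```

Note the asymmetry: the blend changes what the model renders, but the survivor's NFT anchor is unchanged, so the fuzzy score keeps measuring disagreement against the original source image.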