SolidityBench by IQ has launched as the first leaderboard for judging LLMs on Solidity code generation. Available on Hugging Face, it introduces two new benchmarks, NaïveJudge and HumanEval for Solidity, designed to evaluate and rank how proficiently AI models generate smart contract code.
Developed by IQ's BrainDAO as part of its forthcoming IQ Code suite, SolidityBench serves to refine its own EVMind LLMs and compare them against generalist and community-created models. IQ Code aims to offer AI models tailored for generating and auditing smart contract code, addressing the growing need for secure and efficient blockchain applications.
As IQ told CryptoSlate, NaïveJudge takes a novel approach by tasking LLMs with implementing smart contracts from detailed specifications derived from audited OpenZeppelin contracts, which provide a gold standard for correctness and efficiency. The generated code is evaluated against a reference implementation using criteria such as functional completeness, adherence to Solidity best practices and security standards, and optimization efficiency.
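The article does not reproduce an actual benchmark task, but a NaïveJudge-style assignment might look like the hypothetical sketch below: the model receives a written specification (here, a minimal ERC20-style transfer), and its output is then graded against the audited OpenZeppelin reference. The contract and function names are illustrative, not taken from SolidityBench.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.20;

/// Hypothetical NaïveJudge-style task: implement transfer() from a written
/// spec; the result would be compared against OpenZeppelin's audited ERC20.
contract MinimalERC20 {
    mapping(address => uint256) public balanceOf;
    uint256 public totalSupply;

    event Transfer(address indexed from, address indexed to, uint256 value);

    constructor(uint256 initialSupply) {
        totalSupply = initialSupply;
        balanceOf[msg.sender] = initialSupply;
        emit Transfer(address(0), msg.sender, initialSupply);
    }

    /// Spec: revert on transfers to the zero address or on insufficient
    /// balance; emit Transfer on success -- the edge cases a judge checks.
    function transfer(address to, uint256 value) external returns (bool) {
        require(to != address(0), "transfer to zero address");
        uint256 fromBalance = balanceOf[msg.sender];
        require(fromBalance >= value, "insufficient balance");
        unchecked {
            balanceOf[msg.sender] = fromBalance - value;
        }
        balanceOf[to] += value;
        emit Transfer(msg.sender, to, value);
        return true;
    }
}
```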
The evaluation process uses advanced LLMs, including different versions of OpenAI's GPT-4 and Claude 3.5 Sonnet, as impartial code reviewers. They assess the code against rigorous criteria, including implementation of all key functionality, handling of edge cases, error management, correct syntax usage, and overall code structure and maintainability.
Optimization considerations such as gas efficiency and storage management are also evaluated. Scores range from 0 to 100, providing a comprehensive assessment across functionality, security, and efficiency that mirrors the complexities of professional smart contract development.
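To make the optimization criteria concrete, the sketch below shows two textbook patterns a judge scoring gas efficiency and storage management would plausibly reward. It is illustrative only, not taken from the benchmark; the contract and struct names are invented.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.20;

/// Illustrative example of gas and storage patterns a judge model
/// might score (not from SolidityBench itself).
contract GasPatterns {
    // Storage packing: uint128 + two uint64 fields fit in one 32-byte
    // slot, so writing all three costs one SSTORE instead of three.
    struct Position {
        uint128 amount;
        uint64 openedAt;
        uint64 rateBps;
    }

    mapping(address => Position) public positions;
    address[] public holders;

    // Caching the array length in memory avoids re-reading it from
    // storage (an SLOAD) on every loop iteration.
    function totalAmount() external view returns (uint256 sum) {
        uint256 len = holders.length;
        for (uint256 i = 0; i < len; ++i) {
            sum += positions[holders[i]].amount;
        }
    }
}
```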
Which AI models are best for Solidity smart contract development?
Benchmarking results showed that OpenAI's GPT-4o model achieved the highest overall score of 80.05, with a NaïveJudge score of 72.18 and HumanEval for Solidity pass rates of 80% at pass@1 and 92% at pass@3.
Interestingly, newer reasoning models such as OpenAI's o1-preview and o1-mini were beaten to the top spot, scoring 77.61 and 75.08, respectively. Models from Anthropic and xAI, including Claude 3.5 Sonnet and grok-2, delivered competitive performance, with overall scores hovering around 74. Nvidia's Llama-3.1-Nemotron-70B scored lowest in the top 10 at 52.54.
Per IQ, HumanEval for Solidity adapts OpenAI's original HumanEval benchmark from Python to Solidity, comprising 25 tasks of varying difficulty. Each task includes corresponding tests compatible with Hardhat, a popular Ethereum development environment, enabling the generated code to be compiled and tested reliably. The evaluation metrics, pass@1 and pass@3, measure a model's success on the first attempt and over multiple tries, offering insight into both precision and problem-solving ability.
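The article does not specify how SolidityBench computes these metrics, but the standard definition from the original HumanEval paper (Chen et al., 2021) is the unbiased estimator below, where n samples are generated per task and c of them pass the tests:

```latex
\text{pass@}k \;=\; \mathbb{E}_{\text{tasks}}\!\left[\, 1 - \frac{\binom{n-c}{k}}{\binom{n}{k}} \,\right]
```

With n = k, this reduces to the fraction of tasks for which at least one of the k generations passes; pass@1 with a single sample per task is simply the share of tasks solved on the first try.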
Goals of using AI models in smart contract development
By introducing these benchmarks, SolidityBench seeks to advance AI-assisted smart contract development. It encourages the creation of more sophisticated and reliable AI models while giving developers and researchers valuable insight into AI's current capabilities and limitations in Solidity development.
The benchmarking toolkit aims to advance IQ Code's EVMind LLMs while also setting new standards for AI-assisted smart contract development across the blockchain ecosystem. The initiative hopes to address a critical need in the industry, where demand for secure and efficient smart contracts continues to grow.
Developers, researchers, and AI enthusiasts are invited to explore and contribute to SolidityBench, which aims to drive the continual refinement of AI models, promote best practices, and advance decentralized applications.
Visit the SolidityBench leaderboard on Hugging Face to learn more and begin benchmarking Solidity generation models.