OpenAI Open-Sources PaperBench, Redefining Top-Tier AI Agent Evaluation
Jin10 reported on April 3 that, at 1 AM that day, OpenAI released a new AI agent evaluation benchmark, PaperBench. The benchmark assesses agents' capabilities in search, integration, and execution by requiring them to reproduce top papers from the 2024 International Conference on Machine Learning, which involves understanding the paper's content, writing code, and running experiments. According to the test data OpenAI released, agents built on well-known large models still cannot outperform top machine-learning PhDs, though they are very helpful for assisting with learning and understanding research content.