GibsonAI Hackathon Results: What We Learned About One-Shot Development
May 29, 2025
Our first-ever GibsonAI hackathon challenged developers to build complete applications using AI and our database platform in a single prompt. The results taught us a lot about what works (and what doesn't) in AI coding.
Thanks to everyone who participated. You didn't just build apps; you helped us understand how GibsonAI works in practice and what developers actually need from a database service like ours.
The Challenge
Build a full-stack application using an AI-enabled IDE, with GibsonAI handling your database through our new MCP server. One prompt, no follow-ups, no gradual iteration. We judged submissions on functionality, creativity, use of GibsonAI's features, and whether they actually worked as one-shot (or minimal-shot) builds.
Notable Submissions
PromptCraft - The Near Miss
This entrant built a platform for sharing AI coding prompts. Good concept, solid execution in most areas, but it consistently failed at dark mode and got tangled up in its integrations. Multiple attempts produced designs that either looked broken or were only almost functional. Sometimes the smallest features become the biggest obstacles in one-shot development.
OnboardEase - Great Idea, Execution Issues
This submission tackled employee onboarding automation across IT, HR, and other departments. The AI built a solid backend with GibsonAI integration, but across multiple attempts the app never actually used that backend. We think the prompt's instructions to build an "MVP" or "Demo" made the AI prioritize quick visual results over proper database connections.
Gibby Got Back - The Winner
Our eventual winner's backup utility won because it worked perfectly on the first try and used GibsonAI in a genuinely creative way. No fancy UI, no authentication complexity, just solid file backup functionality. The prompt was clean and focused, and the result did exactly what it promised. The way it chunked files for storage and retrieval in a relational database was clever.
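We can't reproduce the winning code here, but the general pattern, splitting a file into fixed-size chunks stored as ordered rows and reassembling them on restore, might look something like the sketch below. The chunk size and the row-insert/row-fetch callbacks are our own illustrative stand-ins, not the winner's actual GibsonAI wiring.

```python
import base64

CHUNK_SIZE = 64 * 1024  # 64 KB per row; an arbitrary choice for this sketch


def backup_file(path, insert_chunk):
    """Split a file into ordered chunks and hand each one to a row-insert callback.

    `insert_chunk` is a hypothetical stand-in for whatever writes a row to the
    database, e.g. a call against a GibsonAI-generated endpoint.
    """
    with open(path, "rb") as f:
        index = 0
        while True:
            chunk = f.read(CHUNK_SIZE)
            if not chunk:
                break
            insert_chunk(
                file_name=path,
                chunk_index=index,
                data=base64.b64encode(chunk).decode("ascii"),  # text-safe for a TEXT column
            )
            index += 1


def restore_file(path, fetch_chunks):
    """Reassemble a file from rows yielded in chunk_index order by `fetch_chunks`."""
    with open(path, "wb") as f:
        for row in fetch_chunks(file_name=path):
            f.write(base64.b64decode(row["data"]))
```

Keeping each chunk as its own row means every insert stays small, and restores or integrity checks become plain ordered queries rather than custom storage logic.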
Tariff Calculator - Great Idea, Variable Results
This submission's import cost calculator sought to coordinate multiple APIs (tariff data, OpenAI, and GibsonAI) in one shot. The AI likely got confused juggling all the connections and produced unclear responses. Good lesson in keeping scope manageable for one-shot builds and not relying on too many external APIs. Great app after a couple of follow-up prompts!
SpriteGenerator - So Close
This contestant’s game sprite generator worked beautifully except for the final step: actually generating and saving the image sequence through the OpenAI API. About 90% of the functionality worked perfectly, and multiple attempts to replicate the project got extremely close. Our judges also reported that they loved the clever use of ChatGPT’s image capabilities.
ReceiptSplit - Feature Overload
This contestant built an expense-sharing app that was almost too complete. It included features that necessitated a more complex backend, such as bill splitting, user management, trip creation, receipt percentages, and more. The AI consistently built impressive partial apps but would drop different features each time. Too much scope for a single prompt, but follow-up prompts again created an impressive app.
StorFlo - Surprisingly Solid
This kanban board prompt worked consistently across multiple attempts. The submitter made some smart choices: avoiding auth complexity, focusing on a well-understood problem (task management with kanban), and explicitly telling the AI to complete everything in one shot. In fact, the prompt reiterated that instruction multiple times!
LocalStoryVault - The Marathon
Honorable mention goes to this location-based storytelling platform that used the Taskmaster MCP server to automate additional prompts (technically more prompts, but who’s counting). Eight hours later, it was still building and improving. Showed what's possible when AI has persistence and an elaborate plan.
What We Learned
Don't mess with folder structure. AI has opinions about project organization. Fighting them makes everything harder. Let Claude cook!
Be specific about outcomes, not methods. Tell the AI what you want, not how to build it. "Build user authentication" works better than "Use Clerk with OAuth and JWT tokens." The AI may start out implementing things the way you prescribed, but it will often forget those details and error out later.
One API at a time. Each additional service increases complexity. Keep external dependencies minimal to avoid potential points of failure.
Use stable versions. Don't micromanage, but specify stability over bleeding edge. Specifying stable versions also keeps the AI from defaulting to outdated versions baked into its training data.
Keep basics simple. Auth, navigation, and standard features should be straightforward. The AI has strong preferences here, and being too prescriptive will break things.
Repeat important instructions. In one-shot development, key points need emphasis. Say "complete the entire app" multiple times. LLMs are conversational by nature; you must disabuse them of this tendency.
Claude works best. Other models either check in too often or randomly modify working code. Claude seemed to be the best at building from 0-1 unsupervised.
Make rules concise. Every rule costs tokens and goes out with every API call.
Give design guidance. Visual styling is where you can give detailed or vague instructions. The AI won't argue with specific design requirements, but if left to its own devices it might give you unexpected outcomes.
Awards
Feedback Award: Doug (ReceiptSplit) - Provided valuable feedback on our MCP server and API design, and suggested TypeScript support. Appreciated GibsonAI's opinionated approach and pointed out some edge-case flaws in our API implementation.
OneShot Award: Spencer Francisco (StorFlo) - Most complete one-shot application. Benefited from focusing on a familiar problem, avoiding auth, and repeatedly instructing the AI to never use mock data.
Grand Prize: Mario Zigliotto (Gibby Got Back) - Not the flashiest app, but the most elegant solution. Great prompt, creative database usage, and flawless execution. Proved that sometimes the best interface is a command line and that sometimes the complexity needs to be in the instructions, not in the implementation.
Takeaways
This hackathon proved to us that GibsonAI works well as a database layer for rapid prototyping. Developers can focus on their core ideas instead of database setup and management. The one-shot constraint also sharpened our understanding of what works in AI-assisted development and what doesn't.
The next hackathon is already in planning. Check out all submissions at hackathon.gibsonai.com/submissions to see what people built!