From Slides to Software in One Evening
After two years of presenting the Smart Building Maturity Model at Cisco Live, I finally turned the PowerPoint into a working application. It took one evening.
For two years, the Smart Building Maturity Model lived in a PowerPoint deck. Andrew Lu and I presented it at Cisco Live EMEA in Amsterdam. We refined it at BICSI, at the Real Estate Accelerator workshops, and at Nashville AI Week. The framework resonated — five stages of building intelligence, from siloed systems to AI-driven operations. Customers would nod along, take photos of the slides, ask for the PDF.
Then they'd ask the question we couldn't answer yet: "Is there an app for this?"
There wasn't. The maturity model was a conceptual framework. Useful in a conference room. Less useful when a building owner is standing in front of their building management system (BMS) at 9pm trying to figure out where they actually land on the spectrum.
The Prompt, Not the Code
I didn't sit down to write an application. I sat down to write a prompt.
The CLAUDE.md file is where you define the project for Claude Code — the tech stack, the architecture decisions, the conventions. I spent a couple of hours on a Tuesday evening getting that right. Next.js with the App Router. Supabase for auth and data. OpenRouter for the LLM layer. A scoring engine that's deterministic at its core but uses AI to generate contextual questions. White-label theming from day one, because this needs to work for Cisco, for partners, for customers running their own assessments.
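To make that concrete, here is a minimal sketch of what a CLAUDE.md for a project like this might contain. Every heading, name, and convention below is illustrative — this is not the actual file, just the shape of one:

```markdown
# Smart Building Maturity Assessment — Project Guide

## Stack
- Next.js (App Router), TypeScript
- Supabase: auth + Postgres with row-level security
- OpenRouter for LLM calls (question generation, recommendations)

## Architecture decisions
- Scoring is deterministic; the LLM only generates questions and narrative text.
- White-label theming via a per-organization theme config; no hard-coded branding.
- Role hierarchy: admin > partner > customer; enforce in RLS policies, not just UI.

## Conventions
- Server components by default; client components only where interactivity requires.
- All LLM calls go through one wrapper so models can be swapped per deployment.
```

The point is less the specific entries than that decisions like "deterministic scoring, AI-generated questions" are written down before any code exists.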
The scaffolding took shape as a conversation. Not writing code line by line, but describing what the platform needed to do and letting Claude Code build it. An organization and portfolio hierarchy. Role-based access for Cisco admins, partners, and end customers. An adaptive question engine that targets questions at the right maturity level based on previous answers. Domain scoring with confidence tracking.
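The "deterministic at its core" scoring can be surprisingly small. Here is a hypothetical sketch of domain scoring with confidence tracking — the types, weights, and confidence formula are all invented for illustration, not the platform's actual math:

```typescript
// Hypothetical sketch: deterministic domain scoring with confidence tracking.
// Each answered question contributes a stage estimate (1–5) and an evidence weight.
type Answer = { stage: number; weight: number };

function scoreDomain(answers: Answer[]): { score: number; confidence: number } {
  if (answers.length === 0) return { score: 0, confidence: 0 };

  const totalWeight = answers.reduce((sum, a) => sum + a.weight, 0);

  // Score is the weighted mean of the stage estimates.
  const score =
    answers.reduce((sum, a) => sum + a.stage * a.weight, 0) / totalWeight;

  // Confidence grows with accumulated evidence and shrinks when answers disagree.
  const variance =
    answers.reduce((sum, a) => sum + a.weight * (a.stage - score) ** 2, 0) /
    totalWeight;
  const confidence = Math.min(1, totalWeight / 5) * (1 / (1 + variance));

  return { score, confidence };
}
```

Under this toy formula, three consistent Stage 3 answers land at a score of 3 with higher confidence than a single Stage 3 answer would — which is exactly the signal the question engine needs to decide whether to keep probing a domain.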
I pushed the initial scaffold, reviewed what was there, and went to bed.
The Morning After
I woke up to a working prototype. Not a mockup. Not a wireframe. A running application with authentication, portfolio management, building assessments, and an AI-powered question engine that generates contextual questions based on building type and current maturity signals.
The question engine is the part that surprised me. It doesn't just pull from a static bank. It looks at your current estimated score, your confidence level, what you've already answered, and generates the next question targeted at the maturity range where it needs more signal. Answer something that suggests Stage 3 capabilities? The next question probes Stage 4. Low confidence in a domain? Keep asking. High confidence? Move on.
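The targeting loop described above reduces to a few lines of decision logic. This is a hypothetical sketch, not the actual engine — the threshold and the "probe one stage higher" rule are assumptions made for illustration:

```typescript
// Hypothetical sketch of adaptive question targeting. Given a domain's current
// estimated stage (1–5) and confidence (0–1), decide which maturity stage the
// next AI-generated question should probe. Thresholds are illustrative.
type DomainState = { estimatedStage: number; confidence: number };

const CONFIDENCE_THRESHOLD = 0.7;
const MAX_STAGE = 5;

function nextTargetStage(state: DomainState): number | null {
  // High confidence: enough signal in this domain, move on.
  if (state.confidence >= CONFIDENCE_THRESHOLD) return null;

  // Low confidence: probe one stage above the current estimate, so the answer
  // either confirms where the building sits or reveals more capability.
  return Math.min(MAX_STAGE, Math.round(state.estimatedStage) + 1);
}
```

The returned target stage would then go into the LLM prompt — roughly "generate a question that discriminates Stage N from Stage N+1 for this building type" — so the model writes the question, but deterministic logic decides where to aim it.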
Over the next couple of sessions I added the features that turn a prototype into something presentable: PDF and PPTX report export so assessors can hand a client a polished deliverable. An AI recommendations engine that analyzes domain gaps and generates prioritized improvement plans. An analytics dashboard. OAuth SSO.
See It Working
Here's a user acceptance testing (UAT) walkthrough of the platform as it stands today:
Watch the full assessment walkthrough on Vidcast
The full assessment flow — login, create a portfolio and building, start an assessment, answer AI-generated questions, review scores and gap analysis, generate recommendations, export a report.
Why This Matters Beyond This Project
The Smart Building Maturity Model needed to be software from the beginning. A framework that lives in slides is a framework that gets used once, at a conference, and then filed away. A framework that lives in an application gets used repeatedly, generates data, and improves over time. Every assessment adds to the question bank. Every scored domain refines the model's understanding of what "Stage 3 HVAC" actually looks like across different building types.
But the bigger realization is about the economics of building tools like this. Two years ago, turning a consulting framework into a full-stack application meant hiring a team, writing specs, waiting months. The calculus was always "is the framework valuable enough to justify a six-figure development effort?" Usually the answer was no, and the slides stayed as slides.
That calculus has changed. A well-written prompt file and a couple of evening sessions produced a working platform with adaptive AI questioning, scoring, export, analytics, and multi-provider auth. Not a toy. Not a demo. A deployable application with row-level security, proper role hierarchies, and production-grade infrastructure.
Every consulting framework, every workshop methodology, every assessment rubric that's been trapped in PowerPoint... the cost of turning it into software just dropped by an order of magnitude. The bottleneck isn't engineering capacity anymore. It's knowing what to build and being able to describe it precisely.
Two years of conference presentations gave me that precision. One evening with Claude Code turned it into software.