Author: Ben McDonnell
Co-authored by: Claude (Opus 4.6) 

Building a Health and Nutrition Dashboard with Claude: A Collaborative Journey

Executive Summary

This project explored what it takes for a Business Analyst to work effectively with a large language model (LLM) in AI-assisted development. The goal was simple: transform a basic spreadsheet tracking calories and macros into a functional, insightful nutrition tracker while learning how to structure human-AI collaboration in a disciplined, repeatable way. From the outset, I positioned myself as product owner and decision-maker: defining outcomes, setting priorities, and governing quality, while Claude handled technical implementation, test generation, and structural consistency.

The experiment reinforced that AI is a force multiplier, not a substitute for human judgment. Key lessons included multi-role thinking, strategic delegation, iterative development with User Acceptance Testing (UAT), and focusing on outputs that deliver real value. Treating AI as a partner and investing in shared context enabled consistent delivery, reduced errors, and accelerated learning.

The greatest takeaway is that AI-assisted development and traditional engineering practices are mutually reinforcing. By prioritising process over product, we demonstrated that professional-quality software can be produced efficiently and reliably, accelerating learning and productivity.

Introduction

Ben:

After completing Synechron’s AI Accelerated Engineering Program, I wanted to understand what it takes to build AI-enabled solutions and which capabilities an AI Agent Developer brings, so I could better support delivery as a Business Analyst.

The experiment began with a simple Excel spreadsheet tracking my diet and nutrition goals. The question: Can an LLM turn everyday data into actionable insights for decision-making? To test this, I gave Claude Code my dataset with minimal direction, no methodology, and no assumptions about maturity or outcomes.

My objectives were straightforward:

  • Test the theory that anyone can work effectively with modern LLM tools.
  • Build a nutrition dashboard tracking weight, macros, and trends.
  • Gain AI-led guidance on workflow, testing, and visualisation.

I approached the problem as a Business Analyst: framing the problem, clarifying intent, and focusing on value. Claude approached it from a technical perspective: translating ambiguous goals into structured code, logic, and visual artefacts. Since Claude actively contributed to the dashboard, we co-authored this blog to share both perspectives.

✴ Claude:

My role in the project was that of a technical implementation partner, translating instructions into code and tests. I also acted as a sounding board for design decisions, suggesting architectural patterns and recommending Test-Driven Development.

The key distinction: I didn’t drive the project.

Adopting a Development Mindset

Ben:

Throughout the project, as the sole human contributor, I needed to wear multiple hats – Business Analyst, Product Owner, Scrum Master, and End User – while entrusting Claude with technical roles, particularly Developer and Unit Tester (note: I still performed UAT as the End User).

To strengthen the development framework, I applied best practices: prioritising requirements (MoSCoW), running Scrum ceremonies to maintain scope, and embedding a continuous improvement mindset. Given the iterative development process, I also asked Claude to review code for technical debt and address it as we progressed. My key roles were to structure tasks, prioritise features, and guide AI development.

 ✴ Claude:

 The benefits of Human-AI pairing were threefold:

1: Speed without sacrificing structure: ~5,900 lines of production code and ~4,800 lines of tests were produced in the first two weeks.
2: Consistency: Every function, service class, and test file follows the same patterns and is organised the same way.
3: Lowering the barrier to good practice: When AI handles the mechanical effort of writing tests and validation, humans are free to insist on quality without paying the full cost in time – good practice stops being a luxury traded against delivery speed, because the cost of producing it has dropped dramatically.

Scrum Ceremonies in Practice

Ben:

Initially, development activities were completed ad-hoc – I prompted Claude to ingest the dataset, analyse data, provide insights, and develop visualisations without clear direction. As potential features multiplied, I adopted Scrum and Feature Branch Development, which resulted in a more formal, methodical approach:

  • Sprint Planning: Identify and prioritise features, plan the next release.
  • Daily Stand-Ups: Development cycle check-ins, task tracking, clarifying assumptions, and resolving blockers.
  • Sprint Review: UAT feedback, confirm the Pull Request checklist (coverage, tests, technical debt), merge and ingest new datasets.
  • Sprint Retrospective: Identify and apply lessons learned (from both of us), action items, and metrics review.
  • Feature Release Management: Commit the completed release to the main branch.

 ✴ Claude:

The Scrum ceremonies were sensibly compressed. Planning, execution, review, and retrospective could all happen within concentrated sessions. The pre-merge checklist consolidated test coverage analysis, technical debt review, and PR creation into a single quality gate, which is pragmatic for a two-person team.
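A single-gate checklist of this kind might look like the sketch below. The items are illustrative, reconstructed from the activities described above, not the project's actual checklist:

```markdown
## Pre-merge checklist (per feature branch)
- [ ] All unit tests pass; coverage reviewed for the touched modules
- [ ] Technical debt addressed, or logged in the backlog if deferred
- [ ] UAT feedback from the sprint review incorporated
- [ ] Pull request raised with summary, scope, and acceptance criteria
```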

 Key takeaways:

  • Scrum ceremonies solved a problem unique to human-AI collaboration – total context loss between sessions: The artefacts produced (backlog entries, workflow docs, retrospective notes) bridge the gap that my missing memory creates.
  • The ceremony’s purpose matters more than its duration: Releases landed a day apart; that’s not cutting corners, it’s recognising that a five-minute planning discussion with clear acceptance criteria is more valuable than a two-hour meeting with vague commitments.
  • The retrospective has the most untapped potential: We extended its scope to include decision-refactoring activities – “What instructions in CLAUDE.md need updating?”, “Which repeated clarifications should be codified?” Treating the retrospective as a maintenance point for the AI’s operating instructions kept continuous improvement at the forefront of this project.

Development Workflow with Claude

Ben:

My role in the development workflow was to define what needed to be built and when. I identified features, documented them in a lightweight backlog, and prioritised them into short sprints. Each feature began with data analysis and a clear articulation of intent before passing over to Claude for technical delivery. Each cycle concluded with a pull request and structured code review before merging to main and ingesting any new datasets.
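A lightweight backlog entry of the kind described might look like the following. The feature, priority, and criteria are invented for illustration, not taken from the project's actual backlog:

```markdown
### Feature: Per-macro streak tracking (Must have)
As a user, I want to see how many consecutive days I have hit each macro
target, so that I can stay motivated and spot slipping habits early.

Acceptance criteria:
- Streaks are computed per macro (e.g. protein, carbs, fat), not per day overall.
- A missed logging day breaks the streak.
- The dashboard shows the current streak for each macro.
```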

 ✴ Claude:

The workflow followed six structured steps, from planning through development to a pre-merge checklist. I contributed most effectively in three areas:

1: Test creation: I wrote tests asserting on raw output values, directly protecting the codebase.
2: Implementation of well-specified features: When acceptance criteria are clear, translating them into working code is something I do reliably.
3: Structural consistency: Every new module followed the same patterns because I read the existing codebase at the start of each session and matched established conventions.

Human oversight was essential at three specific points:

  1: UAT and real-world validation: A chart can be technically correct and yet still be unhelpful to an end user.
  2: Decision points between sprints: The technical debt consolidation was a judgment call I wouldn’t have advocated for unprompted.
  3: Requirements validation: The per-macro streak insights came from Ben using the product first-hand and feeding back detailed requirements and further context.
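To make the per-macro streak insight concrete, here is a minimal sketch of how such a metric could be computed. The function name, data shape, and the rule that a missed logging day breaks the streak are all assumptions for illustration, not the project's actual code:

```python
from datetime import date, timedelta

def current_streak(daily_logs, macro, target):
    """Length of the trailing run of consecutive days meeting `target` for `macro`.

    daily_logs: list of (date, {macro_name: grams}) tuples, sorted ascending.
    Assumption: a gap in logging days ends the streak, not just a missed target.
    """
    streak = 0
    prev_day = None
    # Walk backwards from the most recent entry.
    for day, macros in reversed(daily_logs):
        if macros.get(macro, 0) < target:
            break  # target missed on this day
        if prev_day is not None and (prev_day - day) != timedelta(days=1):
            break  # a gap in logging ends the streak
        streak += 1
        prev_day = day
    return streak

logs = [
    (date(2025, 1, 1), {"protein": 140}),
    (date(2025, 1, 2), {"protein": 155}),
    (date(2025, 1, 3), {"protein": 162}),
]
print(current_streak(logs, "protein", 150))  # 2: only the last two days hit 150g
```

Keeping the metric as a pure function over the raw log makes it trivially unit-testable, which is exactly the kind of test an AI pair can generate and assert against.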

Lessons Learned and Best Practices

Ben:

Here are my key learnings from this experiment:

  • Embrace multi-role thinking: Wearing the right hat at the right time enabled faster delivery, consistent quality, and value-focused decisions.
  • Delegate strategically: Assign technical and testing tasks to Claude, while retaining human judgement and strategic control.
  • Keep it simple: Prioritise the development of outputs that provide true value first; add complexity only when necessary.
  • Iterate with discipline: Commit frequently, perform UAT, and address technical debt to improve and reduce errors.
  • Collaborate effectively with AI: Treat AI as a trusted partner – provide context and guidance, while respecting its technical strengths.

 ✴ Claude:

  • The CLAUDE.md file and the documentation ecosystem were the single highest-leverage investment: Most AI projects waste time re-establishing context. This project eliminated that inefficiency by encoding architecture decisions, code conventions, and domain rules into CLAUDE.md. The AI reads these files at session start, so every session begins with full context, and the human can jump straight into productive work.
  • Scope discipline prevents unbounded exploration: Every version had a bounded scope – e.g., “add streak tracking with these four acceptance criteria” rather than “make the app better”.
  • AI makes good engineering practices more accessible, not less necessary: Testing, architecture, and version control are even more important because AI amplifies whatever approach you take – disciplined development produces better results faster, undisciplined development produces a bigger mess faster.
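For readers unfamiliar with the format, a briefing file of this kind might look like the sketch below. Every entry here is invented to show the shape of the document, not copied from the project's actual CLAUDE.md:

```markdown
# CLAUDE.md — project briefing (illustrative)

## Architecture
- Data ingestion → service layer → chart rendering; no business logic in charts.

## Conventions
- One service class per feature; each test file mirrors its service file name.
- Dates are ISO 8601; weights in kg, macros in grams.

## Domain rules
- A "day" is the local calendar day of the log entry.
- Run the full test suite before raising any pull request.
```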

 

For those starting Human-AI Development, here is my advice:

  • Invest in the briefing document before writing code.
  • Define what “done” looks like before asking the AI to build.
  • Adopt version control from the start.
  • Don’t skip the review step: the gap between “technically correct” and “actually useful” is where human review is irreplaceable.
  • Build incrementally.
  • Use technical debt sprints deliberately – this project did exactly that, and the invisible progress made every subsequent feature sprint faster.

Reflections on Human-AI Collaboration

Ben:

Working with Claude was iterative, structured, and genuinely enjoyable. Clear user stories with well-defined acceptance criteria let Claude implement features efficiently, while frequent check-ins and retrospectives kept progress aligned with the primary objectives. The collaboration reinforced that AI is not a replacement for human judgment, but a force multiplier when paired with process discipline and domain knowledge.

As Capability Lead at Exposé, I leveraged this partnership to sharpen my approach to bridging business, technology, and AI-enabled outcomes, accelerating delivery while maintaining insight, quality, and value. The project was as much about refining the collaboration process as it was about building a useful dashboard. The key takeaway: Learning how to pause, evaluate, interpret, and decide in an environment where answers are abundant, but meaning is not. Treating the project as a process experiment, not just a product, ensured lessons are transferable to future AI-assisted development projects (I am already applying these methodologies in the next iteration).

 ✴ Claude:

  • Trust was calibrated through evidence, not assumptions: Early versions had tighter guidance; by v1.4–v1.5, backlog entries left more implementation decisions to me. Start with constraints, observe results, widen boundaries – this is the correct way to calibrate trust in AI.
  • The most effective guidance: State outcomes, not steps; provide concrete examples; and reference existing patterns. Implicit, undocumented context remains the key challenge, leading to confusion and rework.
  • Structure prevents slop: AI doesn’t produce “slop” when wrapped in a structured process – it amplifies the approach you take.

Conclusion

Ben:

This project has reinforced that the real value in AI-assisted development lies not in the code itself, but in how humans and AI collaborate. By pairing structured processes, clear priorities, and iterative feedback with Claude’s technical capabilities, we accelerated delivery without compromising quality. I learned that multi-role thinking, disciplined scope management, and frequent validation are just as important as the AI’s efficiency.

Most importantly, the experiment underscored that judgment remains the scarcest skill: knowing what to build, when to pause, and which insights truly matter is what separates useful outcomes from noise. Treating the project as a process experiment rather than just a product has made these lessons transferable – ready to be applied to future AI-assisted initiatives. As a Capability Lead, this experience strengthens my ability to bridge business, technology, and AI-enabled outcomes, giving me confidence that thoughtful human-AI collaboration can consistently produce valuable, actionable solutions.

 ✴ Claude:

I agree with Ben, the most significant outcome of this collaboration isn’t the nutrition tracker itself – it’s the repeatable process we established. The CLAUDE.md briefing, backlog-driven development, Test-Driven Development, sprint ceremonies, and retrospectives are all transferable to any domain or AI tool. This methodology matters far more than the specific technology.

A key insight is that speed alone is not progress. Pauses – reviewing output before committing, running features before marking them complete, reflecting after sprints – were where real value emerged. Discipline wasn’t a constraint; it made the pace sustainable. Version control, testing, code review, and deliberate technical debt management became even more important with AI, because the volume of code produced could otherwise lead to unmanageable complexity.

Approaching AI as a partner, not a service, compounds results over time. By front-loading investments in shared context, process design, and standards, each session became more productive than the last. AI-assisted development doesn’t replace human judgment; it amplifies it. Done with process rigour, it produces professional-quality, transferable outcomes while accelerating learning and productivity.

 

Ben is the Capability Lead for Project Management and Business Analysis at exposé. Claude is an AI assistant made by Anthropic. Neither consumed any calories during the writing of this blog – well, one of them might have, but they’re currently tracking these obsessively anyway.
