Nicolas Holzapfel — Portfolio

AI-first design

Single-handedly designing & developing a web app with AI and code-literacy


Context

Trains to Green is a simple web app developed in my spare time to solve a niche problem for London hikers and to challenge myself to adopt new ways of working enabled by the latest LLMs.

UX goals

London is surrounded by beautiful countryside — including 2 national parks and 7 national landscapes. The dense regional rail network facilitates easy day hikes even for the nearly half of Londoners without a car. Unfortunately, once the top-10 web articles have been exhausted, there's simply no easy way to discover hike-worthy rail stations and compare them. This is the UX problem which Trains to Green solves.

Learning goals

In addition to making a useful app, I wanted to immerse myself in the emerging design workflows that AI makes possible:

  • How much design control/fidelity does an AI-first approach give me?
  • Can I avoid the cookie-cutter “LLM look” and produce an opinionated, distinctive UI?
  • Can I maintain a consistent design system in the coding environment?
  • etc

Tech stack

React
Tailwind
shadcn/ui
Vercel
Next.js
Google Maps API
Flickr API
Mapbox

The React + Tailwind + shadcn/ui combo is the default tech stack used in nearly all AI tools. AI works with it much more effectively than other stacks. In addition, and not coincidentally, the shadcn/ui component library is completely customisable, so I could go as far as I liked in giving it a distinctive brand.

I chose Vercel and Next.js simply for their strong reputations, and because the combo is known to require only minimal configuration. The Google Maps Directions API was used to calculate travel times from central London, the Flickr API to pull the photographs, and Mapbox (unsurprisingly…) to supply the map.
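As a rough illustration of the travel-time lookup, here's a hedged sketch (not the app's actual code — the helper name and parameter choices are mine) of how a transit request to the Directions API can be built:

```typescript
// Illustrative sketch only: builds a Google Maps Directions API request
// for a transit journey from a central London origin to a destination
// station. "buildDirectionsUrl" and the origin string are assumptions,
// not the production code.
function buildDirectionsUrl(apiKey: string, destinationStation: string): string {
  const params = new URLSearchParams({
    origin: "Farringdon Station, London", // proxy for "central London"
    destination: destinationStation,
    mode: "transit",
    transit_mode: "rail", // prefer rail routes over bus
    key: apiKey,
  });
  return `https://maps.googleapis.com/maps/api/directions/json?${params}`;
}

// The travel time would then be read from the JSON response, e.g.:
// const data = await (await fetch(buildDirectionsUrl(key, "Dorking Station"))).json();
// const seconds = data.routes[0]?.legs[0]?.duration.value;
```

Each call costs money, which is why (as noted later) per-user origin points are currently off the table.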

Tools

VS Code
Claude Code
Figma
Midjourney
Recraft
Hugeicons
Mapbox Studio

I set up the project manually via the terminal, opened up VS Code and then used a combination of hand-coding and the Claude Code VS Code plugin to build. Later on I switched to mainly using the Claude Code desktop app and only hand-coding occasional Tailwind CSS and copy changes.

I used Midjourney to generate the flagship artwork (see the top of this page) and Recraft to create the logo (since Midjourney can't create vectors). I used Hugeicons for icons and Mapbox Studio to customise and recolour the map.

UX thinking

Utility

The app addresses the central user need (“where should I go for a day hike by train”) by allowing users to compare rail stations by travel-time and the desirability of nearby hikes — via ratings and Flickr photos.

The app also makes it easier to plan station-to-station walks, through the “easy hike” and “epic hike” radii that appear around the stations on hover.

Escapism

My intention is that using the app should be a pleasurable experience where users can imagine themselves enjoying the peace, beauty and novelty of the countryside. I didn't want it to feel like a purely functional utility. To this end, the UI is defined by friendly rounded edges and slightly exaggerated padding and sizes, to give a comfortable, accessible feeling. I applied a palette of bright nature colours (green, aquamarine and cream) across both the UI and the map.

Beyond colour, my main customisation of the map was to radically declutter it — I've removed all roads and other labelling, both to allow the user to concentrate on the purpose of the app, and to create a peaceful and attractive view of the countryside. The visual contrast between urban and rural areas has been heightened to help users gauge how urban a hiking area is.

The photos at each station are a key part of inspiring users. They're pulled from Flickr via a location- and tag-based filtering script, but I also added an admin mode so I could manually curate the photos at highlighted stations.
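To give a flavour of the kind of filtering involved, here's a hedged sketch (the helper name, radius and tags are illustrative — this isn't the app's actual script) of a Flickr photos.search request scoped to a station's coordinates:

```typescript
// Illustrative sketch only: builds a flickr.photos.search request
// filtered by location and tags. The tag list, radius and sort order
// are assumptions, not the app's real curation logic.
type StationQuery = {
  lat: number;
  lon: number;
  tags: string[]; // e.g. ["hiking", "countryside"]
};

function buildFlickrSearchUrl(apiKey: string, q: StationQuery): string {
  const params = new URLSearchParams({
    method: "flickr.photos.search",
    api_key: apiKey,
    lat: String(q.lat),
    lon: String(q.lon),
    radius: "2", // km around the station (illustrative)
    tags: q.tags.join(","),
    tag_mode: "any",
    sort: "interestingness-desc",
    format: "json",
    nojsoncallback: "1",
  });
  return `https://api.flickr.com/services/rest/?${params}`;
}

// Usage (requires a real API key and network access):
// const res = await fetch(buildFlickrSearchUrl(key, { lat: 51.2362, lon: -0.3304, tags: ["hiking"] }));
```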

Finally, I added a number of subtle micro-animations — e.g. the train on the slider animates-in on app-load, the checkbox icon gets drawn, and the station-icons grow and shrink. Nothing abruptly appears or disappears — the intention is that everything should feel organic and smooth.

Mobile

Since this is meant to be a simple and relaxing app, I wanted it to work well on mobile devices. Examples of optimisations for this medium include:

  • Greater spacing and font-size on mobile, to accommodate the imprecision of tapping rather than clicking.
  • The stations' photos are full-bleed, so that photos can make full use of the limited available space.
  • On first load, the map is positioned so that most stations remain visible despite the filter taking up most of the screen, and only top-rated stations are shown, to avoid cluttering the limited space.

UX testing & iteration

I informally tested the first version of the app on friends and family, identifying the following issues:

  • It wasn't clear that the app is specifically about train stations, not simply hiking spots. To fix this I created an introductory overlay and repeated the word “station” throughout the app.
  • I initially had a station-search feature, but realised from observation that this just created confusion about how the app is to be used, and was an unnecessary distraction from the key use case. I restricted the search function to admin mode, where it remained useful for my own testing.

Future UX improvements

Aside from various micro-improvements to the look and feel, my priorities for improvement are:

  • A choice of specific origin points for calculating travel-time as opposed to just using a vague “central London” origin point (in fact Farringdon Station, which is roughly equidistant from all the London terminals). This improvement is constrained by the expense of calls on the Google Maps Directions API.
  • More filters and info to help with choosing destination stations, e.g. direct-train-only filters, nearby points-of-interest like stately homes and nature reserves, restaurants, terrain filters, and terrain descriptions.

Designcraft learnings

A key goal for this project was to explore AI-enabled design workflows in light of February's breakthrough in model capabilities. Here's a Q&A with myself to work through my reflections:

Can I skip Figma and design directly in the coding environment?

Pretty much! It was so easy and quick to make changes using Claude that I didn't feel the need. I popped into Figma only to compose the brand palette, make tweaks to assets, and, once, to get a clear overview of the different checkbox states side-by-side (I used the html.to.design plugin to pull the UI into Figma for this — surprisingly it did a better job of this than the Figma MCP server).

Of course Trains to Green is a UI-lite app and I was building on shadcn/ui component templates (dialogs, buttons, input fields, tooltips). With a more complex app I may have had a very different experience.

How much design control/fidelity does an AI-first approach give me?

With Claude, I never felt limited by misunderstandings and imprecision (utterly different from my experience last year with vibe-coding tools like Lovable and Replit). I had more design fidelity than in a traditional workflow since I could go directly to implementation.

Can I avoid the cookie-cutter “LLM look” and produce an opinionated, distinctive UI?

Yep, there's no excuse! AI will produce generic shadcn/ui components with only light customisation, but you can push the customisation as far as you like.

Can I maintain a consistent design system in the coding environment?

Yes! With the React + Tailwind v4 + shadcn/ui combo there's a single globals.css file that contains all the colour variables, and where you can override Tailwind defaults. Once you understand that and how it relates to UI components by way of Tailwind utility classes, you understand everything you need to know to control and maintain the fundamentals of the design system.
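As a rough sketch of what this looks like in practice (the token names and values here are invented, not the app's actual file), a Tailwind v4 globals.css centralises the palette in a single @theme block, from which Tailwind generates the corresponding utility classes:

```css
/* Illustrative sketch only — variable names and values are invented,
   not the app's actual tokens. Tailwind v4 reads the @theme block and
   generates utilities such as bg-brand, text-cream and rounded-card. */
@import "tailwindcss";

@theme {
  --color-brand: #2e7d4f;  /* green */
  --color-aqua: #58c9b9;   /* aquamarine */
  --color-cream: #fdf6e3;  /* cream background */
  --radius-card: 1.25rem;  /* friendly rounded corners */
}
```

Because every component references these semantic tokens via utility classes, changing a handful of variables restyles the whole app consistently.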

Can I add nice-to-have micro-animations?

Claude was surprisingly good at implementing these! Nothing like this comes out-of-the-box in shadcn/ui so I had to specifically work with Claude on each one, but it almost always came through. This was enormously gratifying for me since, working on B2B SaaS with limited resources and tight schedules, the “delight”-inducing UI finesse is always the first sacrifice.

How much code-literacy is required? How much manual-coding is required?

Much less than I imagined! It was helpful, but not necessary, that I understood HTML, CSS, Tailwind and the basics of React. Having a clear mental model of how, for example, Flexbox layouts work meant I could easily see where Claude had made mistakes and how they could be corrected (though these are in any case very similar to Figma mental models). Sometimes I would jump into the code and edit Tailwind utilities, React components and copy directly, because it was faster just to make the changes than to explain them to Claude and wait for a response (especially when rapidly trying out different layout tweaks), but this wasn't essential.

Understanding the basics of React components, and being able to see how they are organised in the codebase, is also helpful for instructing Claude to make changes that are systematic and modular rather than messy one-off hacks — but this requires only minimal code-literacy. Anyone who has designed a CMS will already be familiar with this way of thinking.

Complex backend work and API interactions were handled entirely by Claude (obviously I still needed to actually generate the API tokens).

The one big exception is the design system. Without my being able to understand and control how the design tokens and semantic variables are implemented in the code, I think Claude would have created an inconsistent mess of hard-coded colour values. Understanding and manually editing the globals.css file meant I was able to keep control of that and rapidly overhaul the look and feel in a consistent way.

Can I just sit back and let the AI do everything?

Not yet! It was time-consuming to make this app. Left to itself Claude would constantly claim that features were built when they clearly weren't, and make terrible design decisions. In terms of UX (as opposed to code implementation), it needed to be told exactly what to do all the time. Presumably it would have done much better with an app that simply followed well-established patterns (e.g. landing pages, eCommerce, settings, bookings).


Conclusion

It was enormously liberating to be able to implement the design exactly as I wanted, without having to chase up developers about design details. It was also just extremely fun and addictive — I experienced an extreme case of the “just one more turn” video-game effect. I'm eager to find the opportunity to push this approach further, in the context of a wider team and a much more ambitious app.