When we started prototyping an AI assistant inside SMOVECITY in late 2024, the obvious first version was a route planner. Tell it where you want to go, it figures out the best multi-modal route. We built that. We shipped it internally. It was a half-product.
This is the story of why we threw out a route-only AI and built one that plans the whole trip instead.
The route-only version was technically impressive
It worked. You could say "how do I get from here to the Eiffel Tower" and it would propose three multi-modal routes ranked by time, price and carbon. The reasoning was genuinely good — it knew about live tram data, the bike-share dock fill rates, the walking distance to the nearest scooter. It even handled disruptions ("the metro is closed — take the 51 tram three stops east, then a bike").
And nobody who tested it kept using it. After a week, no one internally opened the AI tab unless we asked them to.
The conversation we kept having
Every time we'd watch someone use the prototype, the same pattern emerged. They'd ask the AI to plan a route. The AI would plan it. Then they'd close the AI tab, open the trip-planning surface, type the destination in there too, and book the flight to actually get to the city the route was inside of.
The AI was answering the wrong question. It was answering "how do I get from A to B in this city" when the real question was "I'm thinking about going somewhere — figure out the whole thing."
A route is a fragment. A trip is a unit.
The trip is the unit
So we widened the surface. The v2 AI doesn't reason about the next ride. It reasons about the trip — which can be a one-way commute, or a weekend away, or a two-week holiday with five cities on it. Same conversation, same response shape, same booking surface underneath.
The result-card pattern (flights → hotels → things to do → restaurants) maps directly onto a trip. When the AI returns a 4-card response, that response is a saveable trip. One tap saves the whole stack to your trips tab. No re-entry.
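A minimal sketch of that shape, in TypeScript. The type and function names here are hypothetical — the real SMOVECITY types aren't shown in this post — but they illustrate the point: the 4-card response and the saved trip are the same structure, so saving is a copy, not a re-entry flow.

```typescript
// Hypothetical shapes illustrating the card-stack-as-trip idea.
type CardKind = "flight" | "hotel" | "activity" | "restaurant";

interface ResultCard {
  kind: CardKind;
  title: string;
  bookingRef?: string; // present once the card is actually booked
}

interface Trip {
  id: string;
  destination: string;
  cards: ResultCard[];
}

// One tap: the whole AI response becomes a saved trip, unchanged.
function saveResponseAsTrip(
  destination: string,
  cards: ResultCard[],
  makeId: () => string = () => Math.random().toString(36).slice(2)
): Trip {
  return { id: makeId(), destination, cards: [...cards] };
}
```

The design choice worth noticing is that there is no transformation step between "AI answer" and "saved trip" — which is exactly why one tap is enough.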
What this changed about the model
Three things, technically:
- The system prompt teaches the model to recognise trip intent vs. ride intent. A bare destination is a ride. A destination plus any rough time window ("next weekend", "tomorrow", "13 May") is a trip.
- Tools were re-scoped. `find_flights` and `find_hotels` live next to `plan_route` and `find_vehicle`. The model picks the right combination based on the intent it inferred.
- The conversation length budget was raised. A route-only AI can resolve in 1–2 turns. A trip-planning AI usually needs 3–5 turns of back-and-forth (dates, budget, preferences). The model's working memory had to grow with the surface.
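The first two points above can be sketched as code. In production this rule lives in the system prompt and the model does the inference, so the regex and function names below are illustrative assumptions, not the shipped implementation — but they capture the rule: a bare destination is ride intent, and any rough time window upgrades it to trip intent, which in turn widens the tool set.

```typescript
// Illustrative sketch only: in the real product this classification is
// done by the model from the system prompt, not by application code.
type Intent = "ride" | "trip";

// A rough time window ("next weekend", "tomorrow", "13 May") is the
// signal that upgrades a bare destination to trip intent.
const TIME_WINDOW =
  /\b(today|tonight|tomorrow|next (week|weekend|month)|\d{1,2} (jan|feb|mar|apr|may|jun|jul|aug|sep|oct|nov|dec)[a-z]*)\b/i;

function classifyIntent(message: string): Intent {
  return TIME_WINDOW.test(message) ? "trip" : "ride";
}

// Tool scoping: ride intent keeps the route tools; trip intent adds the
// booking tools alongside them rather than replacing them.
function toolsFor(intent: Intent): string[] {
  const rideTools = ["plan_route", "find_vehicle"];
  return intent === "trip"
    ? [...rideTools, "find_flights", "find_hotels"]
    : rideTools;
}
```

A real deployment would lean on the model rather than a regex precisely because "rough time window" is fuzzy — but the contract is the same: intent first, then tools.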
The accidental side effect
When you reason about the whole trip, you also surface the parts of the application most people would have missed. People don't open the events feed unless they have a reason to. But when the AI plans a Lisbon weekend, it pulls "what's on tonight" into the response card alongside the hotel. Engagement with the events surface tripled in the first month after we shipped this.
I don't think AI is a feature. It's a way to make the rest of the application reachable from one place.