Migrating Your Design System to Jetpack Compose Part 4: Stakeholder Retro

Featured in jetc.dev.

What, pray tell, has caused me, nearly two years after I said I would, to finish this blog series? Could the single day of sunlight we had yesterday have triggered something in my Canadian winter-addled brain? Perhaps.

A second theory: I’ve done a lot of interviewing recently, and I’ve had to talk at length about past projects - what went well, what went not so well - and about some of my strengths and failings as a technical lead. And I spoke about this project a lot.

So with this topic fresh in my mind, I’m going to finish this series, dammit, and reflect a bit on the huge risk my team took a couple of years back. For those who haven’t read the other pieces: that’s okay, this one is fairly stand-alone anyway.

Originally I said that I would cover accessibility, resources and a few other bits. Frankly, I am not the best person to cover these topics these days. I apologise if you were waiting for this. tl;dr with resources - nothing is different, but you know that now. At the time, it wasn’t super clear to a lot of people.

Communication

One thing I’ve learnt the hard way over the years is that communication is the most important skill you can possibly leverage as an engineer. And as we embarked on this project, I’d had criticism - plenty of times! - that I wasn’t keeping stakeholders up to date.

For this project, we would be converting a lot of an old thing (Views) into a new thing (Compose), and if we did it correctly, everybody would be none the wiser. And that presents a problem: stakeholders wouldn’t see any visible progress at all, so there was no easy way for them to see just how much value we were adding to the codebase by taking this big technical risk. And for what it’s worth, I think we added a tonne of value. But how were the people above us supposed to know?

The remedy was twofold: collect metrics and present every win.

Measure Everything

I can’t recommend logging internal, developer-facing metrics enough. There are three main benefits to this.

Firstly, it’s a great motivator for your team. Refactors are a slog, and after those initial wins it can feel like you’re walking through molasses trying to eke out and untangle every legacy part of your codebase. Being able to point to a graph and say “look folks, we’re halfway there!” was a huge motivator, and really helped push some of the team to finish parts that would otherwise drag on for months.

Secondly, it helped us know for sure when we could shutter certain things. For instance - knowing that we had completely removed $component_x from the codebase meant that we could start the next migration, or remove some tooling that was no longer needed, or whatever. Having a clear dashboard where we could see that a phase was complete took the guesswork out, and took the pressure off individuals who might otherwise be checking the status manually out of personal curiosity. If the entire team could see that phase n was complete, the entire team knew they could crack on with phase n + 1.

Lastly, and most importantly in the context of this article: it gave us wins to highlight. In those weeks when it didn’t look like we’d achieved anything, it let us look at how far we’d come and which needles we’d actually moved in the last month, and that was a really powerful tool for keeping stakeholders informed.

So before you do any refactor, think about why you’re doing it - presumably there’s some line you’re wanting to move, otherwise you’re not really adding value. And measure it. Make sure that the work you’re doing is having the effect you hoped for. Be clear with the powers that be that the work you’re doing is going to move this line, and proudly show off when it does.

Present The Wins

With those wins in mind, we promised to deliver a presentation, every Friday, to the entire company. Starting out, we talked at length about why we had decided to move to Compose, and the wins we expected from it. We explained in broad terms why we thought that investing time upfront would pay off by speeding up shipping and reducing bugs - a few slides explaining how a RecyclerView works vs a LazyColumn made the win extremely obvious, even for those who don’t understand code at all.
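For the curious, the comparison boiled down to something like the sketch below. This isn’t our actual code - LabelAdapter and LabelList are invented names over a plain list of strings - but it captures the gap between the two worlds:

import android.view.ViewGroup
import android.widget.TextView
import androidx.compose.foundation.lazy.LazyColumn
import androidx.compose.foundation.lazy.items
import androidx.compose.material.Text
import androidx.compose.runtime.Composable
import androidx.recyclerview.widget.RecyclerView

// The View world: an Adapter and a ViewHolder (plus the layout XML and
// RecyclerView wiring not shown here), just to render a list of labels.
class LabelAdapter(private val items: List<String>) :
    RecyclerView.Adapter<LabelAdapter.Holder>() {

    class Holder(val view: TextView) : RecyclerView.ViewHolder(view)

    override fun onCreateViewHolder(parent: ViewGroup, viewType: Int) =
        Holder(TextView(parent.context))

    override fun onBindViewHolder(holder: Holder, position: Int) {
        holder.view.text = items[position]
    }

    override fun getItemCount() = items.size
}

// The Compose world: the same list, recycling included, in one function.
@Composable
fun LabelList(items: List<String>) {
    LazyColumn {
        items(items) { label ->
            Text(text = label)
        }
    }
}

Even an audience that can’t read code can see which of those they’d rather pay for.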

After that, every week, we showed off each new widget we had converted from a View to a Composable. Every time we built a new screen (we were in the middle of a large feature), we showed that off too, and talked about how Compose had helped us build it quickly. I remember one such feature where we needed to outline each View that had an incomplete field - in Compose, this was a simple Modifier, even on top of Views, as sketched below. This blew my mind; conveying it to the company helped win skeptics over.
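As a rough reconstruction - outlineIfIncomplete is an invented name, not our real API - the whole trick amounted to a one-liner:

import androidx.compose.foundation.border
import androidx.compose.ui.Modifier
import androidx.compose.ui.graphics.Color
import androidx.compose.ui.unit.dp

// Draw a red outline around any composable whose field is incomplete.
fun Modifier.outlineIfIncomplete(isComplete: Boolean): Modifier =
    if (isComplete) this else border(2.dp, Color.Red)

And because AndroidView accepts a Modifier, the same one-liner worked on screens still hosting legacy Views - that’s the “even on top of Views” part.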

Regularly hammering home how quickly we’d completed some feature, and comparing the LoC between the legacy implementation and the new one, was a fun exercise, and it helped the audience internalise why we were doing this.

Some weeks, of course, we wouldn’t have anything visible to show for our work. This was where we utilised our metrics. “Look, line go up/down!” was a surprisingly powerful way to show progress that would otherwise be hidden, and once we started doing this we never got awkward questions about what we’d achieved that week.

Of course, you generally need to link the metric to something understandable. The number of modules doesn’t mean much on its own, but if you reiterate that it’s a broad measure of code organisation, that it’s inversely related to build times, and that time is money, people will care.

What Did We Measure?

  • The number of pages using Compose vs legacy
  • The number of View components
  • The number of public Composables in our :design modules
  • The number of legacy XML styles

Not strictly related to Compose but relevant to other refactors, we also measured:

  • Crash rates
  • The LoC in our monolithic :app module
  • The number of modules
  • The number of unit tests
  • The percentage of Kotlin vs Java
  • Build times, though my attempts at measuring these were inconsistent

Pretty much all of these - besides Kotlin vs Java, which used cloc - were measured with pretty rough bash scripts utilising grep or pcregrep. As a rough example, here’s how we counted instances of setContent, which was a good analogue for the number of pages using Compose:

# find all of the .kt files not in build or design folders
find . -not -path "*/design/*" -not -path "*/build/*" -type f -iname '*.kt' |
# match each setContent call site (-M lets the brace sit on the next line)
xargs pcregrep -M 'setContent\s*\{' |
# count the matching lines
wc -l

Whilst our CI was elsewhere, GitHub Actions was perfect for executing these scripts on each merge to main, and I wrote a bunch of small workflows that just ran them, dumping the resulting numbers into a Google Sheet. If you have a Data Science team, I highly recommend you partner with them and get them to build you a dashboard that anyone can check. For our needs, Sheets was fine.

Communication Round-up

Look, the point I’m getting at here is that publicly presenting your wins is an easy way to get everyone on board with a refactor, and this obviously applies to topics outside of Compose. I’m sure everyone in the company was tired of me discussing Jetpack Compose every week, but they got it, and they understood why we took the risk on it. And I had a blast talking about something I was passionate about - and my fantastic team - every week.

If you can do this - present your team’s wins every week, and explain in clear terms what this means for the business - you’ll do wonders for your career and theirs.

Reflecting

So overall, how did this project go?

It was perhaps the single most rewarding refactoring project I’ve ever embarked on. We had endless fun discussions about the best way to approach problems - at this time we had no other work to reference! - and the team stayed motivated to keep up the momentum the whole way through.

It was a big risk though. Without the support that we had, from the community and from Google themselves, it likely would have been too risky at the time. We didn’t know that we would get the support we did, but I had an inkling, and running small experiments and scaling them once we had confidence was the correct way to mitigate the risk.

By the time I left my role, we’d been running Compose in production for 9 months or so, and we’d had zero UI-related crashes. I scarcely believed it. We also ended up catching up to our iOS counterparts when we’d at one point been nearly a year behind. And I attribute some of that to savvy requirement massaging, but a lot of it to Compose.

Conclusion

Thanks for reading, and for making the other 3 parts of this series what they were. It’s been immensely gratifying to know that a lot of people got value out of this writing, and I’m grateful to everyone who referenced them.
