Define, design & build features

Build

We've reached the point where a chunk of work is ready to build. Before we dive in, it's important to understand our view on tech debt and quality.

Considerations for tech debt

To most, tech debt is limited to the result of prioritising speedy delivery over perfect code. To us, it's anything engineering-related that affects the team's ability to deliver customer value.

This broader definition of tech debt shapes how we deal with it. We also have a specific approach to fixing it.

If tech debt exists in the code we are actively developing, it will be fixed as part of development. Thus, at Joyous, fixing tech debt actually becomes part of feature development.

There are many cases that we don't consider tech debt in this context. Here are two examples. The first is legacy code that meets near and medium term scale expectations. The second is code that follows old patterns, particularly if there are no extensions on the near term roadmap for that part of the code.

That said, if such code sits near code we are about to extend, it may be tech debt. Why? Because it could pollute future patterns and standards.

The examples below help clarify what tech debt means to us.

  • Code that is hard to build new features on top of.
  • Code that won't scale to the features or user growth we are expecting in the next one to two years.
  • Processes that slow down progress. For example, manual deployments, and a slow development environment.
  • Lack of test coverage causing bugs to go unnoticed while developing.
  • Complexity that we can mitigate in engineering. Including architecture or coding practices.
  • Anything that extends the onboarding period of new engineers.
  • Anything that increases the cost of future feature development.
  • Engineers with cargo cult mentalities tied to technological ideology over results.

We have a few conditions that we aim to meet for new features:

  1. All features that we build must be scalable and extendable for the work coming up in future.
  2. No feature we build should increase our time to develop the next features.

Much like a bank loan, there may be times when taking on some debt makes sense at the time. That's okay, as long as you are comfortable with the cost.

Our approach to quality

We share ownership of quality across product and engineering. We don't have a team of QA analysts; instead we have just one: a senior analyst who helps champion quality across our entire team.

Different stakeholders lean towards some types of quality more than others. But we all view quality as a team effort, and we care about building a high quality product. We are happy to support each other in achieving this result.

The various types of quality that we focus on:

  1. Functionality. It does what we expect, as defined by the success criteria.
  2. User Interface. The built UI matches the designs created in Figma.
  3. Accessibility. It conforms to Level AA of the Web Content Accessibility Guidelines (WCAG) 2 (see the automated check sketched after this list).
  4. Usability. Users know it's there, know how to use it, and find it easy to use.
  5. Performance. It provides a good experience for users, regardless of device, browser or network speed.
  6. Code. It conforms to current patterns, and is simple and maintainable.
  7. Regression. It doesn't break existing functionality.
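
To make the accessibility bar concrete, here is a minimal sketch of an automated WCAG 2 Level AA check. It assumes Cypress with the cypress-axe plugin (which wraps axe-core), and the page route is a hypothetical stand-in; treat it as an illustration rather than our exact suite.

```ts
// cypress/e2e/accessibility.cy.ts
// Minimal sketch: scan a page against WCAG 2 A and AA rules.
// Assumes cypress-axe is installed and imported in the Cypress
// support file via `import 'cypress-axe'`. The '/conversations'
// route is a hypothetical stand-in.

describe('Accessibility', () => {
  it('meets WCAG 2 Level AA on the conversations page', () => {
    cy.visit('/conversations');
    cy.injectAxe(); // load axe-core into the page under test

    // Run only the WCAG 2 A and AA rule sets.
    cy.checkA11y(null, {
      runOnly: { type: 'tag', values: ['wcag2a', 'wcag2aa'] },
    });
  });
});
```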

The stages of building

Once a crew begins work on a chunk, they start a daily stand up. The product person and quality person also attend, and our stand ups are open to anyone. It is common for our Head of Engineering to attend all stand ups.

When we begin, the crew starts with the tickets labelled 'investigate'. As investigations complete, the crew and product person will get together again. They will decide how to proceed given the constraints, and what a good solution looks like. This is often done as part of stand ups.

The product person or crew will then create a series of new tickets to reflect the decision.

The front of the GitHub board includes two columns for Slices. The first column is a holding area for slices that have not started. When the first ticket for a slice is picked up, the related slice ticket shifts to Slices in progress.

First part of the GitHub board

Figure 26 - The other build columns on our boards (joymap8.png)

There are seven stages that tickets will pass through during build time. Each is represented as a column on our project board in GitHub.

  1. To be prioritised. Tickets sit in this holding area in a roughly prioritised order. The crew will talk about which tickets to start as they free up.
  2. Figuring it out. We like this stage from Shape Up. During this stage engineers have picked up the ticket, and are still figuring out the best way to go about it.
  3. Getting it done. We also like this stage from Shape Up. When engineers are getting it done, it means they have a clear path forward and are making good progress. The steps involved are usually logged as check-list task items on the ticket.

An engineer will conduct thorough manual tests and write coded tests before shifting the ticket to the next stage (a minimal sketch of such a test follows this list). In some cases it might make more sense to test later, in which case they create new tickets for those activities.

  4. Code review. Another engineer reviews the code before we deploy it to a dev environment. As part of the review they will ensure it follows current patterns. They will also check for simplicity and maintainability. Finally, they will do a sanity test to ensure the work functions as intended. They pass on any feedback to the coding engineer to address.
  5. Ready for testing. Once a code review is complete, a ticket will shift to ready for testing. This is a holding area that signals to our QA analyst that they can test the related work.
  6. Testing. Our QA analyst will test the work against our dev environment. If an issue comes up during testing they add comments to the ticket. The ticket is then shifted back to Getting it done.
  7. Ready to deploy. Once a ticket has completed testing it shifts to ready to deploy.
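
As an illustration of the coded tests mentioned under Getting it done, here is a minimal unit test sketch. The helper `resolveThemeColour` and its behaviour are hypothetical; the point is the shape of a test an engineer writes before a ticket moves on.

```ts
// resolveThemeColour.test.ts
// Minimal sketch of a coded test. The helper under test is
// hypothetical: it returns a customer's brand colour, falling
// back to a default when the value is missing or invalid.
import { describe, it, expect } from '@jest/globals';

const DEFAULT_COLOUR = '#1a73e8'; // hypothetical default

function resolveThemeColour(brandColour?: string): string {
  const isValidHex = /^#[0-9a-fA-F]{6}$/.test(brandColour ?? '');
  return isValidHex ? (brandColour as string) : DEFAULT_COLOUR;
}

describe('resolveThemeColour', () => {
  it('uses the customer brand colour when it is a valid hex value', () => {
    expect(resolveThemeColour('#ff6600')).toBe('#ff6600');
  });

  it('falls back to the default for missing or invalid values', () => {
    expect(resolveThemeColour(undefined)).toBe(DEFAULT_COLOUR);
    expect(resolveThemeColour('orange')).toBe(DEFAULT_COLOUR);
  });
});
```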

Testing entire chunks

Of course, end to end testing still needs to occur, particularly for a large project such as V3.0. Here we use a combination of manual testing and automated front end testing using Cypress.
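
To show what that automated front end testing looks like in practice, here is a minimal Cypress sketch. The route, selectors and message text are hypothetical stand-ins, not taken from our real suite.

```ts
// cypress/e2e/send-message.cy.ts
// Minimal Cypress end to end sketch: drive a real browser
// through a user flow. Route and selectors are hypothetical.

describe('Sending a message', () => {
  it('lets a user send a message in a conversation', () => {
    cy.visit('/conversations/123'); // hypothetical conversation page

    // Type a message and send it.
    cy.get('[data-testid="message-input"]').type('Thanks for the feedback!');
    cy.get('[data-testid="send-button"]').click();

    // The new message should appear in the thread.
    cy.contains('[data-testid="message-list"]', 'Thanks for the feedback!')
      .should('be.visible');
  });
});
```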

For entire chunks we invite our whole organisation to take part. One of our product folks and our QA analyst will work together to co-ordinate a series of tests.

We then use a channel on Slack where everyone can report and comment on issues or concerns. Our QA analyst will reproduce and log valid issues as new tickets. We add a label matching the type of issue to help with future learnings.

Benefits of involving everyone in testing include:

  • Organisation wide awareness of the coming changes.
  • Improved outcomes thanks to broad feedback.
  • Higher quality as a result of more testers.

Don't be afraid to reset

At any point in the build stages we can, and do, revisit previous stages. While there has been a lot of thinking up to this point, nothing is final.

"When the rubber meets the road and we hit a pot hole we drive to the conditions."

If we see a potential improvement we will investigate it. We may change the design, or the pre-discussed solution. The change is agreed between the relevant product and engineering folks. Then we feed the result back into build.
