Tag: lessons learned

  • Retrospectives Reloaded

    I’ve read and heard a lot of advice on running better retrospectives. I’d even go so far as to say that if you speak at an agile event and want an instant hit, “how to run a good retro” should be very high on your list of potential topics.

    After all, this whole “getting better” thing seems really important to everyone and improvement is almost always a struggle for teams. A retrospective is a nice package; it’s pretty self-explanatory, it has a good name and it’s open-ended enough that you can adjust it to your needs.

    It’s not a new concept though. I was doing post mortems back when agile was just a word in the dictionary. Same goal, different package. Well, maybe not as sexy, and that’s exactly why you’d rather jump on the retro bandwagon to win the crowds.

    Anyway, the problem with the vast majority of stuff I’ve heard or seen about running better retrospectives is simple: it’s all recipes. You can apply them and they either work or they don’t. Even if you’re lucky and they happen to work once, they quickly wear out. After all, how many times can we laugh at the same old gag?

    Then we’re back to square one. How do we revive the next retro one more time?

    It’s because we get the wrong advice on retrospectives over and over again. It shouldn’t be “try this” or “try that.” The real magic happens when the thing you do during a retro is fresh and everyone gets involved.

    Both things are often surprisingly easy. You can count on people’s engagement as long as you don’t push them too far out of their comfort zones; singing in front of the team probably isn’t the best choice you could make, but all sorts of drawing are usually safe.

    And being fresh? Just think about all the funny ideas you discuss by the water cooler. Acting out a scene, recording your own version of a movie classic, a concert, a cooking event, or cleaning a randomly chosen car in the parking lot are all such concepts. People were enthusiastic about signing up for these things. Wouldn’t they be just as enthusiastic if it were part of a retro?

    However, if you’re going to use a cooking event as an awesome idea for your next retro, you’ve understood nothing. Go read the post again from the beginning. I don’t want to give you any recipe. I’m simply pointing out the fact that every goddamn team on planet Earth has a ton of fresh ideas for their next retro. All it takes is retrieving a single one of them.

    In fact, it’s even better. If you use one of those crazy ideas your team discussed over lunch a couple of days ago, odds are they will eagerly sign up for it.

    This is why I’m not going to be stunned by the next “better retros” session. Because, you know, I have such a session a few times a week. I just pay attention.

  • Internal Audit: Lessons Learned

    Last week we had an internal audit in our team. If you automatically think “ISO 9001” when you hear “internal audit,” let me just state that it had nothing to do with ISO. We have a few so-called quality requirements in the company and every team should follow them, but these aren’t anything like ISO procedures.

    The reason I was interested in how the audit would go is that we work pretty differently from the rest of the organization, so the quality requirements aren’t really adjusted to our specifics. I was also curious how our non-standard process would look in comparison to other teams.

    I caught a few things which are worth sharing.

    Small Features versus Big Features

    Our currency for functionality is the MMF (Minimal Marketable Feature). The term MMF itself probably doesn’t tell you much, and I know different teams use different sizes for MMFs. In our case an average MMF can be coded by a single developer in about a week, so they are pretty small.

    On the other hand, in other projects the typical currency for functionality is a requirement, which is significantly bigger and, as such, less detailed. I’d say that on average it is a few times bigger than an MMF.

    Where’s the difference? I mean, besides the size itself. One very visible difference is that we do significantly less paperwork. With features small enough that they can be easily analyzed and decomposed by the product owner, developer and tester, we don’t need to write low-level design documents, test cases or user stories. We can spend more time doing the real work instead of creating a pile of documents that tend to become outdated as soon as you hit the save button.

    Frequent Deployment versus Not-So-Frequent Deployment

    I guess there aren’t many teams left that still treat deployment as a one-time effort done at the end of a project. But frequent deployment is still a painful thing, especially when developers work in a different team than the QA engineers and the QA engineers work in a different team than the release managers. An average project isn’t deployed biweekly. Not even close.

    On the contrary, we deploy on average once every 9 days, and I mean calendar days here. Of course it didn’t look like that from the very beginning, and it cost us a bit to get there. I’d say it took us about half a year to reach this pace.

    Where’s the difference? I could tell you about time-to-market, but it really depends on the project whether you need time-to-market counted in days or months. The cool thing is somewhere else. With such a short cycle time we don’t face the temptation to incur technical debt. We don’t leave bugs unfixed. We just don’t. After all, we’d be discussing a single low-priority bug, not a few dozen of them, so the fixing cost is so low that we don’t even start the discussion in the first place. Even if we hit a dead end, redoing the whole darn feature doesn’t take us a couple of months – it’s just a few days, so it’s much easier to accept.

    Everyone on the Same Page versus Formalized Process

    We’re a small team and we’re co-located, so we have fairly few communication issues. The process we follow is very simple and, what’s even more important, crafted by our team. We’re all on the same page.

    Most of the teams in the company follow a more formalized process. There are different functional teams that are meant to cooperate, there are specific documents people working on projects expect from each other, and so on. On top of that there’s some office politics too, and the formalized process is used as a weapon.

    Where’s the difference? This one was quite a surprise for me, but it looks like our process comes across as mature and effective even though it isn’t recorded in any form. We just know what everyone should do. Whenever some obstacle appears along the way, people naturally try to help each other, as they see and understand what’s happening. And yes, I’m over the moon just because no one forced me to write the process description down. For other teams, on the other hand, the formalized process often feels artificial, and they sometimes focus on complying with the process (or fighting it) instead of building software.

    What Does It Mean For You?

    I’m not trying to say here that you should now automatically cut every feature you work on in half (at least), deploy three times more frequently than you do now, and throw every procedure out. It would probably end in chaos, which your managers might not appreciate.

    I’m not trying to say it was easy to get where we are now. It took great people, some experience, a lot of experimenting and enough time to get better at what we started doing. A year ago I would have had a hard time trying to point out how our approach might be better than what you could find in the rest of the company.

    And I’m not trying to say you should read this advice in the “this over that” manner known from the Agile Manifesto. It’s not only that our team is pretty different from the average team in the company; the product is also far from the standard projects we build in the organization, so I’m comparing pretty different environments. I wouldn’t even say that every single team in the company would benefit from implementing our approach, although a few of them definitely would.

    These are just things I learned while talking with Ola, who audited our team (and many other teams in the company). For me these lessons weren’t obvious. I mean, the connection between frequent deployment and short time-to-market is pretty clear to anyone who understands what “deployment” and “time-to-market” mean, but I’d never really considered that a very short production cycle may help avoid technical debt too.

    And if you’re interested in what the audit result was, I’d say it was pretty good. I mean, everyone likes to hear some compliments from time to time, so we may want to ask for another audit soon.

  • Lessons Learned: Startup Failure Part 2

    Last time I shared the mistakes we made while working on Overto – a startup which was closed down some time ago. Today comes another part – the things we did right that are worth replaying the next time I’m engaged in a startup.

    Setting up a company behind the service

    Setting up a company from the very beginning, which is quite an effort in Poland, was really a good idea. All the founders knew each other well, so lack of trust wasn’t the main reason behind the decision. We just wanted to give ourselves a bit of motivation by making it a real business. One thing we didn’t really plan for, however, was that we gained a lot of credibility by showing the company’s details instead of letting people think the whole thing was created by some student during the holidays and wouldn’t be supported in any way in the future. Seeing a company behind a service doesn’t guarantee it won’t die, but at least you can be sure someone cares.

    Long discussions before start

    Before launching the project we had very long discussions about what we planned to do and how we wanted to do it. Of course not every detail was checked, but when I compare Overto to other startup-ish projects I’ve worked on, it was really well thought out at the beginning.

    Wide range of roles covered by our experience

    We had a really good pack of skills for running an internet service. Development, administration, user support – in all those areas we had experience from the past. There weren’t many things that could have surprised us. We weren’t forced to look for a specialist in any technical aspect of building our application.

    Working in the same place

    For some time we worked in the same building, which helped a lot in the decision-making process. We could all meet ad hoc whenever something important had to be decided. After some time we spread out among different places, and suddenly we saw how much value there was in working in the same office.

    Although we invested some money to fund the project, I don’t treat it as a loss. For the experience I gained it was a low price. The next time I decide to jump into that kind of project, the chances of success will be bigger.

    First part of lessons learned from startup failure

    Whole Entrepreneurs Time series.

  • Lessons Learned: Startup Failure Part 1

    Some time ago we closed down Overto – a startup I was involved in. It was a failure – a pretty obvious thing, since we’ve closed the service. Since we learn a lot from our mistakes, I think a reliable analysis of why the business failed should be valuable for you. To begin with, the things we screwed up.

    No one working full time

    From the very beginning we knew that none of the people engaged was willing to leave their day job to commit fully to the startup. We thought we’d be able to run an internet service after hours. Up to a point that was true. As long as nothing bad was happening with the servers and the application, it was all fine. We worked on new features whenever we had enough free time. The problems started when we faced some issues with our infrastructure. We weren’t able to resolve them on the fly and had several downtimes. You can guess how that influenced the user experience. It also backfired on service development, since we had to focus on current problems instead of adding new functionality. The lack of a person working full-time who could deal with maintenance and bug fixing was the most important reason for the failure.

    Catching the market

    That’s partially a consequence of the previous point. Since we spent the majority of our limited time trying to keep the service running, we weren’t able to keep up with the changing market. We needed to cover new areas to move the application to another level, but we couldn’t complete the ongoing development. The whole project stalled in beta, and for users it didn’t look like anything was about to change.

    Lack of marketing skills

    The thin line between the life and death of an internet service is the number of users. For the initial period the numbers grew steadily. Then we hit the ceiling of what we could achieve effortlessly. It was time to do some marketing. Unfortunately, none of us was skilled in that area. Even worse, no one had enough time to fill the gap. That would have been another stopper even if we had dealt with the problems mentioned above.

    Business model

    We hadn’t verified very well the business model we set up at the beginning. We were surprised that a couple of our features weren’t as unique as we’d initially thought. Ironically, that wasn’t a big problem, since we had a bunch of ideas for adjusting the strategy to the new situation. Anyway, you should plan on changing your initial business model.

    Screwed chance of selling the business

    After we’d decided we wouldn’t be able to maintain the service in the long run, we had a chance to sell it. To make a long story short, we screwed up the negotiations by starting with way too high a price. We thought more about how much work we had put into the project than about how much it could be worth to potential buyers. Things are worth as much as someone is willing to pay for them, no matter how long it took you to produce them.

    Waiting too long with final decisions

    I think this one is a bit sentimental. Since the service was our child, we were reluctant to make the decision to close it sooner and limit our losses. We kept tricking ourselves into thinking everything would be fine, even while we couldn’t get the application back to working properly.

    Second part of lessons learned from startup failure

    Whole Entrepreneurs Time series.

  • Post Mortem Basics

    What is it?

    To make a long story short, a post mortem is a little discussion held after a project or an important part of a project. The team discusses what was done well and what was screwed up. Before the discussion, one person gathers feedback from all team members to bring food for thought. The outcomes are used in future projects.

    What’s in it for me?

    Why do post mortems? You usually have a feel for what went wrong and what went well. But do all team members feel the same? Does a developer care whether communication with the client went perfectly? Does a project manager understand all the architecture flaws developers had to face? Was the quality assurance team aware of the complex hardware installation part?

    Post mortems bring two main outcomes.

    1. Knowledge sharing. Everyone adds their two cents. All perspectives are included. Even the least important team member can bring a refreshing insight about the project.

    2. A written summary. The process of preparing a post mortem forces a person to prepare a document before the discussion. You raise your chances of ending up with a written summary that won’t fade away within a week.

    How to do it?

    The scenario is simple:

    1. A person who knows the project well asks all team members to answer three questions:
    • What went well and should be repeated in future projects?
    • What was on a good path and should be improved in future projects?
    • What went wrong and should be avoided in future projects?

    2. Then the person needs to do a lot of squeezing to get answers from most of the people on the project team, as many people won’t respond to just a polite request.

    3. All answers are assembled into one summary. That part can be tricky, as it sometimes happens that the same thing is put in different categories by different people (see the sketch below).

    4. The whole thing is discussed and adjusted during a team meeting, where everyone can add whatever comes to mind.

    5. The post mortem can be, and should be, used to improve future projects.

    The questions can be adjusted if you feel like it, although I find that the more general they are, the more interesting the things people bring up in their answers. You don’t force people to change their perspectives; you just let them exploit their areas of competence.
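
    If you’re curious how the assembly step (point 3) might look mechanically, here is a minimal Python sketch. It’s my own illustration, not part of any post mortem tool: the sample answers, the category names and the assemble function are all made up. It groups observations under the three questions and flags items that different people filed under different categories.

    # A hypothetical sketch of assembling post mortem answers.
    from collections import defaultdict

    # Each answer: (team member, category, observation).
    # The categories mirror the three questions above.
    answers = [
        ("Anna", "went well", "daily builds caught regressions early"),
        ("Bartek", "needs improvement", "daily builds caught regressions early"),
        ("Anna", "went wrong", "requirements changed without notice"),
    ]

    def assemble(answers):
        # Group observations by category and remember which categories
        # each observation was filed under.
        by_category = defaultdict(set)
        categories_of = defaultdict(set)
        for _person, category, observation in answers:
            by_category[category].add(observation)
            categories_of[observation].add(category)
        # An observation filed under more than one category is disputed.
        conflicts = {obs for obs, cats in categories_of.items() if len(cats) > 1}
        return by_category, conflicts

    summary, conflicts = assemble(answers)
    for category, observations in summary.items():
        print(category.upper())
        for obs in sorted(observations):
            marker = " (disputed)" if obs in conflicts else ""
            print("  - " + obs + marker)

    Running it prints each category with its observations and marks the disputed items, so the team can sort those out together during the meeting.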

    When to do it?

    Definitely when a project is finished. But when you run a project that lasts three quarters, you won’t remember all the issues you had to resolve during the first phases. In that case it’s quite a good idea to run post mortems after each important milestone.

    When shouldn’t you bother?

    If you don’t plan to use post mortem outcomes to improve future projects, don’t waste your time. Looking back is worthwhile only when it is a tool for improving the future.