
Living the life of an early stage startup has its challenges. One of the main ones has to be how to evolve and grow a product without much money.

Here at YourGrocer, we have had a great year with 30% growth month on month, so we are quite happy with our results.

A few months ago we had the opportunity to present our story to a group of product managers here in Melbourne. It was a great evening which gave us the opportunity to reflect on the product challenges we faced to this point. We looked back on our achievements & failures and found 4 areas that we believe helped define us.

Pragmatism

“Companies deliver nice products, a startup delivers a solution”

I read the sentence above on Twitter and it summarizes what we think about early stage companies. It’s so easy to get hung up on how perfect your…


Over the years I have worked in software development, one discussion I've often observed concerns the benefits of structuring software as projects, as most companies still do today. The de facto standard for enterprise software still seems to be (a rough sketch; this process varies a lot from company to company):

  1. Someone has an idea
  2. Idea is considered good enough to be invested in
  3. A team is assembled, and they discuss the vision and set up a plan
  4. Plan is executed with software being written
  5. Team hands over the project to a support team
  6. Software is being used (hopefully)
  7. Support team keeps software alive and kicking throughout the years

Admittedly, this is just a simplified version of what actually happens. Organisations differ in how they execute each of these phases and how long it takes to go from one to the next, but my main point is that there is usually a project team that creates a product, puts it live, and leaves once everything is done. In some cases there is even a country difference, where a local team does the development and then hands over to an offshore support team with the aim of keeping overall costs low.

As a consultant, I've seen the consequences of this behaviour too many times. Since no one is actively working on the product anymore, it keeps decaying, with the occasional patch made to add new functionality. After a few years everyone realises that it is cheaper to throw away the big ball of mud they currently have and rewrite the product from scratch… and the cycle starts again.

Now this helps to keep developers employed, so I should be happy about it, but from a company’s perspective, there are a few problems with this model:

It assumes that a software project is something static: that you can write it, finish it and then just use it forever. Since it isn't, the result is that bugs are fixed by people who didn't write the code and don't have the same understanding of it, which results in poor support and probably more bugs.

It assumes not only that software is static, but also that the business is. So instead of treating software as something that is there to help an ever-evolving business, it delivers a package that the business or its customers have to adapt to, probably resulting in worse business performance.

With the devops movement gaining momentum around the world, this scenario is changing in the development phase of a project, as Jen described here. We are starting to see more and more cases where development teams support the application while they are building it, which is definitely a step forward, but it is not all that needs to happen. Supporting a product during its initial development phase is one thing; evolving and supporting it throughout its existence is another.

If a product is live and being used, then it should be evolving as the needs of its user group (and even new user groups) evolve. And if that's the case, it should be treated as a first-class citizen, where maintenance and evolution of the code walk together, guaranteeing that the code doesn't turn into legacy. That doesn't mean it needs a team the same size as the one that built it in the first place, but it does need a team that is in contact with the customers and aims at evolving the product, not just patching it and keeping it running.

We should stop using support as a bad word. Actually, we should stop using the word support completely. We should start talking about software evolution.

More than a few years ago I read the book Agile Modeling, by Scott Ambler, and it was quite a revelation for me. I was beginning to look into extreme programming and TDD at the time, and the idea that you could (and should) write software in an evolving manner, instead of the usual big up-front architecture I had studied at university, was quite refreshing and empowering.

(As a side note, I was actually going to use the title Agile Infrastructure for this, but I've promised not to use the word anymore.)

I'm not sure how many people have read the book, but it basically goes through principles and techniques that enable you to write software one piece at a time, solving the problem at hand today and worrying about tomorrow's problem tomorrow.

If I remember correctly, there was a sentence that went something like this (please don’t quote me on that):

Your first responsibility is to make the code work for the problem you are solving now. The second one is the problem you are solving next.

Many years have passed since the book was written. Nowadays, growing software while developing it is what (almost) everyone does. The idea of drawing some kind of detailed architecture to be implemented over a few months or years is completely foreign to most sensible organisations.

Basically, evolving software is almost not interesting anymore. People do it, and know how to do it (as I wrote this I realised that isn't actually true: a lot of companies don't do it or don't know how to, but let's keep in mind the ones that do…).

In the meantime, a lot has changed, and areas that were completely static in the past are becoming increasingly dynamic, the current trendy one being IT infrastructure.

The rise of virtual infrastructure and the so-called devops movement has produced tools and practices that make it possible to create thousands of instances on demand and automatically deploy packages whenever and wherever you want. However, the thinking behind infrastructure within most IT departments is the equivalent of waterfall for software.

I’m not just talking about auto-scaling here, since that seems to be a concept that’s easy to grasp. What I don’t quite get is why the same thinking that we have when writing software can’t be applied when creating the servers that will run it.

In other words:

  1. Start writing your application in one process, on one server*, and put it live for a few users.
  2. Try to increase the number of users until you hit a performance bottleneck
  3. Solve the problem by making it better. Maybe multiple processes? Maybe more servers? Maybe you need some kind of service that will scale separately from the main app?
  4. Repeat until you get to the next bottleneck

* ok, two for redundancy…
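The loop above can be sketched in code. This is a toy model only: the latency function, the capacity numbers and the SLA are all made up, purely to illustrate the "grow only when you hit the next bottleneck" idea.

```python
# A toy sketch of the "evolve until you hit a bottleneck" loop.
# The capacity model and all numbers are invented for illustration only.

def p95_latency_ms(users, servers, capacity_per_server=100):
    """Pretend latency model: degrades once a server handles more than
    `capacity_per_server` users."""
    load = users / servers
    if load <= capacity_per_server:
        return 50.0
    return 50.0 * (load / capacity_per_server)

def servers_needed(target_users, sla_ms=100.0, servers=2):
    """Grow one server at a time, and only when the measured bottleneck
    demands it (starting at two servers, for redundancy)."""
    while p95_latency_ms(target_users, servers) > sla_ms:
        servers += 1  # the simplest change that removes today's bottleneck
    return servers
```

The point is not the arithmetic but the shape of the loop: no capacity is added until a real measurement says it is needed.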

The tools and practices are definitely there. We can automate every part of the deployment process, we can test it to make sure it’s working and we can refactor without breaking everything. However, there are a few common themes that come back when talking about this idea:

“If we do something like this we will do things too quickly and create low quality infrastructure”

This is the equivalent of the "if we don't write a UML diagram, how do we know what we are building?" argument that used to come up when evolving software was still a mystery to most people. It's easy to mistake simplicity for low quality, but that doesn't need to (and shouldn't) be the case. As with application code, once you put complexity in, it is a major pain to take it out, and unnecessary complexity just increases the chance of problems. Simple solutions are, and will always be, more reliable and robust.

“We have lots of users so we know what we need in terms of performance”

If a new software project is being developed, it is pretty much understood nowadays that nobody knows what is going to happen and how it is going to evolve over time. So pretending that we know it in infrastructure land is just a pipe dream in my opinion.

“We have SLAs to comply with”

SLAs are the IT infrastructure equivalent of software regulations and certifications: sometimes useful, sometimes just something we can use to justify spending money. If there are SLAs, deal with them, but still in the simplest possible way. If you need 99.9% uptime, then provide 99.9% uptime, but don't do that and also use a CDN to make things faster (or cooler) just in case.

As it’s said about test code, infrastructure code is code. Treat it the same way.
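For instance (a hypothetical sketch: the function name is invented and the output merely mimics an nginx upstream block), infrastructure code can be unit-tested like any other code:

```python
# Hypothetical example: infrastructure code under test like any other code.
# The config format mimics an nginx upstream block; the names are made up.

def render_upstream(name, hosts):
    """Render a simple load-balancer config block from a list of hosts."""
    if not hosts:
        raise ValueError("an upstream needs at least one host")
    lines = [f"upstream {name} {{"]
    # Sort hosts so the output is deterministic and diffs stay readable.
    lines += [f"    server {host};" for host in sorted(hosts)]
    lines.append("}")
    return "\n".join(lines)
```

A config generator like this gets the same treatment as application code: version control, unit tests, refactoring.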

If you have ever worked with me you will know I'm not a big fan of estimates, mostly for the reasons better explained here, here and here, but there are still moments within a project when there are a bunch of stories written and the team needs to guess how much time will be needed:

  • the project might be beginning and we need to know what is realistic or not
  • there might be go to market activities that need to be synchronised in advance
  • there might be a fixed deadline and we need to understand if there is any chance of making it or not

In cases like these, I’m still not a big fan of using planning poker or similar practices. First of all, it takes a _lot_ of time. Whoever has experienced a long session of estimation can probably remember people rolling their eyes as soon as we get to card number 54 (or around that…).

And the short attention span of tech people (which could probably stand to improve) is not the only problem here. In every project there will be a lot of similar cards, and re-estimating similar things over and over is probably not the most productive thing a software team could be doing, and it also tests the patience of everyone involved.

Instead, what I've used in the past is a simple technique for group estimation (which I'm sure I saw somewhere before, so don't credit me for it) that allows a group to get to some numbers with less time and effort.

1. Write all the stories you have in cards, and put them on top of a table.

2. Create 3 separate areas within the table, based on different timeframes. What I normally use is 1-5 days, 1-2 weeks and “too big”.

3. Ask the team to go over the stories and position them in the categories they find appropriate. Let individual people move (and move again) cards however they want for a few minutes.

4. Let everyone go through the table and look at the cards, and observe the ones that are being moved between categories frequently.

5. Get the unstable cards and the ones in the “too big” category and discuss them within the team. Rewrite cards as appropriate.

6. Rinse and repeat if needed.
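Steps 4 and 5 can be sketched as a small helper, assuming you jot down each time a card changes category (the data shapes and card names here are invented for illustration):

```python
# A sketch of steps 4-5: spotting the cards worth discussing, namely the
# ones that kept moving between categories plus the ones left in "too big".
from collections import Counter

def cards_to_discuss(moves, final_buckets, churn_threshold=3):
    """moves: list of card names, one entry per time a card changed bucket.
    final_buckets: dict of card -> final bucket ("1-5 days", "1-2 weeks",
    "too big"). Returns the cards the team should talk through."""
    churn = Counter(moves)
    unstable = {card for card, n in churn.items() if n >= churn_threshold}
    too_big = {card for card, bucket in final_buckets.items()
               if bucket == "too big"}
    return sorted(unstable | too_big)
```

In practice a facilitator does this by eye; the code just makes the rule explicit.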

Is it precise? Probably not that much. Are any estimates precise? Definitely not. So far, every time I've used it we got results in the right timescale, which is probably the most you will get from software estimates anyway.

As for the fact that not every story is individually discussed by the team, a common argument in favour of detailed estimates, I believe there are better times to do that than when looking at all the cards with no context or experience working on them. Time to establish some story kick-offs, maybe?

To close my participation at LAST Conference, I presented a follow-up to the talk I gave at LESS 2011, about why I believe most organisations are not set up for learning.

In the presentation I explained why I believe change programs are often unfair to employees, asking them to embrace change, but only the change that management is putting forward.

I also talked about learning within organisations and product teams, and how management teams should step back and understand their new role as leaders, rather than controllers, of the company.

If it sounds interesting to you, there is more info here.

Last week I presented with Herry at LAST Conference about our experience of helping to form a new distributed team across Melbourne and Xi'an while transferring knowledge of an existing product.

It was an interesting challenge, since we had a not-so-simple task to do, which was to:

  • finish a major release of an existing product
  • ramp down the existing team in Sydney
  • create a new team distributed between Melbourne and Xi’an
  • transfer the knowledge as smoothly as possible, so the new team could start delivering new functionality as soon as possible

It was great to talk about our experience for the first time, and the number of questions that came from the audience showed us that it is an interesting (and controversial) topic for other people as well.

The slides are available here. Unfortunately it's hard to get all the content just from them, but get in touch with us if you have any questions.

Last Friday I participated in the Lean, Agile & Systems Thinking conference, a one-day event organised by Craig and Ed with the intention of being a day of content "from practitioners" and "to practitioners".

I have told both of them, and a bunch of other people, but it won't hurt to say again that I believe Australia was in need of an event like this, organised by the community and focused on providing useful content more than anything. Its success was definitely proven by the attendance and by the twittersphere on the day, so if you haven't congratulated them yet, don't wait any longer!

I'm going to try to share what I've heard around the event here, but there are definitely more places to look, and videos of some sessions should be available soon.

Storytelling by Shawn Callahan

I started the day by going to the double session from Shawn of Anecdote on business storytelling. I've been reading about the subject for a while and the session was very interesting.

After proving, in a storytelling exercise, that everyone has a story to tell (and that we start telling them after we hear other people doing it!), Shawn spoke about the storytelling spectrum and how we should keep business stories at what he called the small 's' end, avoiding the risk of telling something so epic that it makes us lose the engagement of our colleagues.

He also spoke about anecdote circles, where people gather to tell stories about the environment they are in, and gave a few examples and tips on how to create questions that will spark stories from people:

  • Never ask "why" questions
  • Use "when" and "where"
  • Ask open questions
  • Ask for an example if the answer is too narrow
  • Include emotions, as in "when did you feel like…?"

Some examples of questions:

  • What have you seen lately that has surprised you?
  • When was the last time that something small made a big difference?

He finished the session talking about narrative and how you can construct a story, showing video examples of great storytellers and how they used the elements he was talking about.

Overall a great session, definitely recommended if you have the opportunity!

Cynefin Model by Kim Ballestrin

Kim gave an overview of the Cynefin model and how she is using it in her current work.

Having only a brief understanding of the model beforehand, I found it really useful to see someone talk about it in action and about what it can be used for.

She gave an example of how the model can be used in classifying types of work, and then using the classification to execute them differently. She used three categories for that:

  • Complex work (with uncertain outcome) – create an experiment and test it in the cheapest way possible, to verify whether it's worth doing
  • Complicated work (where analysis is required) – analyse
  • Simple work (where the outcome is certain) – build

She spoke about what an experiment actually is in that context and how it could be just a conversation, giving the example of a kids’ party, where parents are constantly assessing risk and acting on weak signals.

Live Below the Line by Claire Pitchford

Claire spoke about her experience as a business analyst in the Live Below the Line project, where they had to deliver a campaign website in six weeks.

She talked in some detail about the inception process performed with the client and the whole team during one week, helping everyone reach a shared understanding of the project, and about how different tools such as personas, storyboards, stories and estimation were used in the process.

It was a very practical talk about what was done, and it was quite impressive that she was happy to talk about everything that went wrong: things she tried that didn't work, but also what she learned along the way.

As I mentioned before, it was a great day above all. I also presented two sessions during the day, and will write about them in separate posts.
