Archive


Over the years I have worked in software development, one discussion I’ve often observed is about the benefits of structuring software as projects, as most companies still do today. The de facto standard for software in an enterprise still seems to be (a rough sketch; this process varies a lot from company to company):

  1. Someone has an idea
  2. Idea is considered good enough to be invested in
  3. A team is assembled, and they discuss the vision and set up a plan
  4. Plan is executed with software being written
  5. Team hands over the project to a support team
  6. Software is being used (hopefully)
  7. Support team keeps software alive and kicking throughout the years

Admittedly, this is just a simplified version of what actually happens. Organisations differ in how they execute each of these phases and in how long it takes to go from one to the other, but my main point is that there is usually a project team that creates a product, puts it live, and leaves once everything is done. In some cases there will even be a country difference, where a local team does the development and then hands over to an offshore support team with the aim of keeping the overall cost low.

As a consultant, I’ve seen the consequences of this behaviour too many times. Since no one is actively working on the product anymore, it keeps decaying, with the occasional patch made to add new functionality. After a few years everyone realizes that it is cheaper to throw away the big ball of mud they currently have and rewrite the product from scratch… and the cycle starts again.

Now this helps to keep developers employed, so I should be happy about it, but from a company’s perspective, there are a few problems with this model:

It assumes that a software project is something static, that you can write, finish and then just use forever. Since it isn’t, the result is that bugs are fixed by people who didn’t write the code and don’t have the same understanding of it, which results in poor support and probably more bugs.

It assumes that not only the software is static, but also the business. So instead of treating software as something that is there to help an ever-evolving business, it delivers a package that the business or its customers have to adapt to, probably resulting in worse business performance.

With the devops movement gaining momentum around the world, this scenario is changing in the development phase of a project, as Jen described here. We are starting to see more and more cases where development teams support the application while they are building it, which is definitely a step forward, but it is not all that needs to happen. Supporting a product during its initial development phase is one thing; evolving and supporting it throughout its existence is another.

If a product is live and being used, then it should be evolving as the needs of its user group (and even new user groups) evolve. And if that’s the case, it should be treated as a first-class citizen, where maintenance and evolution of the code walk together, guaranteeing that the code doesn’t turn into legacy. That doesn’t mean it needs a team the same size as the one that built it in the first place, but it needs a team that is in contact with the customer and aiming to evolve the product, not just patching it and keeping it running.

We should stop using support as a bad word. Actually, we should stop using the word support completely. We should start talking about software evolution.

More than a few years ago I read the book Agile Modeling, by Scott Ambler, and it was quite a revelation for me. I was beginning to look into extreme programming and TDD at the time, and the fact that you could (and should) write software in an evolving manner, instead of the usual big up-front architecture I had studied in university, was quite refreshing and empowering.

(As a side note, I was actually going to use the title Agile infrastructure for this, but I’ve promised not to use the word anymore.)

Not sure how many people have read the book, but it basically goes through principles and techniques that enable you to write software one piece at a time, solving the problem at hand today and worrying about tomorrow’s problem tomorrow.

If I remember correctly, there was a sentence that went something like this (please don’t quote me on that):

Your first responsibility is to make the code work for the problem you are solving now. The second one is the problem you are solving next.

Many years have passed since the book was written. Nowadays, growing software while developing it is what (almost) everyone does. The idea of drawing some kind of detailed architecture to be implemented over months or years is completely foreign in most sensible organisations.

Basically, evolving software is almost not interesting anymore. People do it, and know how to do it (as I wrote this I realised that isn’t actually true; a lot of companies don’t do it or know how to, but let’s keep in mind the ones that do…).

In the meantime, a lot has evolved and new areas that were completely static in the past are becoming increasingly dynamic, the current trendy one being IT infrastructure.

The rise of virtual infrastructure and the so-called devops movement has produced tools and practices that make it possible to create thousands of instances on demand and automatically deploy packages whenever and wherever you want. However, the thinking behind infrastructure within most IT departments is still the equivalent of waterfall for software.

I’m not just talking about auto-scaling here, since that seems to be a concept that’s easy to grasp. What I don’t quite get is why the same thinking that we have when writing software can’t be applied when creating the servers that will run it.

In other words:

  1. Start writing your application with one process, on one server*, and put it live for a few users.
  2. Try to increase the number of users until you hit a performance bottleneck
  3. Solve the problem by making it better. Maybe multiple processes? Maybe more servers? Maybe you need some kind of service that will scale separately from the main app?
  4. Repeat until you get to the next bottleneck

* ok, two for redundancy…

The tools and practices are definitely there. We can automate every part of the deployment process, we can test it to make sure it’s working, and we can refactor without breaking everything. However, there are a few common themes that come up when talking about this idea:

“If we do something like this we will do things too quickly and create low quality infrastructure”

This is the equivalent of the “if we don’t write a UML diagram, how do we know what we are building?” argument that used to come up when evolving software was still a mystery to most people. It’s easy to mistake simplicity for low quality, but that doesn’t need to (and shouldn’t) be the case. As with application code, once you put complexity in, it is a major pain to take it out, and unnecessary complexity just increases the chance of problems. Simple solutions are, and will always be, more reliable and robust.

“We have lots of users so we know what we need in terms of performance”

If a new software project is being developed, it is pretty much understood nowadays that nobody knows what is going to happen and how it is going to evolve over time. So pretending that we know it in infrastructure land is just a pipe dream in my opinion.

“We have SLAs to comply with”

SLAs are the IT infrastructure equivalent of software regulations and certifications: sometimes useful, sometimes just something we can use to justify spending money. If there are SLAs, deal with them, but still in the simplest possible way. If you need 99.9% uptime, then provide 99.9% uptime, but don’t do that and also use a CDN to make things faster (or cooler) just in case.

As it’s said about test code, infrastructure code is code. Treat it the same way.
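To make that concrete, here is a minimal sketch of what treating infrastructure code like code can look like: a smoke test that runs after every automated deployment, the same way unit tests run after every build. It’s written in Python with the requests library, and the health endpoint, URL and latency threshold are my own made-up assumptions, not a recipe.

```python
# A hedged sketch, not a definitive implementation: the URL, the
# /health endpoint and the latency threshold are invented examples.
import requests

HEALTH_URL = "https://myapp.example.com/health"  # hypothetical endpoint

def test_app_answers():
    # The deployment is only "done" when the running system says so.
    response = requests.get(HEALTH_URL, timeout=5)
    assert response.status_code == 200

def test_app_is_fast_enough():
    # Encode today's (simple!) expectation; tighten it only when a
    # real bottleneck forces you to, just like evolving application code.
    response = requests.get(HEALTH_URL, timeout=5)
    assert response.elapsed.total_seconds() < 1.0
```

Run something like this as the last step of the deployment pipeline (pytest will pick up the test_ functions as-is), and refactoring the infrastructure stops being a leap of faith.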

If you ever worked with me you’ll know I’m not a big fan of estimates, mostly for the reasons better explained here, here and here, but there are still moments within a project when there is a bunch of stories written and the team needs to guess how much time will be needed:

  • the project might be beginning and we need to know what is realistic or not
  • there might be go-to-market activities that need to be synchronised in advance
  • there might be a fixed deadline and we need to understand if there is any chance of making it or not

In cases like these, I’m still not a big fan of using planning poker or similar practices. First of all, it takes a _lot_ of time. Whoever has experienced a long estimation session can probably remember people rolling their eyes as soon as we get to card number 54 (or around that…).

And handling the short attention span of tech people (which could probably stand to improve) is not the only problem here. In every project there will be a lot of similar cards, and re-estimating similar things over and over is probably not the most productive thing a software team could be doing, and it also tests the patience of everyone involved.

Instead, what I’ve used in the past is a simple technique for group estimation (one that I’m sure I saw somewhere before, so don’t credit me for it) that allows a group to get to some numbers with less time and effort (there’s a toy sketch of the mechanics in code after the steps).

1. Write all the stories you have in cards, and put them on top of a table.

2. Create three separate areas on the table, based on different timeframes. What I normally use is 1-5 days, 1-2 weeks and “too big”.

3. Ask the team to go over the stories and position them in the categories they find appropriate. Let individual people move (and move again) cards however they want for a few minutes.

4. Let everyone go around the table and look at the cards, and observe the ones that are moved between categories frequently.

5. Get the unstable cards and the ones in the “too big” category and discuss them within the team. Rewrite cards as appropriate.

6. Rinse and repeat if needed.
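If it helps to see the mechanics spelled out, here is a toy sketch of the rules above in Python. It is not a tool anyone needs (the whole point is cards on a table), and every name and threshold in it is invented for illustration: a card that keeps changing buckets, or that sits in “too big”, is the one the team discusses in step 5.

```python
# A toy model of the card-bucketing technique; all names and
# thresholds are invented for illustration.
from collections import Counter

BUCKETS = ("1-5 days", "1-2 weeks", "too big")

placement = {}         # current bucket for each story card
rebuckets = Counter()  # how often a card has changed bucket

def move_card(card, bucket):
    """A team member places (or re-places) a card in a bucket."""
    assert bucket in BUCKETS
    if card in placement and placement[card] != bucket:
        rebuckets[card] += 1  # re-bucketing signals disagreement
    placement[card] = bucket

def cards_to_discuss(instability_threshold=2):
    """Step 5: the unstable cards plus everything in 'too big'."""
    unstable = {c for c, n in rebuckets.items() if n >= instability_threshold}
    too_big = {c for c, b in placement.items() if b == "too big"}
    return unstable | too_big
```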

Is it precise? Probably not. Are any estimates precise? Definitely not. So far, every time I’ve used it we got a good level of results, in the right timescale, which is probably the most you will get from software estimates anyway.

Regarding the fact that not every story is individually discussed within the team (a common argument in favour of detailed estimates), I believe there are better times to do that than when looking at all the cards with no context or experience of working on them. Time to establish some story kick-offs, maybe?

To close my participation at LAST Conference, I presented a follow-up to the talk I gave at LESS 2011, about why I believe most organisations are not set up for learning.

In the presentation I explained my thoughts on why I believe change programs are often unfair to employees, asking them to embrace change, but only the change that management is putting forward.

I also talked about learning within organisations, product teams, and how management teams should step back and understand their new role as leaders instead of controllers of the company.

If it sounds interesting to you, there is more info here.

Last week I presented with Herry at LAST Conference about our experience of helping to form a new distributed team in Melbourne and Xi’an while transferring knowledge of an existing product.

It was an interesting challenge, since we had a not-so-simple task to do, which was to:

  • finish a major release of an existing product
  • ramp down the existing team in Sydney
  • create a new team distributed between Melbourne and Xi’an
  • transfer the knowledge as smoothly as possible, so the new team could start delivering new functionalities as soon as possible

It was great to talk about our experience for the first time, and the number of questions that came from the audience showed us that it is an interesting (and controversial) topic for other people as well.

The slides are available here; unfortunately it’s hard to get all the content just from them, so get in touch with us if you have any questions.

Last Friday I participated in the Lean, Agile & Systems Thinking conference, a one-day event organised by Craig and Ed with the intention of being a day with content “from practitioners” and “to practitioners”.

I have told both of them and a bunch of other people, but it won’t hurt to say it again: I believe Australia was in need of an event like this, organised by the community and focused on providing useful content more than anything. Its success was definitely proven by the attendance, and also by the twittersphere on the day, so if you haven’t congratulated them yet, don’t wait any longer!

I’m going to try to share what I’ve heard around the event here, but there are definitely more places to look, and videos of some sessions should be available soon.

Storytelling by Shawn Callahan

I started the day by going to the double session from Shawn, from Anecdote, on business storytelling. I’ve been reading on the subject for a while and the session was very interesting.

After proving in a storytelling exercise that everyone has a story to tell (and that we start telling them after we hear other people doing it!), Shawn spoke about the storytelling spectrum and how we should keep business stories in what he called the small ‘s’ version, avoiding the risk of telling something so epic that it makes us lose the engagement of our colleagues.

He also spoke about anecdote circles, where people gather to tell stories about the environment they are in, and gave a few examples and tips on how to create questions that will spark stories from people:

  • Never ask why questions
  • Use when and where
  • Ask open questions
  • Ask for an example if the answer is too narrow
  • Include emotions, as in “when did you feel like … ?”

Some examples of questions:

  • What have you seen lately that has surprised you?
  • When was the last time that something small made a big difference?

He finished the session by talking about story narrative and how you can construct a story, showing video examples of great storytellers and how they used the elements he had been talking about.

Overall a great session, definitely recommended if you have the opportunity!

Cynefin Model by Kim Ballestrin

Kim gave an overview of what the Cynefin model is and how she is using that in her current work.

Having had only a brief understanding of the model beforehand, it was really useful to see someone talk about it in action and what it can be used for.

She gave an example of how the model can be used in classifying types of work, and then using the classification to execute them differently. She used three categories for that:

  • Complex Work (with uncertain outcome) – Create an experiment and test it in the cheapest way possible, to verify whether it’s worth being done
  • Complicated Work (where analysis is required) – Analyze
  • Simple Work (where outcome is certain) – Build

She spoke about what an experiment actually is in that context and how it could be just a conversation, giving the example of a kids’ party, where parents are constantly assessing risk and acting on weak signals.

Live Below the Line by Claire Pitchford

Claire spoke about her experience as a business analyst in the Live Below the Line project, where they had to deliver a campaign website in six weeks.

She talked in some detail about the inception process that was run with the client and the whole team over one week, helping everyone get to a shared understanding of the project, and how different tools such as personas, storyboards, stories and estimation were used in the process.

It was a very practical talk about what was done, and it was quite impressive that she was happy to talk about everything that went wrong, the things she tried that didn’t work, and also what she learned along the way.

As I mentioned before, it was a great day above all. I also presented two sessions during the day, and will write about them in separate posts.

It is well known that Agile (not sure about the big/small “A” thing, it always gets me confused) has gone mainstream nowadays. With big companies and conferences endorsing it, long gone are the days when you actually had to convince people this was a good idea… and that’s great!

However, with the early/late majority now adopting Agile, we are not talking about small companies anymore, which means the challenge is no longer how to get teams delivering better, but whole departments and organisations. So yes, scaling agile is one of the current challenges, and the preferred approach is the dreaded (at least by me) “Change Program”.

I don’t believe agile is a fad or just a small-team thing, since empowerment, short feedback loops and delivery of results are never going to be a fad, and aren’t IT-specific either. There are many other examples, in different industries (errm… Toyota?), which show that the same principles can be applied successfully at a much larger scale, to completely different problems.

The problem begins, though, when in scaling agile companies try to scale the practices, keeping the control mindset, instead of scaling what’s important: the principles.

It is common to see companies, for example, adopting a hierarchical structure to create multiple agile teams, all “reporting” to a main office, killing most of what was good about the idea.

If agile in the organisation is the challenge, we need to think about empowerment, short feedback and delivery of real results for the whole company. Unfortunately, the more common behaviour is to talk about bigger walls, standardising velocity across teams and synchronising iterations.

And if we go back to the roots and look at the manifesto, I believe we can find good guidance on what can be done:

Individuals and interactions over processes and tools: Instead of scaling processes and tools to control teams, why not increase the ability of individuals to traverse the organisation, making software developers talk to real customers and sales people talk to QAs?

Working software over comprehensive documentation: Scale by making a working product the final goal, removing the necessity of internal product briefs, extensive training of sales staff and long delivery cycles.

Customer collaboration over contract negotiation: It is actually impressive how this still makes sense now. We just need to change the customer. It’s not the product owner anymore, it’s _the_ customer, the one who buys your product and pays the bills.

Responding to change over following a plan: Feedback, feedback and feedback. Not from a showcase though, but from the market.

Yes, if it feels like you’ve heard these ideas before, it’s because they are not new. Product teams are the way to scale agile in my opinion. Small and independent groups of people that can get to the best results in short iterations with feedback from the market, in a Lean Startup style. The game shouldn’t be about execution anymore, it is about innovation.

I’ve been involved in quite a few discussions about shared code lately, mostly because I’ve been working in a team that has developed a set of internal tools that are going to be used across the company. One of the common topics for debate in this area is: what happens next?

After the main part of the development has been done, who is responsible for maintaining and evolving this shared codebase?

I’ve written a little bit about shared code and I don’t think it is the best solution for most cases, but in a few situations sharing is necessary, so how should we handle it?

As with any popular topic, opinions are diverse, but here are my two cents on the conversation:

It depends : )

The way I see it, there are two main types of shared codebases that can exist in a company, and each of them should be treated in a different way.

Components that are not critical for any team

This is the case of the famous “utils” package, or any other component that was once written because someone needed it and then got reused because it was useful for other people. The main characteristic here is that this code doesn’t sit on the critical path of any team or application. In other words, I can choose whether or not to use it, depending on whether it fills my needs.

In this situation, I believe that using an open source model is just fine. Think of your packages as open source libraries that people can use or not as they wish, and if at some point one needs to be maintained or evolved, the people using it can spend some time doing it.

It doesn’t matter that much if they break things or take the codebase in a direction no one expected, since people can always use older versions or just find a replacement for it.

Components that are critical to teams 

Now this is a more delicate situation. If teams depend on your component working all or most of the time, I believe the open source model is not appropriate anymore.

As I mentioned before, developing internal applications should also take a customer-centric view, and that takes time and effort. As the user base grows and diversifies, it’s important to have people on the team who can steer development in the right direction, thinking about the roadmap for the future and also about development quality.

For a critical component, the open source model makes it hard for anyone to own the responsibility of maintaining the code; hard decisions start to get postponed, while patches happen more and more often.

It doesn’t take long until everyone is attached to a codebase that no one really understands and is afraid to change. In other words, legacy is written.

In this situation, the biggest step is to realize you have a product to maintain, and that it will cost, as it costs to maintain any product. There is a need for a focused team that can plan the future and guarantee that users are happy with what they are receiving; avoiding that pain will just guarantee a greater amount of it in the future.

That’s it!

“Software, agile and some nonsense” - That used to be the description of my blog.

With the extreme popularisation and use of the word “agile” in the IT industry by anyone who wants to be part of the in crowd, sometimes referring to practices and ideas that are well beyond what I believe is correct, I’ve decided to just stop using the word.

I believe the risk of being misinterpreted when you mention “agile” nowadays is bigger than the benefit you get out of using it, so let’s start talking about feedback, transparency and delivering value and see where that will lead!

A common topic when discussing IT in organisations is how to structure software teams; more specifically, how do we divide work when a company grows enough that one team is not enough anymore, and how do we deal with shared code?

This is not a simple question, and it probably deserves a few blog posts of its own, but for now I want to focus on one specific aspect of it: the duplication of effort.

A simple example: if my company has multiple teams delivering software, they will eventually hit the same problems, such as how to deploy code, how to do logging (the classic example!), how to monitor, and other things like that.

And in almost all the organisations I have been in, the response is unanimous: we should invest in building shared tools and capabilities for that! It’s all in the name of uniformity and standardisation, which must be a good thing.

Well, as you might imagine at this point, I disagree.

Building shared applications in IT is the equivalent of economies of scale for software, and it is as outdated in IT as it is in other industries.

In knowledge work (which we all agree software development is at this point), no effort is really duplicated. You can develop the same thing 100 times and you will most likely get 100 different results, some quite similar, yet still different. And that small difference is where innovation lies.

This is the trade-off that companies should be aware of. If people don’t have the freedom to experiment and look for different solutions to problems, it is very easy to get to a place where everyone is stuck in the old way of thinking. And that’s what companies are really doing when making people use common tools, shared applications, etc. They are putting cost-saving in front of innovation, a lot of the time without realising it.

And I don’t mean we can’t all learn from each other and reuse solutions when they are appropriate. After all, there is still a place for economies of scale.

We are all standing on the shoulders of people that came before us and should keep doing it, even internally to an organisation. But that should be an organic and evolutionary process, not one that is defined by an architecture team.
