
If you have ever been on an Agile project, or something that looks like one, you have probably heard about the concept of spikes.

For those who haven't, spikes are the agile approach to reducing technical risk in a project. When you are not sure how to build something, or fear that some piece of functionality might take much longer than expected and you need to gain more confidence about it, it's time to run a spike. Usually timeboxed, spikes are technical investigations with the objective of clarifying a certain aspect of the project.

In my current team, we are developing a few tools that are quite specific to our context, and we were not sure how to solve a few issues, so we have been playing spike cards quite frequently.

None of this is new, and I'm sure most of you have done it before. If you did it the way I used to, you would write a spike card, something like "investigate technology X for problem Y", spend two days on it, and have an answer for it in your head once you were finished.

In our current context, team members were rotating quite quickly, so we were worried that any knowledge we gained from spikes could be lost if it lived only... let's say... in our heads.

Not wanting just to write the findings up, as we first thought of doing, we decided to tackle the problem with the golden hammer of agile development: tests!

So, instead of writing tests to drive how we write our code, we started writing tests that state the assumptions we have about the things we are investigating, letting us verify them (or not) and leaving executable documentation to show to other people.

For example, here is some code we wrote to investigate how ActiveRecord would work in different situations:

it 'should execute a specific migration' do
  CreateProducts.migrate(:up)
  table_exists?("products", @db_name).should be_true
  table_exists?("items", @db_name).should be_false
end

it 'should execute migrations up to a specific version' do
  ActiveRecord::Migrator.migrate(ActiveRecord::Migrator.migrations_paths, 2) { |migration| true }
  table_exists?("products", @db_name).should be_true
  table_exists?("items", @db_name).should be_true
  table_exists?("customers", @db_name).should be_false
end

it 'should not execute following migrations if one of them fails' do
  begin
    ActiveRecord::Migrator.migrate(ActiveRecord::Migrator.migrations_paths, nil) { |migration| true }
  rescue StandardError => e
    puts "Error: #{e.inspect}"
  end
  # the failing migration got far enough to create its table
  table_exists?("invalid", @db_name).should be_true
  # the schema version stays at the last successful migration
  m = ActiveRecord::Migrator.new(:up, ActiveRecord::Migrator.migrations_paths, nil)
  m.current_version.should == 3
  table_exists?("products", @db_name).should be_true
  table_exists?("items", @db_name).should be_true
  table_exists?("customers", @db_name).should be_true
  # migrations after the failing one were never executed
  table_exists?("another_table", @db_name).should be_false
end
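The table_exists? helper in these specs is our own, not RSpec's; a minimal sketch of how it can be written, assuming a MySQL-style information schema and an open ActiveRecord connection:

# Helper used by the specs above: checks the information schema for the
# table. Assumes ActiveRecord::Base is already connected.
def table_exists?(table_name, db_name)
  sql = "SELECT COUNT(*) FROM information_schema.tables " +
        "WHERE table_schema = '#{db_name}' AND table_name = '#{table_name}'"
  ActiveRecord::Base.connection.select_value(sql).to_i > 0
end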

We have used this technique just a few times, and I won't guarantee it will always be the best option, but so far the result for us is code that can be executed, demonstrated and easily extended by others, making it easier to transfer knowledge within our team.

Recently I've started working often with Puppet, using it to provision environments for the project I'm working on. One of the things I quickly realised when using it was how long the feedback loop was between committing code and actually verifying that the manifest worked as intended. In my situation, it looked something like this:

  1. Work on puppet manifests, making a few changes
  2. Commit code to repository
  3. Wait for the build to finish, which only verified the syntax
  4. Wait for latest version to be published on the puppet master
  5. Wait for next sync between master and client
  6. Check that configuration was applied correctly on the client

As you can see, not very simple. If you also consider that I am not very experienced with Puppet, you can imagine how often I ended up retrying things in this very long loop, which can wear out anyone's patience.

Testing Infrastructure Code

Coming from a development background, and being used to very fast feedback about the code I write, I went searching for testing tools that could ease my pain.

Unfortunately, most of the tools I found were not ideal, since they focused on unit testing code, such as rspec-puppet. I'm not sure what others think of it, but in the case of Puppet manifests and Chef recipes, unit testing doesn't make much sense to me: there is no code being executed, so tests end up looking like a kind of reverse programming, where you just assert what you already wrote, which doesn't guarantee that the code actually works.
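For illustration, a unit test in the rspec-puppet style looks roughly like this (the module name and resources are made up, following its documented matcher style):

# An rspec-puppet style example: it asserts the compiled catalogue contains
# the resources the manifest declares, without applying anything anywhere.
require 'spec_helper'

describe 'ziptools' do
  it { should contain_package('zip') }
  it { should contain_file('/etc/ziptools.conf') }
end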

Introducing Toft

Luckily, one of the options I found was Toft, a library aimed at writing integration tests for infrastructure code using Linux containers. The main idea is that you write Cucumber features verifying what you expect the target machine to have (packages, files, folders, etc.), and Toft starts a Linux container, applies your configuration and runs your tests against it.

It can also be run from a Vagrant box, so you can have your tests running on your Mac, which is quite handy.

Features can be created using normal Cucumber steps, and they mostly rely on ssh'ing into the target machine and verifying what's going on in it, so they are quite easy to extend and adapt to your needs. Here is an example of a feature verifying that a specific package has been installed.

Scenario: Check that package was installed on centos box
  Given I have a clean running node n1
  When I run puppet manifest "manifests/test_install.pp" on node "n1"
  Then Node "n1" should have package "zip" installed in the centos box
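Writing your own steps is mostly a matter of running a command on the node over SSH; here is a rough sketch of what such a step could look like (the root SSH access and the rpm query are my assumptions, not Toft's actual helpers):

# Hypothetical custom step: asserts a package is installed by querying rpm
# over SSH on the container. Assumes "n1" resolves and root access is set up.
Then /^Node "([^"]*)" should have package "([^"]*)" installed in the centos box$/ do |node, package|
  output = `ssh root@#{node} "rpm -q #{package}"`
  $?.exitstatus.should == 0
  output.should include(package)
end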

We've started using it in our team when writing new manifests, and have also set up a CI build with it, which is quite useful for guaranteeing our manifests keep working over time.

Toft is still in its early days, but I believe it has quite a lot of potential. If you are using Chef or Puppet, you should definitely check it out at:

http://github.com/exceedhl/toft

I've spent some time today changing the Cucumber/Capybara tests in one of my pet projects to use a page model. Since I didn't find much about it on the interwebs, why not write it up here?

The idea behind having a page model is to keep steps related to a specific page of your app in the same place, reducing the repetition of steps across different tests. It's definitely not a complicated practice, and setting up Capybara for it is simple.

To set the context, I'm using Cucumber 0.10 with Capybara 0.4.1.2 and Rails 3.0.1, and have done the standard installation steps recommended by the cucumber-rails GitHub page.

The only modification I've made to the generated structure is adding a folder for the page objects, so the final structure looks like this:
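Something like the following, where the feature and step file names, and the exact position of the pages folder, are just my illustrative guesses:

features/
  manage_tasks.feature
  step_definitions/
    task_steps.rb
  support/
    env.rb
  pages/
    home_page.rb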

As you can see, inside the pages folder there is a home page file, which is responsible for every action/assertion related to the home page.

Nothing new with the Cucumber features, which keep their standard style:

Feature: Manage tasks
  In order to manage my tasks,
  a user
  wants to create tasks in different categories

  Scenario: Create a new task
    Given I am in the Do Me home page
    When I create an urgent and important task with description "my task"
    Then I should see "my task" in the "Urgent and Important" section

However, in order to create our page object, we need to inject the test driver into it, which in Capybara's case is the session object.

Given /^I am in the Do Me home page$/ do
  @home_page = HomePage.new(Capybara.current_session)
  @home_page.visit
end

When /^I create an urgent and important task with description "([^"]*)"$/ do |task_description|
  @home_page.fill_task_description(task_description)
  @home_page.check_important
  @home_page.check_urgent
  @home_page.create_task
end
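The assertion step follows the same pattern, delegating to the page object. Here is a sketch, where has_task_in_section? is a hypothetical method you would add to HomePage (for example, finding the section and checking its text with Capybara's has_content?):

Then /^I should see "([^"]*)" in the "([^"]*)" section$/ do |description, section|
  @home_page.has_task_in_section?(description, section).should be_true
end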

And from there it's just a matter of creating the page class (or do like me and shamelessly copy the style from here).


class HomePage
  URL = "/"

  def initialize(session)
    @session = session
  end

  def visit
    @session.visit URL
  end

  def fill_task_description(description)
    @session.fill_in("Description", :with => description)
  end

  def check_urgent
    @session.check("Urgent")
  end

  def check_important
    @session.check("Important")
  end

  def create_task
    @session.click_button("Create Task")
  end
end

And that's pretty much it; now it's just a matter of choosing your preferred driver and running the tests. As you can see, not much effort for a nice improvement.
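With the cucumber-rails setup described earlier, running them is the usual invocation:

bundle exec cucumber

Swapping drivers (say, Rack::Test for speed versus Selenium for JavaScript-heavy pages) is then a Capybara configuration concern, not something the features or the page object need to know about.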

For those who don't know, I'm currently in Bangalore, India, attending the training that ThoughtWorks offers its new graduates, ThoughtWorks University.

Well, yesterday, after attending an optional session of the course, presented by Liz Keogh, I can finally say I understand what Behaviour Driven Development, the famous BDD, is. It's one of those software-world buzzwords that has been around for a while, but I must confess I had never stopped to find out what it was, and I always imagined it was just another agile methodology, like FDD.

It turns out that Behaviour Driven Development is perhaps even simpler than that, but no less interesting for it :-).

BDD is nothing more than an "optimisation" of test-driven development, whose main characteristic, and even more important benefit, is coding applications in a language focused on what matters most and is so often forgotten: the value the application delivers to the customer.

Looking around, an interesting justification I found was the following, on Dan North's blog:

As a final thought, while I was thinking about this I realised the term “behaviour-driven” contrasts with “test-driven” in a similar way. My goal as a developer is to deliver a system that behaves in a particular way. Whether or not it has tests is an interesting metric, but not the core purpose. “Test-driven” development will cause me to have lots of tests, but it won’t necessarily get me nearer the goal of delivering business value through software. So you can use goal-oriented vocabulary in your development process as well as your code to help maintain perspective on what you are trying to achieve.

Since we all (all?) agree that delivering business value to the customer is what really matters when developing an application, why not develop that application in the customer's language, so that even they can understand (at least at a basic level) what the application is doing and what that code is for?

Of course, that is not the only benefit, since many of you must be thinking: why on earth would my customer want to see the software's source code?

But writing source code in the language of the business also helps the developer understand and discuss the functionality they are building, and really know why they sit in front of a computer eight hours a day, which invariably results in better quality code.
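As a small illustration of my own, the difference often shows up in how examples are named: behaviour-driven specs read like sentences in the business language, rather than a list of methods under test. For example (Account here is a made-up class):

# Illustrative only: the example is named after the behaviour the customer
# cares about, not after the method being exercised.
describe "an overdue account" do
  it "charges a late fee when the month is closed" do
    account = Account.new(:payment_due => Date.today - 10)
    account.close_month
    account.late_fee.should > 0
  end
end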

Comments?

Cheers.
