Agile methods are good, actually

After reading “just” the introduction of Agile Modeling (about 30 pages), I got two feelings: at first I hated it, but later, after thinking more about it, I liked it.

First, all the “instructions” given were about how a plan must be flexible, how things should be ignored, and how you should consider all information incomplete or incorrect. We all know that’s true in the real world, but it shouldn’t be something you expect: you should always aim for the “perfect” score. When you say “oh, we know it may be incomplete”, it probably will be, because that’s what’s expected. When you say “it should be perfect”, people will aim for perfection. The annoying part is that what is expected to be incomplete and incorrect is the analyst’s work; the developer’s work should be perfect, since it is the final product and what is delivered to the customer. So, basically, agile methods seem to imply that analysts can get things wrong, but developers can’t. And, being a developer, I hate this position: the expectation should be that everyone involved stays at the table until the end and says, even if just to themselves, “this is my responsibility”. That way, everyone will try to do their best.

Later, thinking more about it, I came to the conclusion that agile methods are actually a good thing. When you expect someone to do a poor job, you automatically reduce their role in the project; since their responsibility is diminished, their relevance is diminished too. Now, over all these years working in the IT field, I have learned that analysts, with a few notable exceptions, really don’t want to be involved; they don’t want to know the whole picture, they only care about their part. I can even remember when I was in college and, talking to future analysts, a large part of them said that learning to code was stupid and that analysts should never, ever get near any code; classes that touched on code were stupid, futile and useless.

The problem is that code is part of the model: if you don’t know how your code works (in, say, a framework, or the inherent model of the language), then you can’t see the whole picture and, thus, your model will be incomplete (which brings us back to the “agile” idea: as I pointed out above, once the model is known to be incomplete, it can be ignored and becomes irrelevant). What usually happens is that a few developers get the whole picture and become the holders of the whole model, the real pivots of every change in it, be it a change in the business rules or in the algorithms inside the software. In other words, analysts become irrelevant and real developers become the center of change, while poor programmers end up as just a few easily replaceable pawns in the process.

And that’s what I (and some former college colleagues) think about today’s IT world: analysts who don’t carry any responsibility beyond their cubicles and developers who can’t think for themselves. Agile methods effectively render both of those parties irrelevant, leaving the field to the good, interested and capable developers.

[I know it sounds a lot like a rant, and it actually is one: after working with several companies, I’m tired of people who can’t see beyond their own function and don’t care about the product they are building.]