(April 1, 2004)
The Maverick Development Model is concerned with identifying the best practices for a given situation. A model development process is one that produces the correct development process for the current situation. A model of development differs from development methodologies (such as Waterfall, Spiral, or Agile) in that the goal or desired outcome is the driving force, ever present and never forgotten. Methodologies are approaches to achieving goals, but too often the reasons for the steps of a methodology have been forgotten, and no one knows why things are done except that the method requires it. There are many good practices that have been accepted over the years, and some of them will be presented here, along with personal insights and philosophies concerning the software development process and the roles that make it up.
It is time for software developers to become computer scientists. The computer scientist is a role that encompasses all of the responsibilities necessary to complete the tasks of software development. It is time for managers to become leaders and to forgo the methods of management that do not improve the development process. The process exists to help the workers, not the other way around, where the workers exist to satisfy the process. The tail should not wag the dog.
Since 1987 I have been participating in software development. I have watched the struggle between the programmers and product marketing, where the latter sets the features and the delivery date and then complains about the quality, missed dates, or missing features. This struggle still exists and remains basically unchanged to this day.
I have witnessed the introduction of management to the software development process. This management consisted of processes intended to make software development predictable, traceable, controllable, and manageable.
Several times over my career, my employers have trained me to perform formal reviews and to fill all of the roles of a review session. At each company, within a few short weeks of the training, the review process ceased and the programmers went back to their old ways.
I have witnessed the acceptance and use of waterfall methodologies and their many variants. I have watched management follow the methodology with only the rigor that a true manager can attain. The requirements were gathered by product marketing, and from them a design was made. This design became a hard and fast contract. However, the design seemed to be used more as a defense exhibit (CYA), because every product date was always missed and development dragged on with no apparent end.
I have watched the release dates for a product set by the next occurrence of COMDEX and not by anything that actually pertains to the development of quality software. I have personally asked for requirements to be updated while in the development stage of the life cycle, only to have management say that the waterfall method did not allow you to go back and change anything. I was flabbergasted! "The method doesn't allow it" is a statement that just doesn't make sense. A method is not alive and does not make choices or demands. Management didn't allow the change because they didn't understand the goals the method was trying to reach. Management couldn't change the process because they didn't understand the development process in the first place. As hard as I tried to indicate that changes were needed, it was like talking to a wall. Where are we today? I feel we have made some evolutionary improvements that place us at a point to achieve revolutionary improvements. The idea is to recognize goals and create custom processes to achieve those goals for the current situation. One shoe does not fit all.
While working for the Department of Energy I struggled with the lack of contact the developers had with the customer. As a developer I felt completely out of touch. I had a written description of what the customer wanted. The description captured the letter of the law, but I had no feel for the spirit of the law. In 1992 I presented this analogy to my management:
"Once there was a famous painter who was retained by the head of a European state. His time was so valuable that an assistant was assigned to the artist. This assistant was to go out and find work for the artist and to gather the description of the desired painting. The assistant met with a wealthy merchant and convinced the merchant to commission a painting of his son. The assistant promised the painting in three weeks and returned to the artist. The assistant described the merchant's son, and the artist painted a young boy in a silken blue suit, holding in his hand a hat with beautiful plumage. The assistant took the painting to the merchant, who agreed that the painting was exceptional, but said that it did not look anything like his son, and that he wanted his son painted with his brother, standing in a lane by a tree outside their home. Needless to say, the artist never delivered exactly what the merchant wanted. Each version came closer to the desired outcome, but was never just right."
My argument was that computer programming is only one small piece of the training received at the university. I hoped that we could be more than just programmers; I hoped we could be computer scientists. I wanted to apply what I had learned in requirements gathering, systems analysis, effort estimation, and design, as well as write the code. However, my skills in writing code were expensive to acquire, and my employer deemed my time too precious to be spent doing non-programming tasks that could be delegated to less costly staff.
I have also dealt with another extreme, where managers thought the developers were "prima donnas" who did not work hard, who waited until the project was well beyond the point of no return and then asked for raises and stock options, holding the company hostage. These managers also thought the engineers padded their dates so they could take it easy. There were managers who said they were sick of the attitudes of the developers, and that if the developers were not happy then they should just leave, because they were easy to replace.
As if it wasn't enough to have managers without any real understanding of how to develop software, I have watched developers dupe management into believing heroics are required to develop software: duped into believing that only a few can develop the product and that the rest of the developers must take supporting roles. I have watched these clever developers build empires to protect themselves and become the very prima donnas that management complained about.
I have seen excellent developers blackballed by these empire builders, because the empire builders realized such a person was skilled enough to replace them. I have watched these empire builders wait for a power vacuum to occur, then rush in and fill the gap. I have seen them target other developers' features that are late and secretly criticize the developer. Then, while the developer's reputation is in doubt, the hero works many late hours, rewrites the feature, and presents it to management, who in turn praise the hero's effort and commitment to the company. The other developer is then relegated to a subservient position and is basically layoff fodder for the next round.
I have watched managers closely and found that very few managers excel at their task. Why? Leadership was the original goal, and management somehow was adopted instead. If a manager does not write code, then what does he bring to the development effort? I have noticed that most managers consider their job to be reporting status in meetings. Therefore, they are only "working" when they are generating reports that capture the state of development, or when they are in meetings presenting those reports. Managers are rewarded on their reporting skills, and thus the amount of reporting grows beyond their ability to control. Then they assign the developers to help by making intermediate reports that will be used as sections in their master reports. Soon developers are spending one or two hours a day generating reports on the current state of the development effort. That state has changed before the report is finished, and the report captures only a ghostly image of the past.
In 1997 a friend gave me a copy of "Maverick: The Success Story Behind the World's Most Unusual Workplace" (Ricardo Semler, Warner Books, 1993). I never read books on other people's success stories, because I always figured that the revenue from the book was their success story. I reluctantly began to read the book and soon found a kindred spirit in Mr. Semler. Everything he said made sense to me, and even though his business was in manufacturing, I could see benefits for software development. Eagerly I summarized his book and presented it to management. They laughed at the idea of publicly known salaries. When designing a new workspace for development, I suggested that they give the developers a budget and let them pick their own desks and chairs. They responded that the furniture was an asset of the company and that the developers would pick a hodgepodge arrangement that would look terrible. Wow, it was almost word for word from Semler's book, when his managers were trying to select uniforms for their workers. I named my development model after his book: the Maverick Development Model. Mr. Semler has never met me, and the name is not meant to suggest that he has supported this effort or endorses it in any way. It is my way of giving credit to one who has inspired and reinforced my ideas.
After being formally mocked for presenting the Maverick concept, I went about my work trying to inject those ideas into the environment whenever the possibility presented itself. Needless to say, I was more often labeled a radical, a loose cannon, and someone who had no idea how real business works. After several years of frustration I decided it was easier to be quiet and draw my paycheck. I lost all love for the software development process and kept my head low to avoid the laser-targeting sights of the layoff assassins.
Since I was unhappy with anything to do with the process, I focused solely on object-oriented development and how to model systems in a way that made the code easy to maintain and bug free. Before agile methodologies required unit tests, I had a personal policy of testing each piece of my code before introducing it into the system. My rationale was: if you don't have any confidence in your own code, then how can you determine where a bug lies when you integrate your code with someone else's? I felt each piece of code had to be solid so that you could build upon it. You had to have confidence in some piece of the system; otherwise you would never know if you fixed a bug in the right place. For example, if the compiler is suspect, the third-party libraries are suspect, the code from the rest of the team is suspect, and your own code does not have some proven level of quality, then when the system fails, where do you look for the problem? I have heard "compiler bug" cried probably a hundred times. In all of those cases, I have seen it actually be the compiler only twice. Why do I share this? Because the evolutionary path that has led us to agile processes and Test Driven Development has been trodden by many of us.
My rationale of building on a solid foundation came from a few years of porting code. I have seen the memory manager rewritten too many times. You remember the guys who said, "You can't use alloc, it is inefficient. I read somewhere that you have to write your own to get any kind of performance from dynamic memory allocation." The reason was always to improve performance. This piece of code then became a roadblock in the porting effort. It also required the complete attention of a developer. I would say, "We are not in the operating-system-services business. We shouldn't be rewriting what the OS should provide. If the OS is not working, then develop to a better platform and quit supporting this garbage." They dismissed my comments as too hard-line. But I learned that we did not have a solid foundation, because of all of the in-house developed services. The core service of memory management was now always a suspect. Many bugs were found in the memory manager. When something didn't work, where did you look for the problem? You had to go to the lowest layers and verify that the memory manager was not the source of the problem. Ironically, the memory manager became the bane of the main platform as well. The system was taken to Intel to profile the product and suggest improvements. They pointed out that one of the trouble spots was the proprietary memory manager. It did not perform as well as the standard memory manager provided by the current version of the OS on the current hardware. The reason was that the OS people continued to develop and improve their services and to optimize them for new hardware. The proprietary version, once debugged, was left "as is" and in a few short years became obsolete given the advances in hardware and operating systems. I relate this story because it is necessary to recognize the facets and subtle interplay in software development that the development process must recognize and support. If the memory manager had had regression tests, it would not have been a suspect with every bug. If the developers had delivered on the core business instead of optimizing OS functionality, they would have been better off in the long run. Agile has concisely described this as building the functionality that delivers value first, doing it with the simplest implementation that meets the requirements, and using a test-first design technique.
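As a hedged sketch of the regression-test point: here is a toy fixed-slot allocator with the kind of permanent test suite that would have kept a proprietary memory manager from being a suspect in every bug hunt. The FreeList class and its behavior are invented for illustration:

```python
class FreeList:
    """Toy memory manager: hands out fixed-size slots and recycles freed ones."""

    def __init__(self, slots):
        self.free = list(range(slots))  # slot indices still available
        self.used = set()               # slot indices handed out

    def alloc(self):
        if not self.free:
            raise MemoryError("pool exhausted")
        slot = self.free.pop()
        self.used.add(slot)
        return slot

    def release(self, slot):
        if slot not in self.used:
            raise ValueError("double free or unknown slot")
        self.used.remove(slot)
        self.free.append(slot)

# Regression suite: run on every change, so the service stops being a suspect.
pool = FreeList(2)
a = pool.alloc()
b = pool.alloc()
try:
    pool.alloc()                    # exhaustion must fail loudly, not corrupt
    assert False, "exhaustion not caught"
except MemoryError:
    pass
pool.release(a)
assert pool.alloc() == a            # freed slots must be recycled
pool.release(b)
try:
    pool.release(b)                 # double free must fail loudly
    assert False, "double free not caught"
except ValueError:
    pass
```

Run on every change, a suite like this lets the team rule the low-level service in or out in seconds instead of re-auditing it with every bug.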
During those times I have described, some events occurred that set the stage for Maverick Development. One is the development of Java. The other is the acceptance of what is now termed agile methods. Java has helped by reinforcing my idea that you shouldn't develop services that the OS should provide. This may be termed "don't reinvent the wheel," but it is much more than that. It is about letting others have their core business and trusting that they might be just as smart as you. It is amazing that the developers who were so concerned about performance, and who wrote their own memory manager, would ever accept Java.
Java was slow compared to anything we had seen for years. So why did the same engineers who wrote their own memory manager embrace Java? Because it was time to rewrite the system again, or they wouldn't have a job! They had to sell management on another version of the product. They dropped buzzwords: web services, n-tier, peer-to-peer, and all the others. Regardless of the reasons, the acceptance of Java has opened the door for me to try again to change the way software is developed. I have been quiet for too many years. I love software development, and it is time to present ideas and hopefully start meaningful discussions that lead to a revolutionary change in software development.
Agile methods have really helped motivate me to gather my ideas and write this document. Honestly, I am shocked that agile methods have been accepted. The agile methods address the same issues I have been concerned with for years, one of which is "the process doesn't allow us to go back and change the requirements." Finally, in the name of agility, you can go back and fix things that are wrong, and work in a reasonable manner. But alas, as soon as you specify your ideas as a methodology, and there is some missing step for a specific situation, some manager who only knows the letter of the law will still say, "You can't do that, the process/methodology doesn't allow it." I have been doing research on XP for large teams and XP with integration testing. I have found many statements like "XP doesn't do that" or "XP only does unit testing, and integration testing has to be added in a waterfall fashion." Why do these managers "lock down" XP? I feel they lock it down because they do not understand the real problems and the appropriate solutions.
It reminds me of a 300-level fine arts class I took in college. Being a CS major with artistic skills is not common, and the application of logic to art is probably not something the Art Department was accustomed to. I remember learning of the masters, the great artists who had created a new style, altered a norm, and become famous for their unique perspective and talent. Then in art class the instructor teaches you how to hold the brush in your hand, how to make the strokes, how to portray depth, how to do each and every thing, and then grades you on your conformance. If I painted just like Picasso, I would never be famous. I would be another painter who painted like Picasso. I had been taught what made a great artist and then told not to do the very things that made them great. I had been exposed to the model but taught to follow the method.
Why do I go into these past experiences? They form the foundation of my reasoning. I have always said, "It takes a village to make a village idiot." Without a foundation of understanding, methods are followed for the wrong reasons. I heard this story once.
A woman was cooking a ham. Her daughter-in-law noticed that she cut the ends off of the ham before baking it, and asked why. "Because it makes it taste better." Does it really make it taste better? After some investigation it was learned that she cut the ends off of the ham because her mother did the same. The reason her mother did it was that she had seen it done by her own mother. And the grandmother did it because her baking pan was too small to hold the ham unless she cut off the ends.
Why do software development methodologies have the steps that they do? In the beginning, the reason for each step was known. The key is that the reason must not be lost; it must be understood.
As computer scientists, instead of just developers, we have been trained to act in all the roles of software development. If your software process falls short in some area, we can fix it. We can have emergent processes if managers will get out of the way, if developers will wake up and become computer scientists, and if we will work towards a goal and take whichever road gets us there. Life may be a journey where the road taken is important, but software development is a destination. We have to lay down our pride, give up the heroics, and work as a team. I guess this means we have to understand what work is. We have to understand why methods exist and what goals they are trying to achieve. We have to wake up and be thinking individuals who question everything and who do not want fame.
Maverick Development is about goals and understanding. It is about accomplishing the goal the best way possible at the time. It is about changing anything and everything, including the way you work, the way you think, and the way you interact with others. It is about removing dead-end jobs. It is about real rewards. It is about trust and selflessness.
Work is something that we all know when we are doing it, and we also know when we are not. However, giving a firm explanation of one's work is difficult. Work is physically described as a force applied through a distance. In creative processes there is no physical object to be moved; that is why effort is used to describe creative work. Distance likewise has no physical counterpart in creative work; progress is used to describe this aspect. So, with effort and progress, we work.
In a methodology, work is anything that progresses from the current step to the next. In a model, work is anything that finishes some attribute or artifact of the goal. That said, work on a method is very different from work performed on a model.
Traditionally, one of the tasks of management is to measure work. The measurement of work is an intrusive inspection that interferes with the work being measured. The benefits of measurement should outweigh the costs of acquiring the measurement. If measurement activities are not worth the effort to perform, then why continue to measure in the same way? Think, and change it. Make it worth the effort. I often wonder at what point thinking was disallowed. It sure seems like it has been.
Developers are smart, and once they learn the rules of a game they try to win. When developers are monitored and their performance measured, they learn to perform to what is measured. If lines of code are measured, you will get lines of code. If the number of bugs found is the measurement, they will find bugs. The problem is that we have forgotten why methods measure these things. Management wanted to measure quality, and so they created a process. Somewhere down the road we ran amok.
The measurement process uses terms like defect discovery rate, and asks whether or not it is decreasing. The expectation is that if the defect discovery rate is decreasing, then the quality of the product is increasing. There are a lot of assumptions here, especially that all bugs are created equal. If one thousand bugs were found and eliminated, and the product ships with just one bug, and that bug manifests itself, you have a problem, especially if that one bug happens to lose and corrupt data.
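The metric itself is trivial to compute, which is part of its seduction. Here is a sketch, with invented weekly counts, of how a "decreasing defect discovery rate" gets declared while saying nothing about the severity of the bugs that remain:

```python
bugs_found_per_week = [30, 24, 19, 11, 6]   # hypothetical counts, not real data

# Week-over-week change; all-negative deltas mean the rate is decreasing.
deltas = [b - a for a, b in zip(bugs_found_per_week, bugs_found_per_week[1:])]
rate_is_decreasing = all(d < 0 for d in deltas)
print(deltas, rate_is_decreasing)
```

Every one of those weeks could still be hiding the single data-corrupting bug the passage warns about; the trend treats it the same as a typo in a dialog box.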
Even agile processes are not perfect. Agile and iterative processes expect the addition of new code into the product to be fairly constant over an iteration. Instead, we see that in the last three days of the iteration the number of new or modified files in the system increases dramatically. Why? It could mean many things. Managers readily accept conclusions from journals and well-known experts instead of learning the current reason behind the problem for themselves. Without an understanding of the reasons, the ability to correctly apply the lessons of these experts rarely emerges. Simply said: you can't rely on books, papers, and publications to solve your problems. You have to get in there and figure things out for your specific situation, and this requires understanding of the process and how work gets done.
In reality, the check-in rate could be a symptom of any number of problems. Maybe the iteration is too long, or the build process is too painful. Maybe integration is too painful. It could be that the use cases were unclear and too much time was needed to design the code. Maybe there were dependencies on other modules such that coupling became a bottleneck. This list could go on and on. Most likely the reason is not simple but complex, and the reasons differ for each team and each person. There is no simple fix, and there might not even be a complex fix. It might simply be the way things are.
The point is that it takes more than methods and management to understand the development process and to identify when progress is being made and when work has been done.
In Maverick Development, work is defined as any task that moves things toward the ultimate goal. Most processes do not measure time spent helping other team members. Traditionally, if help has been given to a team member, it is not unusual to label the person who needed help as a weak link. Being labeled a weak link is like putting strawberry jam in your pockets: in the next layoff you are toast. So people will not ask for help. Is that what management wanted? I doubt it, but it is what they get, and often they cannot recognize the environment they have created. In Maverick Development, helping another developer get work done is work. Helping the team move forward is work. It does not affect the person's task list in a negative way. If a person does not finish their tasks for the iteration, yet worked hard on unforeseen tasks, then that person is still successful and considered a great worker.
When helping others is not rewarded, you will see the symptoms. You will hear developers say things like, "I didn't get anything done today because of interruptions," or the person who needs help will passively say, "I hate to bother you right now for some help, but I am stuck," or they will only ask for help through non-intrusive channels. Work together and be concerned for others' success. If managers are counting and measuring, then developers will only work on what is being counted. If there is no metric on how many others you helped to be successful today, then no one will help others. Maverick Development demands less measurement, or it demands that management truly measure work, which would be a very difficult task that most managers are not up to. Maverick Development raises the bar on developers.
A developer has to be more than just a code writer. For lack of a better term, I say the developer must become a computer scientist. I chose the term computer scientist because during my formal education as a computer science student I learned all aspects of developing software: programming languages, algorithms, requirements gathering, quality assurance, effort estimation, software cost, motivating programmers, and more.
The computer scientist is a leader in the area of software development. The computer scientist understands that heroic efforts can destroy teams and skew business perspectives into believing heroics are necessary. The computer scientist understands when to use multiple inheritance. A computer scientist understands runtime behavior as compared to static behavior. A computer scientist understands the costs of coupling and poor cohesion. The computer scientist understands which activities improve productivity and which slow things down.
I need to define the type of manager to which I will refer. Historically, developers have been promoted to team lead, then manager, then director, and on up the ladder. I have also noted there is a second path to manager, coming from the business side of the company instead of the technical side. In the organizations I have experienced, the manager often has a technical role. The manager reviews and approves designs and architectures. The manager also chooses the development methodology, the amount of reporting, the format for meetings and documents, coding styles, and other things that directly relate to the development process. In addition to these responsibilities, these managers also perform reviews, do budgets, and report status. The manager often uses the position to inject his desires in one situation and delegate when he has no opinion. This inconsistent use of position is not helpful.
Management has to come into its own in the Maverick Development Model. What does this mean? Managers have to have a clear understanding of all aspects of development. First, they have to understand each individual. Why? Because they are managing people, not skill sets and resources. Everyone communicates in different ways. Everyone expresses concern in different ways. Everyone is motivated in their own way. I would imagine that a good manager, like a good poker player, can read a person as well as listen to one.
Managers have to understand why different methods and practices are available and know which practice to apply to a situation. Managers are too often like a ship's captain who knows nothing about sailing. As long as the wind blows the ship along the right course, everything is fine. However, as soon as there is a problem that requires knowing how to control a ship, the so-called captain is in trouble.
The skill set of a manager has to be greater than that of the average developer on the team. A manager in Maverick Development must become a leader. A leader is much more than a manager. Patton was a military leader. He did not care if politicians were pleased with him. He did not care if people did not like him. The Vietnam conflict was managed; it was not led. The difference is clear. The "feeling" about the situation is almost tangible. Vietnam was managed, and the methods were political and had nothing to do with winning a war or fighting for values. The managers of the Vietnam War talked in statistics: how many shots fired, how many casualties, how many sorties, how much money had been spent and whether we were within budget.
From Mr. Semler's book I paraphrase and quote a section concerning management.
Fernando was convinced that the Hobart plant lacked organization, ambition, and the necessary controls. He would arrive at 7:30 a.m. to find himself alone in the office until about 9:00 a.m. At 5:30 p.m. people would leave for home, and Fernando would stay until 9, 10, sometimes 11 p.m. This didn't please him, and everyone knew it.
Others worked long hours too, and their families were beginning to complain. They were convinced that the hours were temporary, until the company had digested its acquisitions. "It took us almost a decade to learn that our stress was internally generated, the result of an immature organization and infantile goals."
Fernando was firing people and changing things constantly.
"At the Hobart plant, and all over the new Semco, we could track with great precision virtually every aspect of our business, from sales quotes to welding machines. We could generate all sorts of reports almost instantly, with dazzling charts and graphs. ... (I)t took us a while to realize that all those numbers weren't doing us much good. We thought we were more organized, more professional, more disciplined, more efficient. So ... how come our deliveries were constantly late?"
Work hard or get fired was the new motto. Everyone was being pushed forward instead of being self-propelled.
"During this time I often thought of a business parable. Three stone cutters were asked about their jobs. The first said he was paid to cut stones. The second replied that he used special techniques to shape stones in an exceptional way, and proceeded to demonstrate his skills. The third stone cutter just smiled and said: 'I build cathedrals.'"
Maverick Development is based on leadership. Leadership is something above management. All leaders have management skills; the converse is not true. The bar has to be raised on managers: they have to be leaders.
Agile methodologies depend upon emergent processes. Managers cannot control this type of process because of its very nature. Maverick Development goes wherever is necessary to get the work done. When an unexpected situation arises, we do not plod along the same path just because the directions say to go that way. Instead, in Maverick, we say, "Hmm, an unforeseen problem. What should we do? Go on? Go back? Go around? ..."
Leadership cannot be taught. Quoting Dr. Hugh Nibley:
"At the present time, Captain Grace Hopper, that grand old lady of the Navy,
is calling our attention to the contrasting and conflicting natures of
management and leadership. No one, she says, ever managed men into
battle, and she wants more emphasis on teaching leadership. But
leadership can no more be taught than creativity or how to be a genius.
The Generalstab tried desperately for a hundred years to train up a
generation of leaders for the German army, but it never worked, because
the men who delighted their superiors (the managers) got the high
commands, while the men who delighted the lower ranks (the leaders) got
reprimands. Leaders are movers and shakers, original, inventive,
unpredictable, imaginative, full of surprises that discomfit the enemy
in war and the main office in peace. Managers, on the other hand, are
safe, conservative, predictable, conforming organizational men and team
players, dedicated to the establishment.
The leader, for example,
has a passion for equality. We think of great generals from David and
Alexander on down, sharing their beans or maza with their men, calling
them by their first names, marching along with them in the heat,
sleeping on the ground and being first over the wall. A famous ode by a long-suffering Greek soldier named Archilochus reminds us that the men
in the ranks are not fooled for an instant by the executive type who
thinks he is a leader.
For the manager, on the other hand, the
idea of equality is repugnant and indeed counterproductive. Where
promotion, perks, privilege and power are the name of the game, awe and
reverence for rank is everything and becomes the inspiration and
motivation of all good men. Where would management be without the
inflexible paper processing, dress standards, attention to proper
social, political and religious affiliation, vigilant watch over habits
and attitudes, etc., that gratify the stockholders and satisfy security?
... Managers do not promote individuals whose competence might threaten
their own position, and so as the power of management spreads ever
wider, the quality deteriorates, if that is possible. In short, while
management shuns equality, it feeds on mediocrity... For the qualities
of leadership are the same in all fields, the leader being simply the
one who sets the highest example; and to do that and open the way to
greater light and knowledge, the leader must break the mold. "A ship in
port is safe," says Captain Hopper, speaking of management, "but that
is not what ships were built for," she adds, calling for leadership...
True leaders are inspiring because they are inspired, caught up in a
higher purpose, devoid of personal ambition, idealistic and
incorruptible... So vast is the discrepancy between management and
leadership that only a blind man would get them backwards... "
It is a commonly known practice when hiring and retaining software engineers
to retain what are referred to as the 10x performers. Maverick
Development requires the same from its Leadership. Managers that are
leaders will be 10x performers.
Trust is the key, and here is what Mr. Semler had to say, "We
simply do not believe our employees have an interest in coming in late,
leaving early, and doing as little as possible for as much money as
their union can wheedle out of us. After all, these are the same people
that raise children, join the PTA, elect mayors, governors, senators,
and presidents. They are adults. At Semco, we treat them like adults. We
trust them. We don't make our employees ask permission to go to the
bathroom, nor have security guards search them as they leave for the
day. We get out of their way and let them do their jobs."
Maverick Development changes the traditional performance review process. The
goal is to truly review the performance of the team and the individual.
Each person's performance is reviewed by each member of the team and
this includes the team leader. There is no protection because of
hierarchy, title, or rank.
For more information read Maverick Reviews.
In Maverick Development, compensation is public knowledge within the
company. Mr. Semler points out that executives with high salaries should be
proud of their salary and confident that they are worth it. If they are
not confident they are worth their salary, then they will be inclined to hide it.
For more information on compensation please read the section entitled "Compensation" in Maverick Hiring.
In Maverick Development, all meetings are to have an agenda that is known
beforehand by all attendees. The agenda should be concise, and if the
meeting is to be one of discovery then state that fact.
I have attended too many meetings where some important and complex idea is
briefly presented and management unexpectedly asks for opinions, and if
there aren't any they move to the next topic. This unfair tactic
does not achieve its proposed aim of gathering more information.
Prior to the presentation, management has taken as much time as they saw
fit to discuss the matter and formulate their solution. Then they come
to a meeting and blind side the entire team with a topic that is usually
nontrivial. Do they really expect valid input? It is like they are
saying, "Okay you have five minutes, be creative starting now!" Or is it
a paternalistic approach and they are satisfied with the stupid looks
and blank stares? This approach does not meet any goal and is not part
of Maverick Development. Present a decision as just that, a decision.
Present a proposal as just that, and give people time to prepare for it.
In Maverick Development there is never a
lunch meeting. Lunch meetings are rarely necessary. Downtime is
essential, and if you want to have a lively mind for the afternoon then
you may want to take a break at lunch. Do you ever feel that lunch
meetings are used to squeeze an extra hour out of the day? How often
have you attended a lunch meeting and then afterward you still take a
lunch break? Any manager that calls a lunch meeting is to be gibbeted in
an iron cage outside of the conference room for all to see just like
what happened to Captain Kidd.
The 15-minute standup
from XP is sufficient to relay the current state of development. Details
are not necessary. If there is a problem, then the details will be
addressed by all those who can provide a solution to the problem. If
things are on track, no one cares to hear about each item and how "on
track" it is. State is the key, not details.
It seems that some managers produce only two work products.
One is meetings, the other documentation. Maybe three things, the third
being carbon dioxide. The documentation becomes the information
presented in the meetings. These items include endless spreadsheets and
huge step-by-step procedures on how the process is to be maintained.
(Remember in the abstract the comment of the tail wagging the dog!) When
a manager is in a meeting he is working. A developer is considered to
be working when he is writing code. If a developer is in a meeting he is
not working. If a developer is filling out some status report for
management, he is not working. When they are doing these tasks, they are
doing management's work.
The goal for a status meeting
should be to determine the state of the project, not to pore over
minute details. The XPM methodology states:
"Communication with the Product Manager and the Steering persons should be context, not
content. Have the success expectations changed? Is there any change to
the scope/objectives? Have there been any changes in the
stakeholders/related projects? Are the benefits and cost assumptions
still relevant? Are benefits-realization plans still relevant? Has
quality been changed? Are there changes to project risk or risk
management issues? In other words, is the project still FOCUSED on the
right business outcomes?"
This is a method of
achieving the goal of reporting the state of the project. Since Maverick
Development is goal oriented, and any method that achieves the goal is
fine, then this or any other way to discover the state is within the model.
If there is not a goal, and there is not an agenda, then there is no meeting.
Maverick Development meetings that deal with details instead of status
are called by those who can do something with those details. Developers
call meetings on details to discuss problems in detail with peers and
individuals that can create a solution, not with people that conduct
meetings. When managers attend these meetings they try to do the tricks
of their trade such as injecting comments to stimulate thought and
regulating the conversation to facilitate communication. Stimulate,
regulate, and facilitate, and managers hate developers that can't
produce tangibles. Go figure.
Managers interrogate to
find the details of the state of the process. Leaders investigate and
recognize on their own. Interrogation is intrusive. It takes too much
time. It removes people from their real work.
The reason documentation exists is because someone was trying to meet the
goal of communication. Documents are created so that things do not have
to be said over and over. Documents state things that are fixed, written
down, and locked in state; thus the term statement. If the
documentation does not meet the goal of communication then the document
should not exist. If the document is not read then it clearly has not
communicated anything and it should not exist. How many of you have ever
put some comment in your weekly report to see if anyone was actually
reading them? I have and many times I have not received any response
from my manager. Clearly that report was not necessary. When my manager
is busy doing what he really thinks is important he is not reading fifty
status reports. Since in Maverick Development reviews are not done
solely from the top down it cannot be said that status reports are
required for the review process.
One approach that achieves the goal of conveying weekly status is just these three statements: on target, concerned, in trouble.
If the state is anything but on target, managers will then ask the obvious
question, "Are you taking the appropriate action to rectify the
situation?" How many times have managers pointed out the obvious?
In Maverick Development there is one place to report status. There is not a
team dashboard, and a textual weekly report, and a management
spreadsheet. There is only one of these items, or some other single
reporting method. If someone wants to generate a new report including
other items, then they are responsible for the format and media
difficulties, they cannot make this an assignment to others. The tail
can't be allowed to wag the dog.
For more information on documentation please read Maverick Documentation.
Release dates in Maverick Development are based on real goals and supported by development.
In Maverick Development there is no "release date by trade show". How many
times have you had a release date for a product set by product
marketing to coincide with the next trade show? What is the basis for that
choice? They will tell you that if you miss that date you will miss your
window of opportunity. The ability to market and display a product at a
trade show is advantageous. However, it is not sufficient to require the
release of a product based on date alone. If the original time
estimation for the set of features places the release date beyond the
trade show then the feature set must be changed to correspond with the
shorter development time.
Release dates are based on
budget, features, quality, and timing. Product management will always
try to set the date for release, the number of features in the release
and the quality for that release. Well, everyone knows they can only
choose two of the three. The third is always a variable.
Maverick development says that there are only two choices that are variable and
that third, quality, is fixed. You can pick the features or the release
date, but not both. Quality is always set at high. That never changes.
No one will accept low quality. If you relax quality then you have
opened the door for your competition to take your market share by merely
reproducing the same functionality with improved quality. You have to
have high quality so that the purchase decision is based on such things
as features and solutions.
In Maverick development a
goal can be set to reach a certain point in the production by a certain
date. Since Maverick is based on real work, and doing work right, then
it really doesn't matter if the date is not met. Work has really
occurred and real progress was made. Goals are essential for a target.
If the goal is too far out there, it is hard to hit. Should we get down
on ourselves if we miss it? No. Why not? Because Maverick Development
produces real work every day. In reality software is done when it is
done. The scope can be reduced and thus you can be done sooner. Maverick
Development supports these simple facts. Motivating development to work
hard every day is the responsibility of the leaders of the development team.
Since Maverick Development is goal driven, the goals for the development
process are essential. If a goal for development is to respond quickly
to changing requirements then an agile development methodology could
meet this goal. If you were trying to get awarded a government contract
then a process based on SW-CMM could be the means to the end. Now, one
would say, Maverick development is really nothing. It doesn't tell me
how to do anything. The point is, you should be a computer scientist and
you should already know how to do something or have the ability to
learn how to do something for your current situation.
Maverick development opens the door for leadership and for understanding why
methods exist and the real issues the method addresses. I am currently
working on a Maverick Development Methodology for Agile Development.
The Maverick Development Model means there is a goal to be reached. If the
method is not the right one you are not stuck. If you are not
progressing, or if there is something missing, or whatever the problem
is, Maverick says you must do something about it. You must apply your
skills as a computer scientist. You must apply your leadership
capabilities. Maverick Development Model addresses the issue of people
that live by the letter of the law and not the spirit. It is out of the
box thinking, but it shows that you have to have skills, knowledge, and
understanding to survive out of the box.
Maverick Development demands that people are worth their salt. Reviews are going
to happen and you had better be ready. Managers are not protected and
leadership is the goal. Managers cannot promote mediocrity in Maverick
Development. Managers cannot protect themselves from the movers and
shakers. They cannot create empires. They cannot reward those who
please them and remove those who do not.
Friday, May 11, 2012
Unit Tests
by Geoffrey Slinker
v1.2 March 24, 2006
v1.1 March 5, 2005
v1.2 April 15, 2005
Abstract
How do you view unit tests? Is a unit test simply a test to verify the code? But what is the purpose of having unit tests? Maybe unit testing is a design method. Maybe a unit test is a programmer test. Maybe unit testing is a diagnostic method. Maybe unit tests are a deliverable in a phase of development. The varied purposes and uses of unit tests lead to confusion during discussion.
Testing Definitions
Definitions are taken from: http://www.faqs.org/faqs/software-eng/testing-faq/section-14.html. Emphasis added.
The definitions of integration tests are after Leung and White.
Note that the definitions of unit, component, integration, and integration testing are recursive:
Unit. The smallest compilable component. A unit typically is the work of one programmer (At least in principle). As defined, it does not include any called sub-components (for procedural languages) or communicating components in general.
Unit Testing: in unit testing called components (or communicating components) are replaced with stubs, simulators, or trusted components. Calling components are replaced with drivers or trusted super-components. The unit is tested in isolation.
Component: a unit is a component. The integration of one or more components is a component.
Note: The reason for "one or more" as contrasted to "Two or more" is to allow for components that call themselves recursively.
Component testing: the same as unit testing except that all stubs and simulators are replaced with the real thing.
Two components (actually one or more) are said to be integrated when:
They have been compiled, linked, and loaded together.
They have successfully passed the integration tests at the interface between them.
Thus, components A and B are integrated to create a new, larger, component (A,B). Note that this does not conflict with the idea of incremental integration -- it just means that A is a big component and B, the component added, is a small one.
Integration testing: carrying out integration tests.
Integration tests (After Leung and White) for procedural languages.
This is easily generalized for OO languages by using the equivalent constructs for message passing. In the following, the word "call" is to be understood in the most general sense of a data flow and is not restricted to just formal subroutine calls and returns -- for example, passage of data through global data structures and/or the use of pointers.
Let A and B be two components in which A calls B.
Let Ta be the component level tests of A
Let Tb be the component level tests of B
Tab: The tests in A's suite that cause A to call B.
Tbsa: The tests in B's suite for which it is possible to sensitize A -- the inputs are to A, not B.
Tbsa + Tab == the integration test suite (+ = union).
Note: Sensitize is a technical term. It means inputs that will cause a routine to go down a specified path. The inputs are to A. Not every input to A will cause A to traverse a path in which B is called. Tbsa is the set of tests which do cause A to follow a path in which B is called. The outcome of the test of B may or may not be affected.
There have been variations on these definitions, but the key point is that it is pretty darn formal and there's a goodly hunk of testing theory, especially as concerns integration testing, OO testing, and regression testing, based on them.
As to the difference between integration testing and system testing. System testing specifically goes after behaviors and bugs that are properties of the entire system as distinct from properties attributable to components (unless, of course, the component in question is the entire system). Examples of system testing issues: resource loss bugs, throughput bugs, performance, security, recovery, transaction synchronization bugs (often misnamed "timing bugs").
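To make the unit/component distinction above concrete, here is a minimal sketch using Python's unittest and unittest.mock (the TaxCalculator unit and RateService component are hypothetical names, not from the source). The unit test replaces the called component with a stub so the unit is exercised in isolation; a component test would wire in the real service instead.

```python
import unittest
from unittest.mock import Mock

# Hypothetical unit under test: TaxCalculator calls a RateService component.
class TaxCalculator:
    def __init__(self, rate_service):
        self.rate_service = rate_service

    def tax_for(self, amount, region):
        rate = self.rate_service.rate(region)  # call to a communicating component
        return round(amount * rate, 2)

class TaxCalculatorUnitTest(unittest.TestCase):
    """Unit test: the called component is replaced with a stub,
    so TaxCalculator is tested in isolation."""

    def test_applies_rate_from_service(self):
        stub = Mock()                  # stands in for the real RateService
        stub.rate.return_value = 0.05
        calc = TaxCalculator(stub)
        self.assertEqual(calc.tax_for(100.0, "UT"), 5.0)
        stub.rate.assert_called_once_with("UT")

result = unittest.TextTestRunner(verbosity=0).run(
    unittest.defaultTestLoader.loadTestsFromTestCase(TaxCalculatorUnitTest))
```

The matching component test would keep the same assertions but construct TaxCalculator with the real rate service, per the definition above.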
What are your Goals of Unit Testing
The obvious goal of unit testing is to deliver fewer bugs. But what type of bugs? Unit bugs? Maybe integration bugs? Possibly design bugs? Maybe it is not a bug hunt at all; maybe you want to use unit tests to show valid uses of a unit, or maybe you are using unit tests to drive the design process.
But maybe you have a non-obvious goal for your unit tests. If you do, then you must specify the goal when you discuss unit testing or there will be confusion.
It is obvious that unit tests can uncover unit bugs. But can unit tests uncover integration bugs? If a unit test is run in complete isolation it doesn't seem possible. Suppose someone in the system changes a method by removing a parameter. All of your unit tests will fail to compile. (Unit tests can fail in at least two ways: 1. compilation failure; 2. runtime assertion.) This is an opportunity to bring together the people involved with the integration point to discuss the changes and their ramifications.
Are You Ready for Full Unit Testing
Let's repeat the definition of unit tests from above.
Unit Testing: in unit testing called components (or communicating components) are replaced with stubs, simulators, or trusted components. Calling components are replaced with drivers or trusted super-components. The unit is tested in isolation.
The development of stubs, simulators, trusted components, drivers, and super-components is not free. If unit testing is introduced midstream in a development process it can literally reroute the stream. Developers might lose the velocity they currently have on some feature and thus disrupt the flow. Introduction of unit tests midstream must be justified and accepted. To understand how difficult this may be take all of your current development projects and go to the Planning Committee and tell them that all of the predetermined dates are now invalid and the dates will need to be shifted "X" months into the future. Maybe you should have someone under you deliver the news to the Planning Committee! It's always nice to give a more junior person presentation experience!
Test data generation is an expense that many people do not realize goes with unit testing. Imagine all of the units in your software system. These units "live" at different layers. Because of these layers the data it takes to drive a high level unit is not the same as the data that it takes to drive a low level unit. In the system high level data flows through the system to the lower levels and during its trip it is mutated, manipulated, extended, and constrained along the path. All of these "versions" have to be statically captured at each level in order to "feed" the unit tests.
Test data that comes from or goes into a data base store present their own difficulties. Setting up the database with test data and then tearing down the database after the tests have completed are consuming tasks. If your unit of code writes to a database then it will not have the functionality to delete from the database. So, the tearing down of the database has to be done externally to the tests.
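As a sketch of that setup and teardown cost, here is a minimal example using Python's unittest with an in-memory sqlite3 database (the OrderWriter unit is a hypothetical name): the unit can only insert, so cleanup has to happen externally to the tests, in tearDown.

```python
import sqlite3
import unittest

class OrderWriter:
    """Hypothetical unit: it writes orders to the database but, as
    noted above, has no functionality to delete them."""
    def __init__(self, conn):
        self.conn = conn

    def save(self, order_id, total):
        self.conn.execute("INSERT INTO orders VALUES (?, ?)", (order_id, total))
        self.conn.commit()

class OrderWriterTest(unittest.TestCase):
    def setUp(self):
        # Setting up the database with test data is a cost paid per test.
        self.conn = sqlite3.connect(":memory:")
        self.conn.execute("CREATE TABLE orders (id INTEGER, total REAL)")

    def tearDown(self):
        # Teardown happens external to the unit under test.
        self.conn.execute("DELETE FROM orders")
        self.conn.close()

    def test_save_writes_one_row(self):
        OrderWriter(self.conn).save(1, 9.99)
        count = self.conn.execute("SELECT COUNT(*) FROM orders").fetchone()[0]
        self.assertEqual(count, 1)

result = unittest.TextTestRunner(verbosity=0).run(
    unittest.defaultTestLoader.loadTestsFromTestCase(OrderWriterTest))
```

With a real database server rather than an in-memory one, the setUp and tearDown steps become correspondingly more expensive.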
Can You Afford Not to Unit Test
Since unit testing is expensive, some might say it is not worth doing. That statement is too general and not prudent. One should say, "Unit testing is expensive, but it is less expensive than ...". This means you have clear goals and expectations of unit testing. For example, suppose that it took you six weeks to finish changes that were needed because of bugs found when the Quality Assurance team tried to use your code. Suppose that these bugs were of the simple types such as boundary conditions, invalid preconditions, and invalid postconditions. Six weeks might not seem that expensive, but let's examine this further. During these six weeks QA was unable to continue testing beyond your code. After they get your code they finally get into another set of bugs that were "below" yours. That developer is now six weeks removed from the development work and will have to get back into the "mindset" and drop whatever he was working on. Also, the precondition bugs you fix will cause exceptions to be caught upstream, which will cause code changes by those that call your code, which means those "upstream" changes will now have to be retested. If the upstream changes actually change data that flows down to you, this will affect those that you call with that "new" data and may cause changes to be made downstream. Is it sounding expensive not to do unit tests in this imagined scenario? It was supposed to!
Unit Testing as Clear (White) Box Testing
This is probably the oldest and most well known role of unit tests. Typically a person other than the author of the code writes tests to verify and validate the code. Verify that it does the right thing and validate that it does it in the right way. Often the right way is not specified and the tests are simply verifiers. One of the driving principles behind clear box testing by another party is that the developer is so close to the code he cannot see the errors and cannot imagine alternative uses of the code. However, these alternative uses of the code are often met with a complaint from the developer saying, "Your test is invalid. The code works correctly for the uses that the system will require."
Unit Tests as Usage Examples
Another role that has been filled by unit tests is to provide an example of how to use the unit. Unit tests that show the proper usage and behavior of a unit are a verification activity. These unit tests might better be termed usage examples but are usually lumped into the term unit test. These usage examples show how to call the unit and assert the expected behavior. Both valid and invalid paths are shown. These usage examples are used to verify the old term "works as designed." This type of unit test will qualify many bugs found by the QA team as "works as designed." These tests do not validate the design but verify the current implementation instance of the design.
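A usage-example style test might look like this sketch in Python's unittest (the parse_age unit is hypothetical): the tests document how the unit is meant to be called, covering both the valid path and the invalid paths it was designed to reject.

```python
import unittest

def parse_age(text):
    """Hypothetical unit whose intended usage the tests document."""
    value = int(text)  # raises ValueError for non-numeric input
    if value < 0:
        raise ValueError("age cannot be negative")
    return value

class ParseAgeUsageExamples(unittest.TestCase):
    # Valid path: shows how the unit is meant to be called.
    def test_parses_a_valid_age(self):
        self.assertEqual(parse_age("42"), 42)

    # Invalid paths: document behavior that is "works as designed."
    def test_rejects_negative_ages(self):
        with self.assertRaises(ValueError):
            parse_age("-1")

    def test_rejects_non_numeric_input(self):
        with self.assertRaises(ValueError):
            parse_age("abc")

result = unittest.TextTestRunner(verbosity=0).run(
    unittest.defaultTestLoader.loadTestsFromTestCase(ParseAgeUsageExamples))
```

A QA report that parse_age("abc") throws an exception would be qualified, by these examples, as "works as designed."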
Unit Tests as Diagnostic Tests
A unit test or set of unit tests is often used in the role of a diagnostic tool. These tests are run after changes to the system to see if the verified uses of the system are still valid. If a change to the system causes a diagnostic unit test to fail, it is clear that further investigation is needed. Maybe the change has altered the behavior of the system and the tests need to be updated to reflect this, or maybe the changes did not consider side effects and coupling and are flawed.
Unit Tests as Programmer/Developer Tests
More and more often unit tests fill the role of programmer tests. Programmer tests is a term that comes from the eXtreme Programming community. On the C2.com Wiki the following definition of programmer tests is given:
(Programmer Tests) XP terminology, not quite synonymous with Unit Test. A Unit Test measures a unit of software, without specifying why. A Programmer Test assists programmers in development, without specifying how. Often a Programmer Test is better than a comment, to help us understand why a particular function is needed, to demonstrate how a function is called and what the expected results are, and to document bugs in previous versions of the program that we want to make sure don't come back. Programmer Tests give us confidence that after we improve one facet of the program (adding a feature, or making it load faster, ...), we haven't made some other facet worse.
Because programmer tests demonstrate usage they fill some of the role of usage examples. Also, programmer tests are used to make sure previous bugs do not come back, which is part of the role of diagnostic tests.
These programmer tests rarely stand alone as I have described them. They are used in Test Driven Development.
Unit Tests as a Part of Test Driven Development
Unit tests are used in the Test Driven Development (TDD) methodology. This is not part of testing or part of quality assurance in the traditional sense, usually defined along departmental boundaries. This is a design activity that uses unit tests to design (with code) the interfaces, objects, and results of a method call.
"By stating explicitly and objectively what the program is supposed to do, you give yourself a focus for your coding." Extreme Programming Explained, 2nd ed., Beck, p50.
By showing what a program is supposed to do you have given a usage example.
"For a few years I've been using unit testing frameworks and test-driven development and encouraging others to do the same. The common predictable objections are "Writing unit tests takes too much time," or "How could I write tests first if I don’t know what it does yet?" And then there's the popular excuse: "Unit tests won't catch all the bugs." The sad misconception is that test-driven development is testing, which is understandable given the unfortunate name of the technique." Jeff Patton, StickyMinds Article.
TDD is about testing close to code changes. This provides a type of isolation (unit tests are performed in isolation) related to change. If you make hundreds of changes to your unit and then check it in and run your unit tests how do you know which change or changes caused the failure? Changes often have strong coupling and it makes it difficult to figure out which change is the problem. If you change one thing in the code and then run your tests you can diagnose any problems because they will be isolated to that change.
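A minimal TDD-flavored sketch of that rhythm (the slugify function and its behavior are invented for illustration): write one test stating explicitly what the code should do, write just enough code to pass it, then run the tests immediately, so any failure is isolated to the single change just made.

```python
import unittest

# Step 1: write the test first. It states explicitly and objectively
# what the program is supposed to do.
class SlugifyTest(unittest.TestCase):
    def test_lowercases_and_joins_with_hyphens(self):
        self.assertEqual(slugify("Maverick Development"), "maverick-development")

    def test_collapses_extra_whitespace(self):
        self.assertEqual(slugify("  Unit   Tests "), "unit-tests")

# Step 2: write just enough code to make those tests pass.
def slugify(title):
    return "-".join(title.lower().split())

# Step 3: run the tests after this one change. Any failure is isolated
# to the change just made, which keeps diagnosis cheap.
result = unittest.TextTestRunner(verbosity=0).run(
    unittest.defaultTestLoader.loadTestsFromTestCase(SlugifyTest))
```

Contrast this with making hundreds of changes before running anything: here a red bar can only implicate the last small step.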
If you are working on existing code and you wish to start unit testing you
are in a difficult spot. If you are going to test your code in isolation then
creating drivers, stubs, and simulators to use with your unit tests could be
overwhelming in an existing system. But as with everything, do some studies,
some analysis and figure out what is the best approach for your situation. A
divide and conquer approach will typically help out. Take small bites and chew
slowly or you will never eat the entire elephant!
Adding Unit Tests to an Existing System
A typical approach is that any new code that is developed will have unit tests supplied with it as well. Some of the difficulty of this approach lies in which layer your unit exists in. If yours is a mid-tier unit then you have to create a driver for your unit. This driver may not be complicated but it must reflect some state of the real object that it proxies for. Also, you will have to create stubs for the lower level units that you call. The stubs could be standing in for existing units that have fairly sophisticated behavior with legacy coupling issues and known but painful side effects.
Any tier of code other than the top tier can get caught in a vicious "downstream" flood. Suppose you have developed a lower tier unit of code. You have created drivers for your unit which use test data that you have generated to feed to your unit. Suppose someone upstream adds three integers to the data object that drives your class. Just to test the lower bound, the upper bound, and one valid value for each, you may have to add 27 data objects to your set of generated test data. Therefore, downstream situations must be considered.
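The combinatorial growth is easy to see: with lower bound, upper bound, and one valid value for each of the three new integers, the generated data set needs 3 × 3 × 3 = 27 objects. A sketch (field names and ranges are invented):

```python
from itertools import product

# Boundary values for the three new integer fields: lower bound,
# one valid value, upper bound. (Field names and ranges are invented.)
values_per_field = {
    "quantity": [0, 50, 100],
    "priority": [1, 5, 9],
    "discount": [0, 10, 25],
}

# Every combination becomes one generated test data object.
test_objects = [
    dict(zip(values_per_field.keys(), combo))
    for combo in product(*values_per_field.values())
]

print(len(test_objects))  # 3 * 3 * 3 = 27 data objects
```

One upstream change to the data object thus fans out into dozens of new static test data entries downstream.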
If your unit calls several methods that manipulate the data in a pipeline fashion it increases the difficulty in creating stubs. For example if you call Method X with object A and it returns a modified object A which we will call A' and you pass A' into Method Y and it returns A'' and you pass A'' into Method Z and it returns an A''' then each hard coded stub for X, Y, and Z must behave correctly for all of the variations of A that is used in your test suite (note this is true for an existing system or for a newly developed system).
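A sketch of those pipeline stubs in Python (the names and the shape of A are invented; the variations of A are modeled as dicts that pick up fields as they move through the pipeline): each hard-coded stub must hand back the correct next variation for every A used anywhere in the test suite.

```python
# Hard-coded stubs for Methods X, Y, and Z from the pipeline above.

def stub_x(a):
    return {**a, "x_done": True}         # A   -> A'

def stub_y(a_prime):
    return {**a_prime, "y_done": True}   # A'  -> A''

def stub_z(a_double):
    return {**a_double, "z_done": True}  # A'' -> A'''

def unit_under_test(a, x=stub_x, y=stub_y, z=stub_z):
    """The unit calls X, Y, and Z in pipeline fashion; the stubs must
    behave correctly for every variation of A the suite uses."""
    return z(y(x(a)))

result = unit_under_test({"id": 1})
assert result == {"id": 1, "x_done": True, "y_done": True, "z_done": True}
```

The maintenance burden grows with every variation of A the suite uses, since each stub must stay consistent with all of them at once.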
If you decide to use programmer tests as your definition of unit tests, then you do not have to develop all of the drivers, stubs, and simulators that are required to test the code in isolation. In an existing system, write some unit tests that state simply and explicitly how the system currently behaves, and then start to make changes (refactoring changes) to the existing code. When adding new units of code to the existing system you may take the TDD approach from that point forward. The choices are yours to make.
But enough doom and gloom. I think it is understood that this is not a trivial task. The issue is going to be on deciding how many unit tests are enough.
Issues After You Have Unit Tests
After you have unit tests in place, there will be issues that arise because of their existence. What do you do when someone breaks someone else's unit tests? What do you do when QA finds a bug that could have been found if there had been more complete unit tests? What do you do when there is a system change that causes a chain reaction of changes in all of the test data?
These are all issues that easily escalate into the "blame game". That will not help teamwork or morale. Even though it is justifiable, it is not wise to allow the team that works on the "core" technology to push around the supporting teams. In those situations segregation of code begins to occur. Pretty soon you have the situation where no one wants to work with the "core" team and the "core" team doesn't want to call anyone else's code, and the problems begin to increase.
So, you have to decide how you will constructively handle the opportunities that will arise. For example, someone makes changes that break someone else's unit tests. This opportunity can be viewed as good. It is good in that the "breakage" has been found and has been found quickly. That is a good thing. It is good in that we know which persons are involved in resolving the issue, the breaker and the breakee. That is a good thing. The two involved can get together and figure out the best solution for the business. Best solutions are good things. So, in this example, many good things occurred.
If you do not take advantage of these opportunities it could go something like this. Joe broke Sally's unit tests. Sally goes to Joe and says, "These unit tests represent a usage contract. You have violated the contract. Change your code." Joe says, "Don't be ridiculous. Your code doesn't do what it should and your unit tests are bogus. Update your code to work in the new world!" Sally says, "I don't have time to deal with your inability to work within a team environment. You fix it." Joe says, "ME, ME not working as a team player. It is you! Gosh!" I think you get the picture.
Conclusion
Make it clear to your development team what definition of unit tests is being used. Understand that unit testing is not free, and the expense increases with the amount of test data needed and the management of the databases that are involved. View failed unit tests as a good thing. The earlier you know a unit of code fails, the cheaper it is to fix.
After you have settled on your definition and use of unit tests, pick up a book on the subject, summarize it, and make it available to the people involved. Clear communication is always a problem (communication is a topic in every development methodology and business guide that I have studied), and getting people to agree on terms and usage will eliminate many wasted hours in meetings.
Develop Everything the Second Time First
An Extreme Maverick Approach
by Geoffrey Slinker
v1.1 DRAFT March 2006
v1.0 DRAFT October 2004
Abstract
Applying the Maverick approach together with the Extreme premise (if something is good, then we do it all the time, or do it to the extreme) led to the idea of developing a software product the second time first.
If a piece of software is always better the second time we develop it, then we will always develop the software the second time.
Introduction
Studying the Extreme Programming methodology inspired me to consider other things that can be taken to the extreme. Being a Maverick follower, I said to myself, "Self, why aren't other good practices of development taken to the extreme?"
I have been a student of product estimation for many years. It seems commonly accepted that when estimating the cost and time of a project you must have prior experience that is directly applicable to the new task; otherwise, estimates can be off by 300% in either direction.
After writing a new product, if the development team were asked how long it would take to develop the product they had just developed, they would be able to give a very accurate estimate that excluded the mistakes of the original effort. Often I have heard, "If there was just time to write this code over again I could do it right." "Right" means a lot of things here, but I feel it boils down to this: the correct domain model has finally been recognized, and it would be great if the team were allowed to refactor or re-implement the product.
So, if writing things the second time produces products that are truly superior to the original, then the first thing we should do is write the product the second time!
Second Time all the Time
Since coding a project the second time can give great improvements in performance, coupling, cohesion, and the overall domain model, why don't departments just plan on re-writing the product and shipping only the second release? Because that sounds expensive. It sounds as if, with better planning, we wouldn't need to re-write the system. It sounds as if, had we hired better people, they could have done it right the first time. It sounds like something went terribly wrong.
The truth is that it could be a sign of all of the things above, or it could be a sign of a product whose scope was not definable until some experiments had occurred.
Prototyping is the essence of this idea. I am not saying this is a new idea! But prototyping sounded expensive, and since the company always wanted to ship the prototype, "rapid prototyping" emerged. The term "rapid" was used to discourage the company from wanting to ship the prototype. Because of this desire to ship prototypes, and because developers understood that prototypes were very useful, the development team had to undermine the quality of the prototype (at least by name, and often in fact) to keep the company from shipping it.
In the past few years other terms have arisen that refer to the idea of writing something first so that the second version can be developed: terms such as "one off", "thin thread", and "spike". These obvious prototypes or experiments are commonly used tools. Their names may exist to avoid the conversations with management needed to justify prototypes.
Unit Testing is Prototyping
As I considered the fact that developers continued to prototype and experiment during software development, I thought about how unit testing plays a role.
I propose this description of unit testing:
Unit testing is prototyping the interface and any stubbed or mocked components necessary to define the expected behavior. The unit tests are refactored until the desired behavior is represented and the interface is correct for the domain. Then you write code to get the tests to succeed.
Isn't that a form of prototyping? Isn't it a way of writing things the second time? Isn't that what Test-First-Design is all about? You are coding up the interfaces before they are implemented. You are thinking about those interfaces and changing them to meet the idea of a fluid design. You are running scenarios through your mind, anticipating issues. Mocked objects may be considered oversimplified prototypes. Stubbed methods are definitely simplified to the extreme.
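As a minimal sketch of this test-first style, consider the following. The domain (an order total that depends on a tax service) and all names in it are illustrative assumptions, not from the text; the point is that the mock prototypes the collaborator's interface before any real implementation exists.

```python
from unittest.mock import Mock

# Code under test: uses a collaborator that supplies a tax rate.
# order_total and rate_for are hypothetical names for illustration.
def order_total(subtotal, tax_service):
    return round(subtotal * (1 + tax_service.rate_for("default")), 2)

# The test is written first: the mock stands in for the tax service
# with oversimplified, stubbed behavior, prototyping its interface.
def test_total_includes_tax():
    tax = Mock()
    tax.rate_for.return_value = 0.10
    assert order_total(100.0, tax) == 110.0
    tax.rate_for.assert_called_once_with("default")

test_total_includes_tax()
```

Refactoring the test until `rate_for("default")` feels right for the domain is exactly the "second time first" effect: the interface has already been lived with once before it is implemented.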
The Future of Prototyping
The benefits of writing something first as an experiment and then writing it a second time for production have been known for many years. How can these benefits be used more efficiently? Setting aside the various names of current activities centered on the old term "prototyping", there should be more use of these activities without the fear of shipping the prototype. Maverick development is concerned with trust. Trust your engineers to know when a prototype, experiment, or spike is necessary, and trust them to take the lessons learned and apply them to a truly production-quality result.
Conclusion
It is generally accepted that, when a team is allowed to re-write a piece of software, the second version is superior to the first in a myriad of ways.
Since writing software the second time is better, we will write the software the second time at first.
Prototypes, rapid prototypes, one-offs, thin threads, experiments, spikes, and unit tests are all methods that embody the idea that writing things the second time gives better results.
Scouting and Reconnaissance in Software Development
by Geoffrey Slinker
v1.0 October 2004
v1.1 January 2005
v1.2, v1.3, v1.4 July 2005
v1.5 March 24, 2006
Abstract
Scouting and reconnaissance are two well-known methods of discovery. By these means, information and experience are gained when facing the unknown. Experience is critical to writing good software: it allows you to correctly identify problems and address them. Scouting and recon for software development are a great way to gain experience and avoid the pitfalls of the unknown.
Introduction
In the well-known book ‘The Mythical Man-Month’, Frederick P. Brooks states:
Where a new system concept or new technology is used, one has to build a system to throw away, for even the best planning is not so omniscient as to get it right the first time. Hence plan to throw one away; you will, anyhow.
As the years passed and systems grew in size and complexity, it became apparent that building a "throw away" was not the most efficient approach. In the 20th anniversary edition of the same book, Brooks states that developing a throwaway version is not as efficient as iterative approaches to software development.
In Extreme Programming Explained Second Edition, Kent Beck states:
"Defect Cost Increase is the second principle applied to XP to increase the cost-effectiveness of testing. DCI is one of the few empirically verified truths about software development: the sooner you find a defect, the cheaper it is to fix it."
Scouting and recon techniques are used to discover defects through experiments and to avoid the defect's presence in the "real" software entirely. These techniques work within phased (phasic) development methodologies as well as within iterative methodologies, and they yield knowledge and experience through their use.
Gaining Experience
There are many software development activities concerned with gaining experience. Some of these activities include creating proofs of concept, prototyping, and experimenting. I will refer to all of these activities as experiments.
How much effort should be placed in an experiment? Enough to gain the experience needed to get you to the next step.
Software Scouting
“Scouting” will be the metaphor. During the exploration of the American frontier, scouts were sent out ahead of the company to determine the safest path through unknown and hostile territory. Through software “scouting missions” one can save time and money, and reduce the risks to the company.
Brooks’ first statement concerning building a "throw away" is akin to exploring the entire route first and then moving the company. His revised statement concerning iterative development is akin to scouting out a few hours (or days) ahead and returning to guide the company. This pattern of short scouting trips would continually repeat, making the technique both iterative and incremental. Through the scouting metaphor you can gain a certain feel for why building a "throw away" version is more costly than iterative development.
Scouting Tools
There are many ways to explore the unknown, and these activities have many similarities. One of the key differentiators is the stage of software development in which the activity occurs. In the following, various "tools" for scouting are defined, along with the stage in which each is typically used.
A "Proof of Concept" occurs after a solution has been conceptualized. Investigation is needed to gain confidence and verify the viability of the solution.
A "Prototype" is made after a design has been produced. Investigation is needed to validate that the result of the design solves the problem. In engineering, prototypes may be scaled, functioning models. Software has no physical dimension, so instead the development activities are scaled back: minimal effort is spent on robustness, and usually only the “happy path” of the functionality is implemented. Techniques to reduce coupling are skipped and cohesion is ignored as much as possible. (Even though these activities are skipped, the experience of prototyping brings to light how the software components should be coupled, and an overall domain definition emerges that allows for better cohesion.)
Ed Mauldin explains prototyping thus:
“Prototyping is probably the oldest method of design. It is typically defined as the use of a physical model of a design, as differentiated from an analytical or graphic model. It is used to test physically the essential aspects of a design before closing the design process (e.g., completion and release of drawings, beginning reliability testing, etc.). Prototypes may vary from static "mockups" of tape, cardboard, and styrofoam, which optimize physical interfaces with operators or other systems, to actual functioning machines or electronic devices. They may be full or sub-scale, depending on the particular element being evaluated. In all cases, prototypes are characterized by low investment in tooling and ease of change.”
An "Experiment" occurs after software modules have been developed. Investigation into their behavior under varied conditions is needed. An experiment is conducted to observe the behavior.
A "Mock Object" is created during software implementation. Components have been developed and investigation into their behavior needs to be done. To isolate these components from the effects of other components the other components are replaced with "mocks" that have simple and specific behavior.
A "Driver" is created during software implementation. Components have been developed and investigation into their interfaces and usability need to occur. A driver is developed to interface with and drive the component. The interfaces or entry points of the components are confirmed correct and the pre-conditions of the components are exercised. The driver can validate the post-conditions of the component as well.
A "Stub" is created during software implementation. Functionality has been developed and investigation of the code paths needs to occur. Called interfaces are developed with the simplest means in order to return specific results and exercise the code paths of the caller. These simple interface implementations are stubs.
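A stub and a driver can be sketched together; the payment-gateway scenario and every name in it are hypothetical, chosen only to illustrate the two roles (the stub replaces a called interface with the simplest possible implementation, the driver exercises the component's interface and contracts):

```python
# Component under investigation (hypothetical).
def charge(account, amount, gateway):
    """Charge an amount via a payment gateway; returns a confirmation id."""
    if amount <= 0:
        raise ValueError("amount must be positive")
    return gateway.submit(account, amount)

# Stub: simplest implementation of the called interface, returning a
# fixed result so the caller's code path can be exercised in isolation.
class GatewayStub:
    def submit(self, account, amount):
        return "confirmation-0001"

# Driver: code whose only purpose is to drive the component's interface
# and confirm its pre- and post-conditions.
def drive_charge():
    gateway = GatewayStub()
    assert charge("acct-42", 10.0, gateway) == "confirmation-0001"
    try:
        charge("acct-42", -5.0, gateway)
    except ValueError:
        pass  # pre-condition correctly enforced
    else:
        raise AssertionError("negative amount should be rejected")

drive_charge()
```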
A "Simulation" is typically created after the system is implemented. A deliverable needs to be tested in various environments and conditions. A simulation of an environment is developed and it is used for testing. Common examples are simulated users, simulated load, simulated outages, and such.
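A toy load simulation might look like the following sketch; `handle_request` is a stand-in assumption for whatever system is under test, and the randomized payloads play the part of simulated users:

```python
import random
import time

# Hypothetical system under test; stands in for a real request handler.
def handle_request(payload):
    return sum(ord(c) for c in payload)

def simulate_load(requests=1000, seed=7):
    """Simulate many users by replaying randomized requests and timing them."""
    rng = random.Random(seed)  # fixed seed makes the run repeatable
    start = time.perf_counter()
    for _ in range(requests):
        payload = "x" * rng.randint(1, 100)
        handle_request(payload)
    elapsed = time.perf_counter() - start
    return requests / elapsed  # throughput, requests per second

print(f"simulated throughput: {simulate_load():.0f} req/s")
```

Real load simulations add concurrency, realistic request mixes, and outage injection, but the shape is the same: generate an environment, drive the deliverable, measure.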
When to Scout
Remember, scouting activities address the issue of gaining experience in unknown territory. These activities are not necessary when experience is present. Simply said, “If you know how to do the job, then do it!”
When one is in unknown territory scout ahead for information, then come back and apply the knowledge gained. Have enough discipline not to get distracted by the sights along the way. Stay focused, travel light, and get back to camp as quickly as possible.
Can you afford not to scout ahead? The answer to this question only comes at the end of the journey. Did you make it to your destination or not?
Scouting for Phasic Methodologies
One reason that experiments work is that they address issues and concerns in context and as they occur. It is a "learn as you go" approach. Below are some scenarios in which scouting can be used in traditional phased or phasic methodologies.
Phase 1: Analysis and Requirements.
• Paper prototypes of the user interface.
• Proof-of-concept of a requirement (e.g. the database must support 500 simultaneous connections).
Phase 2: Design.
• Refined paper prototypes of the user interface.
• Paper models of the architecture and model (e.g. UML).
Phase 3: Implementation.
• Develop an experiment for the “happy path” to discover boundaries and interfaces.
• Create prototypes ahead of implementing frameworks so that the framework's approach can be reviewed.
Phase 4: Testing.
• Create experiments to test scenarios.
• Create testing harnesses that allow for proxy users (a proxy user can be a user simulated by a computer program).
• Simulate extreme conditions such as system load.
(Testing is scouting ahead of the user to make sure the user’s experience will be a good one.)
Scouting for Iterative Methodologies
- Create a proof of concept to verify the User has conveyed their desires.
- If the user story involves a User Interface, create paper prototypes of the interface to stimulate user input and direction.
- Create a prototype to identify dependencies to facilitate iteration planning.
- Create design prototypes using a modeling language such as UML.
- Create stubs, drivers, and mock objects to increase confidence in the behavior of isolated units.
- Create an experiment to observe object behavior.
- Create a simulation to test things like performance under a heavy load.
This list is meant to be thought-provoking, not complete. The idea behind scouting is to perform some scouting activity whenever you face the unknown. When experiments are done in conjunction with an iterative development methodology and the customer/user takes an active role, the experiments are "lighter" than they would be in a phasic methodology. With the customer present, one can prototype a user interface with a whiteboard and some drawings. If the customer is not present, a user-interface prototype is usually mocked up with some kind of computer-aided drawing package, or even a "quick and dirty" user interface is developed with a GUI-building tool or scripting language.
Benefits of Scouting
- Scouting brings light to a situation. Through scouting activities estimations become more accurate. The accuracy comes from the application of the experience, not from an improved ability to predict the future.
- Scouting reduces coupling and improves cohesion. When writing software in the light of experience, the coupling between objects is reduced, and the experience unifies the system's terms and metaphors, which increases cohesion.
- Scouting builds trust and confidence by eliminating incorrect notions and avoiding drastic changes in design and implementation.
Risks of Software Scouting
- Is management mature enough to allow the proper use of an experiment and not try to “ship” the prototype and undermine the effort?
- Is development mature enough to refrain from features creeping into the product because the experiment revealed something interesting?
Project Management Ensures Adequate Software Recon
Project Management should scout ahead and see whether the development environment can support activities that rapidly build experience. Probing questions include:
- Are the software developers aware of all of the activities that can lead to experience?
- Are the stakeholders aware of the benefits of prototypes and experiments?
- Is everyone aware of the risks of not doing recon and the risks of doing recon? Remember, one of the risks of a prototype is that sometimes people try to ship it!
Warning signs that recon is being skipped include statements like:
- “If I just had time to write this right”
- “I don’t think we know how difficult this is going to be”
- “I really don’t have any idea how long this is going to take”
Conclusion
Experience is key to writing good software. The sooner you discover a problem and correctly fix it, the cheaper it is. Scouting ahead in software by using prototypes and experiments is a great way to discover the right path without risking the entire company to the unknown.
Reporting for Accountability
by Geoffrey Slinker
March 24, 2006
April 22, 2005
July 1, 2005
Abstract
Reporting and accountability are essential for business processes. Without them, budgets cannot be calculated, resources cannot be utilized efficiently, and many other problems arise. Reporting has reached a level where one can truly do "more with less." Daily stand-ups, end-of-cycle reporting, damage charts, dashboards, and burn charts accurately and concisely disseminate information.
Introduction
Often the challenge of doing "more with less" is extended to teams as a catalyst for thought, but many readily accept that the paradox is generally not possible. In the area of reporting status and progress in software development, however, the paradox holds. This is accomplished through the use of less technology than has previously been used, and through the acceptance of new mediums for documenting and reporting.
The goal of reporting should be to report the subject matter accurately, concisely, and precisely. To encourage frequent reporting, the mechanism should be easy to use.
The practices that will be used for reporting are project planning, release planning, iteration planning, 10 minute stand-ups, end of iteration reflections, end of release reflections, end of project reflections, and information radiators or dashboards.
The information radiators or dashboards include damage charts, burn charts, and project status charts.
Ten Minute Stand-Up
This meeting should be held early, and it sets the tone for the day. It should be held in proximity to the information radiators so that everyone can see what they are supposed to be doing.
Some questions that can drive the stand-up meeting are:
1. Have the expectations for success changed?
2. Has the scope of current tasks changed?
3. Have there been any changes in related or dependent projects?
4. Has the priority of any remaining tasks changed?
5. Are there changes to any risk factors?
6. Do you feel the project is focused on solving the right problems?
These questions should receive yes or no answers. Any answer that signals a change should be noted and addressed afterwards with the right set of people.
Project Planning
In the 20th anniversary edition of The Mythical Man-Month, Brooks states that iterative development has been shown to be more efficient than previous development approaches.
I will not go into the details and benefits of iterative development; I will assume it is generally understood to be superior to strictly "phased" approaches. I have elaborated on this topic in the paper "Software Scouting and Recon".
At this point the wheels begin turning as described in "Design By Use Development" (DBU). The project requirements are fed into the DBU process, which uses them to identify subsystems and boundaries. This additional information helps with organization and scheduling. The information from DBU, combined with the project requirements, becomes "User Stories", which are organized for each sub-release of the project.
Release Planning
The user stories identified as part of the project are now divided into small related groups that will be developed during the iterations of each release. The user stories are fed into the next part of DBU, and the usage examples are created. Notice that development starts early, but with sufficient information to be headed down the correct path.
Each use case has a priority determined by dependencies, complexity, and an estimate for completion.
The use cases with their usage examples are fed into the iteration planning.
Iteration Planning
All of the use cases and usage examples for the release are organized and placed into particular iterations. When an item is finished, the actual time for completion is recorded next to the estimated time.
Completion means that the usage example runs without any errors. It implies that the usage example truly represents correct usage, that all contracts (pre- and post-conditions and invariants) have been satisfied, and that the completed item has been integrated with any finished piece awaiting its use.
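The contract language above can be made concrete with runtime checks. A minimal sketch, using a hypothetical withdraw operation (not from the text) whose pre-conditions guard entry and whose invariant is checked before returning:

```python
def withdraw(balance, amount):
    """Withdraw from a balance, guarding the operation with its contract."""
    assert amount > 0, "pre-condition: amount must be positive"
    assert amount <= balance, "pre-condition: sufficient funds"
    new_balance = balance - amount
    assert new_balance >= 0, "invariant: balance never negative"
    return new_balance

# A usage example "runs without any errors" only when every
# contract clause along the way is satisfied.
assert withdraw(100, 30) == 70
```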
If this is not the first iteration then recommendations from the previous iteration reflection are considered.
End of Cycle Reports / End of Iteration Reflections
At the end of an iteration, comments and congratulations are given concerning the completed items. Any items that were not completed are discussed, and dependencies and release schedules are updated.
End of Release Reflection
At the end of a release, comments and congratulations are given concerning the completed releases. Any items that were not completed are discussed, and dependencies and project schedules are updated.
End of Project Reflection
At the end of the project, comments and congratulations are given concerning the project. The key here is to celebrate the hard work and the finished product. If scope or dates had to be changed, the information concerning those items should have been recorded in the end-of-iteration and end-of-release reflections. This is a time to take pride in the hard work, so don't dampen any spirits. At the next Project Planning meeting the lessons learned can be discussed and improvement activities identified.
They say that each project fails one step at a time, but each success also builds one step at a time. Let's take the positive view of things and remember to allow people to have success.
Information Radiators
Information radiators are placed where they can be easily seen by those who are concerned with the project's information.
Information radiators are low tech for a reason: ease of update. Information radiators should be very active and therefore must be easy to change. They should be well organized and easily captured with a high-resolution digital camera; the digital photograph is the medium of record. Remember, if it wasn't recorded it didn't happen, and notice how easy this makes recording. That is like having your cake and eating it too!
There are certain types of charts or layouts that convey information rapidly that are considered essential for each project. A description of each type follows.
Damage Charts
Alistair Cockburn described damage charts to me at a users group meeting something like this: "In the web browser market, the race between Netscape and I.E. was concerned with getting anything to market. The damage caused by not planning was not the concern; the key was getting anything into the market. Trying to make the best web browser didn't matter. Software concerning the space shuttle is completely different: the damage caused by insufficient planning could be extremely expensive and cost lives."
Other types of damage plots can be created. Damage for a missed "release window" could be plotted, but this type of chart must be very accurate or you will not be able to motivate people after "wolf" has been cried too many times. Estimating the damage of missing a window of opportunity is similar in its inaccuracy to estimating when code will be finished. The similarities should cause development and marketing to appreciate the difficulty of each other's tasks and alleviate some of the bickering and finger-pointing. Marketing should be held to its estimates with the same rigor that development is held to its estimates; in other words, "play fair".
Burn Up and Burn Down Charts
These charts show progress. The key is determining the units for the Y-axis. There are many write-ups on these charts, and one should study them to decide how to use them. Here is an excellent reference: http://alistair.cockburn.us/crystal/articles/evabc/earnedvalueandburncharts.htm
This example burn-down chart shows a project that estimated completing 100 features in 8 weeks but took 10 weeks. It also shows that the project was in significant trouble at four weeks, and that the team got back on schedule in the fifth week but was unable to maintain that velocity.
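A burn-down chart is just remaining work plotted against time. The shape described above can be sketched with hypothetical numbers (the weekly figures are illustrative, invented to match the story: trouble at week 4, back on schedule at week 5, done at week 10):

```python
# Planned scope: 100 features over 8 weeks.
planned_total = 100
planned_weeks = 8

# Ideal line: burn the features evenly over the planned weeks.
ideal = [planned_total - week * planned_total / planned_weeks
         for week in range(planned_weeks + 1)]

# Features remaining at the end of each week (illustrative numbers).
actual = [100, 88, 75, 70, 68, 37, 30, 25, 18, 8, 0]

# Crude text rendering of the burn-down, one bar per week.
for week, left in enumerate(actual):
    bar = "#" * (left // 4)
    print(f"week {week:2d} | {bar} {left}")
```

Comparing `actual` against `ideal` week by week is what makes the trouble at week 4 (68 remaining versus an ideal 50) visible at a glance.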
These charts should be updated and posted with any information radiator that is related to or concerned with this information.
Project Status Charts
These are the charts on butcher paper or white boards covered with sticky notes.
In this progress chart, each line represents a developer and the use cases for this iteration. Each yellow box represents a sticky note with a short description of the use case and an estimate for completion. The sticky note moves from left to right, representing the percent complete. When the use case is finished it is placed on the right, with the actual time for completion noted next to the estimate.
This progress chart shows the entire product. The thick line represents the product boundary. Every use case above the line will be included in the product; every use case below the line was considered but hasn't made it into the product at this time.
The use cases labeled "Release Sets TO DO LIST" are the use cases grouped by release. The current iteration at the top shows the release set that is currently being developed. The finished use cases are those completed during previous iterations. The green arrows show the flow from the "to do" side on the left into the current iteration, and from the current iteration to the finished side.
Each use case has a time estimate, and the totals of the estimates in the "to do", "in progress", and "finished" areas together give the total time to develop the product. If an item is moved from "use cases not currently in the product" into a release set, then an item with an equal or greater estimate must be removed from the release sets to keep the project on schedule.
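The zero-sum bookkeeping described above can be sketched as follows; the use-case names and day estimates are hypothetical:

```python
# Each use case carries a time estimate in days.
release_set = {"login": 5, "search": 8, "export": 3}
backlog = {"themes": 4, "reports": 6}

def swap_in(release_set, backlog, incoming, outgoing):
    """Move `incoming` from the backlog into the release set, removing
    `outgoing`, but only if the schedule does not grow."""
    if backlog[incoming] > release_set[outgoing]:
        raise ValueError("incoming estimate exceeds the item removed")
    release_set[incoming] = backlog.pop(incoming)
    backlog[outgoing] = release_set.pop(outgoing)
    return sum(release_set.values())

total_before = sum(release_set.values())
total_after = swap_in(release_set, backlog, "themes", "search")
assert total_after <= total_before  # schedule never grows
```

The guard clause is the whole point of the rule: scope may be traded, but the release-set total only ever stays level or shrinks.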