Monday, January 21, 2008

Can refactoring code ever be bad?

Definition:
Refactoring is a disciplined technique for restructuring an existing
body of code, altering its internal structure without changing its
external behavior. Its heart is a series of small behavior preserving
transformations. Each transformation (called a 'refactoring') does
little, but a sequence of transformations can produce a significant
restructuring. Since each refactoring is small, it's less likely to go
wrong. The system is also kept fully working after each small
refactoring, reducing the chances that a system can get seriously
broken during the restructuring.
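
To make the idea concrete, here is a sketch in C# of one such small transformation, an Extract Method. The OrderPrinter class and its data are made up for illustration; the point is that what PrintOwing() writes is exactly the same before and after the refactoring.

using System;
using System.Collections.Generic;

// A minimal sketch of a single refactoring (Extract Method).
// OrderPrinter and its fields are hypothetical, invented for
// illustration; the external behavior of PrintOwing() does not change.
class OrderPrinter
{
    private readonly List<decimal> _amounts = new List<decimal>();

    public void PrintOwing()
    {
        // Before the refactoring, the summing loop sat inline here.
        // After it, the loop lives in CalculateTotal(), but what
        // this method prints is unchanged.
        decimal total = CalculateTotal();
        Console.WriteLine("Amount owing: " + total);
    }

    private decimal CalculateTotal()
    {
        decimal total = 0m;
        foreach (decimal amount in _amounts)
        {
            total += amount;
        }
        return total;
    }
}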


---

Suppose there is a piece of C# code that you wish to use. Let's assume
that you have the option to:
1) Add the source file containing the code into your project.
2) Add a class library which contains the functionality to your
project.
3) Copy the piece of C# code and put it into your existing project.


Let's imagine some scenarios with each choice:
1) After adding the source file to your project, it is refactored so
that it now internally calls a routine in another class library. You
now have a dependency on, and coupling to, that other class library
(sketched in the code after this list).

2) After adding the class library to your project, the library is
refactored and now makes an RPC-type call. You now have a
networking/rights dependency on the remote functionality.

3) After copying the piece of C# code, the original changes. You are
not coupled to the changing implementation; in fact, you have no
external coupling at all.
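
Here is what scenario (1) can look like in practice. The names are
hypothetical (StringUtil is the source file you added; Acme.Text is
the other class library the refactoring pulled in, stubbed here so the
sketch stands alone), but the effect is real: your project now needs a
reference it never asked for.

namespace Acme.Text
{
    // Stand-in for the other class library that the refactoring
    // introduced; in real life this would be a separate assembly
    // that your project must now reference.
    public static class Slugger
    {
        public static string Make(string title)
        {
            return title.Trim().ToLowerInvariant().Replace(' ', '-');
        }
    }
}

namespace Shared
{
    public static class StringUtil
    {
        // Before the refactoring this method did its work inline.
        // After it, the added source file quietly depends on
        // Acme.Text, and therefore so does your project.
        public static string Slug(string title)
        {
            return Acme.Text.Slugger.Make(title);
        }
    }
}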

Scenario (2) is the most common in my current work, and it is a
problem. If a method is refactored so that it now makes an RPC or
WebServices call, and my application lives outside the firewall that
protects those low-level back-end services, then there are problems.
You couple not only to the implementation that the code represents but
also to the implementation of the system configuration.
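
As a hypothetical sketch of that situation: suppose the method below
started life as a pure, local calculation and was refactored to fetch
its answer from a back-end service. The TaxService class and the
internal URL are invented for illustration; the signature doesn't
change at all, but any caller outside the firewall is now broken.

using System;
using System.Net;

public class TaxService
{
    // Before the refactoring this was a local calculation:
    //
    //     public decimal GetTaxRate(string region)
    //     {
    //         return region == "WA" ? 0.065m : 0.0m;
    //     }

    // After the refactoring the "same" method reaches across the
    // network to a service that only exists behind the corporate
    // firewall. The caller is now coupled to the system
    // configuration, not just to the code.
    public decimal GetTaxRate(string region)
    {
        using (WebClient client = new WebClient())
        {
            string body = client.DownloadString(
                "http://internal-tax-service/rates?region=" + region);
            return decimal.Parse(body);
        }
    }
}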

Suddenly copying and pasting code has a renewed appeal.

Before one lectures me on the problems and smells of duplicated code:
I am aware of them. What if there is a bug in this copied code, and
you have to trace down everywhere the bug exists, which is nearly
impossible? Well, there are no bugs in the code; it was well tested
and found to be bug free. What if there is an optimization that would
greatly improve performance? Performance is not an issue for this
code. If it becomes an issue for this particular use, then it will be
optimized. Just because it is not "fast" enough in one application
doesn't mean it isn't fast enough in another. Maybe the optimization
would increase resource utilization, and that is something "I" don't
want in my area.

These scenarios are the result of a discussion concerning the cons of
shared code. As I say in Maverick, "Shared code is code that someone
else wrote that you don't trust."
