With the publication of the SharePoint Guidance code samples and documentation from the Microsoft Patterns and Practices group, the issue of Test Driven Development in a SharePoint environment has gained new prominence. A great deal of discussion is being devoted to this topic in the blogosphere and various communities (see Andrew Woodward’s excellent posts on the subject here), with most of the give and take focusing on the use of TDD in Agile development projects, the implementation of mock objects to simulate SharePoint objects and methods, and the integration of unit test utilities, such as nUnit, with Visual Studio.
In other words, a lot of people are talking about the what and the how but not much is being said about the why. It seems that there is a lot of agreement that TDD is the way things should be done but not much discussion on what quantitative benefits it provides or why an organization that doesn’t currently use TDD methods should change the way they work to facilitate this new and supposedly better way of doing things. There also seems to be much confusion over the use of TDD with SharePoint, just as there was about Agile and SharePoint back when v2 came along, with differing opinions to be found just about everywhere one looks.
So what’s really going on here? Is TDD for you, and, if so, why? What value does it bring to the Software Development Life Cycle (SDLC) in an enterprise environment? And does it work with SharePoint?
In this series of posts, I set out to answer these questions from my own perspective. Your perspective may differ and, if you have contrasting thoughts on the matter, I invite you to comment and make your voice heard.
A New World Order
Let us begin by defining what, exactly, TDD is, what it is meant to accomplish, and how it is implemented. TDD is, by definition, a methodology for developing software that focuses on testing the validity of individual components (methods, properties, behaviors, etc.) at an atomic level before they are compiled into a larger construct (such as a class). The developer writes a test case to validate that a component produces the desired result by returning a boolean value – either the test passes (True) or it fails (False) - based on the assertions set forth in the test case.
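To make that pass/fail mechanism concrete, here is what a minimal test case might look like in the NUnit style (the Calculator class and its Add method are purely hypothetical, invented here to illustrate the mechanics):

```csharp
using NUnit.Framework;

[TestFixture]
public class CalculatorTests
{
    // In TDD this test is written before Calculator exists; it fails
    // first, then passes once Add is implemented to satisfy the assertion.
    [Test]
    public void Add_TwoIntegers_ReturnsSum()
    {
        Calculator calc = new Calculator();
        Assert.AreEqual(5, calc.Add(2, 3)); // the test passes (True) or fails (False)
    }
}
```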
TDD grew out of the Extreme Programming and pair-programming methodologies of the late 1990s, which were founded on the theory that two developers (or, in the case of XP, a team including designers, developers, and testers) working side-by-side could be more productive than a single person working on their own - if for no other reason than developers tend to get bogged down in their own logic and another set of eyes can see solutions that might otherwise be missed. In this scenario, it made sense for one programmer to write a method and another to validate it; thus, the design focus underwent a subtle but important shift from "how will I write this method to meet the required functionality" to "how will I write this method so it passes the test case" and TDD was born.
The theory behind unit testing and TDD, like many other methodologies that came before and after, is that higher-quality code is produced by focusing on creating testable classes and methods as part of an overall "test first" design scheme. This is a fine theory in and of itself, and certainly one with good intentions, but like any other theory it has a large number of drawbacks and pitfalls (a situation which seems endemic to all software design methodologies) that make it a drastically different beast in practice than it would appear when diagrammed on a white board or bantered about in academic circles.
First of all, companies quickly discovered that pair-programming is a tremendously inefficient process. As the old saying goes, nine women can't have a baby in a month, and two programmers can't write code in half the time. For starters, getting two programmers to agree on any particular practice or pattern is nearly impossible. Furthermore, good software engineers are expensive and in high demand - putting two of them on the same coding task is rarely cost-effective. Finally, it has been my personal experience that two coders working side-by-side tend to engage in more discussion about how things should be done in a perfect world than they do actually writing any code. They are by their nature perfectionists, and each always wants to prove that they have the best and most innovative ideas. Your experience may differ but I have found that productivity drops significantly when two people spend a good portion of their time bantering about esoteric design considerations and the "best" way to do something instead of focusing on the task at hand.
Second, it is always dangerous to focus so much attention on one part of a process in isolation to the degree that it becomes a self-feeding echo chamber. This has the unfortunate tendency to elevate a tool or methodology to cult status, catalyzing the creation of an attendant philosophy complete with its own lexicon and hierarchy, resulting in a rapid shift from a means to an end into the end itself. Adherents of this new creed wax evangelical about its ability to solve nearly every problem anyone will ever encounter in the course of developing software applications and become greatly offended when anyone disparages their newfound faith. Worse, they quickly forget that there are other ideas out there with equal validity and that their particular job function is only a piece of the overall puzzle - just because you found a great new method for making a whosawhatsit doesn't mean that it applies to the rest of the whirlygig manufacturing process.
Third, and mostly due to the phenomenon described in the previous paragraph, developers who become adherents to the new programming religion of the moment quickly slip into the fatal trap of assuming, despite all evidence to the contrary, that their methodology makes the code they create bulletproof. Naturally, when their code fails, they point the finger at everything else - the network, the servers, the operating system, anything at all that doesn't reflect back on them, because to admit failure would be to invalidate the core tenet of their faith - namely, that the methodology they use is foolproof (or, at least, better than any alternative). This leads to a lot of finger pointing, endless "he said, she said" meetings, and zero progress towards resolution of problems. You can see this in action by listening to any TDD adherent as they bandy about terms like "100% code coverage", "atomic validation", "no debugging" and other such phrases designed to reinforce their own misguided sense of infallibility.
The Quality Assurance Lifecycle
For the purpose of gaining some perspective, let's step back for a moment and examine the place of unit testing within the overall quality continuum. The Software Quality Lifecycle, or one of its many variants, is the driving process behind assuring the production of quality software. While slightly different from project to project and company to company, it always resembles something akin to the following:
UNIT > FUNCTIONAL > INTEGRATION > REGRESSION > PERFORMANCE > USABILITY
It is vitally important to emphasize that unit testing is only one part of the overall quality assurance lifecycle. In fact, in a properly structured QA environment, unit testing actually has a lower priority than several other phases (note, however, that it does not have a lower importance, although it can be argued that the cost of fixing a bug in unit vs. functional testing is largely negligible). Creating software that is inherently unit testable satisfies only ONE of the phases in the QA cycle - it has no direct impact on the other phases. It is often argued, usually by proponents of unit testing, that if a particular set of code passes its assigned unit tests then it follows logically that there will be fewer exceptions in subsequent test phases. This is a myth; unit testing only guarantees the validity of the software against the prescribed test cases - it does not take into account any variables from other phases (nor, in truth, should it). In fact, it is my belief that the improper assumption that unit-tested code is inherently more trustworthy than code that has not been unit tested is one of the primary factors behind the high failure rate of modern software applications.
Why is this? Because testers don't write unit test cases - developers do. This, my friends, is a classic example of inmates running the asylum. Developers write tests that support their particular coding methodology - they do not write tests that correspond to the user expectations or functional requirements. If I make the rules, and police my own rules, then it stands to reason that I will not violate my own rules. This, combined with the ridiculous notion that one should "write the minimum amount of code required to make a test pass", is why we still have software products with hundreds or even thousands of bugs discovered after the product is released. If testers were to write unit tests you would see a completely different paradigm result as QA personnel are more closely aligned with the user requirements than developers ever will be; in point of fact, most developers exhibit, at best, apathy and, at worst, downright scorn for the people who use their products. An experienced tester never focuses on the result of a single test or group of tests; rather, they think in terms of how the test results relate to the user's expectations. Instead of accepting the mantra of "write the minimum amount of code required to make a test pass", a tester would tell a developer to "write the minimum amount of code needed to meet the user requirements" - a small but tremendously important difference.
I'm sorry to point out this blatantly obvious fact but developers don't really know squat about quality and even less about how to achieve it. The trouble is, nobody ever trains developers how to test products; in fact, they rarely, if ever, bother to train them at all, and certainly not on any type of quality assurance methodology. Rarely do developers ever know who their users are or how the product they are creating is actually being used, or will be used, in the real world. This problem is endemic to the system, as we treat software development as a skill and not a discipline. Seek out a structural or mechanical engineer in a position of authority in his or her particular field and you will find someone with a breadth of knowledge about how the systems they design are used, who uses them, what challenges they face and how that system solves those problems. Try to find someone with the same level of experience and knowledge in any software development group, either within the enterprise or at an ISV; you won't find very many if you find any at all. If they do exist, they are usually off in a back room somewhere, consigned to supporting legacy systems that the young hotshots (who, in one of the most bizarre paradoxes of the modern era, usually make more money than they do) wouldn't ever consider touching. Instead, they should be paid primarily to provide their wisdom and experience to the rest of the team.
Consider the example of a modern automobile. These are complex pieces of engineering with thousands of moving parts that must all fit together seamlessly in order to meet their designed purposes (not unlike a computer operating system). Manufacturers design the product then provide detailed specifications to their suppliers to produce all the parts necessary to assemble the final product. These suppliers, naturally, are responsible for creating each part and ensuring that it meets the defined requirements (in other words, unit testing). When each part is shipped to the manufacturer they can then be combined to create an automobile but - and here is the most important thing - nobody knows if it will work until all the parts are assembled and tested as a holistic system (functional and integration testing). Just because a supplier provides a part that they have performed their own quality testing upon doesn't guarantee that it will work with all the other parts. The design could have been wrong, the specifications vague, the requirements lacking a particular detail, etc. If the assembly plant simply assumed that, because the parts are guaranteed to work, they could put them together and ship the product without performing their own exhaustive battery of tests, we would all be riding horses to this day.
The key difference between the automobile industry and the software industry is that the suppliers are usually engaged early in the product design cycle and every one has dedicated quality engineers - not just testers but professionals who understand quality inside and out and know how to implement processes to ensure that the final product is up to snuff. In addition, these professionals often have years and years of experience in all phases of the industry so they understand what impact their decisions have upon the subsequent phases of the cycle that occur once the product leaves their plant. We have no such comparable positions in the software field at any level, which in and of itself guarantees that no methodology, however well thought out or intentioned, is ever going to result in a dramatic increase in the quality of custom software, regardless of what you call it or how many people jump on the bandwagon. This is even more true of SharePoint, for which there are precious few individuals who even understand how the product works, much less who can analyze a piece of code with an eye towards how that code will function when deployed or what impact it may have on all the other interrelated pieces of the framework.
Theory vs. Practice
Perhaps the most critical condemnation of TDD and its cousins is the vast gulf between the theory of the methodology and the actual results in practice. TDD assumes that everyone in the development lifecycle both understands and is willing to assume the overhead of enforcing the restrictions of the methodology. For developers who come from a different background or who have a different understanding of SDLC this can be an extreme test of patience and perseverance. Nothing invites more confusion than first having to swallow the profoundly unintuitive practice of separating every method into a separate class, thus creating a maze of references and inheritances that make the tax code seem simple by comparison, and then having to assume the burden of writing two, three, sometimes ten times as much additional code in order to ensure that those very same classes meet the defined requirements. Then, of course, comes the devilishly difficult task of maintaining the code months, sometimes years, after the people who once understood it are gone.
Assuming that these obstacles can be overcome, which is no small assumption, one then must quantify the cost of such efforts against the actual, not perceived, benefits. Certainly, unit-testable code provides a certain level of comfort with regards to the underlying code base and the ability of the developer to meet the stated requirements. But what does it really cost, both in initial and long-term levels of effort, and does it really deliver code that is so much better than any other methodology? These are business questions that, as is usually the case, adherents of software design methodologies rarely consider or even understand; to them, it's all about the purity of their practices, with little or no regard to what the true bottom-line impact may be.
Take, for example, the following snippet of code:
private SPListItemCollection GetListItems(string Url, string ListName)
{
    SPListItemCollection coll = null;
    SPSite site = new SPSite(Url);
    SPWeb web = site.OpenWeb();
    SPList list = web.Lists[ListName];
    coll = list.Items;
    return coll;
}
Now, a quick glance at this method by anyone with a modicum of SharePoint development experience would reveal a number of potential problems. But let us approach this from the perspective of TDD and see how the methodology handles (or fails to handle) this situation. For purposes of brevity, lest this post turn into a treatise and from there into a novel, I will refrain from showing actual test case code but will instead simply describe the test cases (but I do invite the TDD folks to post a response with their own test cases).
First, the TDD mantra states that we a) must write the test case first, which must always result in failure, and b) write the absolute least amount of code to make the test pass. So to begin, we need a test case that simply calls the empty method. Whoops, hold on - we've encountered our first issue. The method in question is private and must be made public. Okay, not a big deal, but wait, there's more - proper TDD with mocks requires Dependency Injection (or Inversion of Control, whichever you prefer). Thus, our method must now become a class, which requires both an additional element in our project and a reference dependency in our core class (let's assume that we are creating a web part for purposes of this example). So our simple snippet of code must now contain all the overhead of an extracted class which, in this case, isn't so bad in and of itself, but could become quite onerous if the method were more complex or the project quite large. Nevertheless, let us proceed to the next testing challenge.
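As a rough sketch of what that refactoring entails (the interface and class names below are my own invention for illustration, not drawn from any official guidance), our six-line method balloons into something like the following before a single test is written:

```csharp
// The method is promoted to a class behind an interface so that a
// mock implementation can be injected in its place during testing.
public interface IListItemProvider
{
    SPListItemCollection GetListItems(string Url, string ListName);
}

public class ListItemProvider : IListItemProvider
{
    public SPListItemCollection GetListItems(string Url, string ListName)
    {
        SPSite site = new SPSite(Url);
        SPWeb web = site.OpenWeb();
        return web.Lists[ListName].Items;
    }
}

// The web part now receives the dependency through its constructor
// instead of simply calling its own private method.
public class SampleWebPart
{
    private readonly IListItemProvider _provider;

    public SampleWebPart(IListItemProvider provider)
    {
        _provider = provider;
    }
}
```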
Our second step is to start writing test cases to validate our code. For this simple example, no fewer than seven tests are required:
- Missing or empty URL variable
- Valid URL variable syntax
- Existence of the specified site
- Missing or empty ListName variable
- Existence of the specified list
- Valid SPListItemCollection return object
- Null or empty SPListItemCollection return object
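For illustration, the first two tests on this list might be sketched in NUnit as follows (ListItemProvider is a stand-in name for however the method ends up being exposed, and I am assuming the method signals invalid input with exceptions; this is a sketch, not authoritative test design):

```csharp
using System;
using NUnit.Framework;

[TestFixture]
public class GetListItemsTests
{
    // Test 1: a missing or empty URL variable should be rejected
    [Test]
    public void GetListItems_EmptyUrl_ThrowsArgumentException()
    {
        ListItemProvider provider = new ListItemProvider();
        Assert.Throws<ArgumentException>(
            () => provider.GetListItems(String.Empty, "Tasks"));
    }

    // Test 2: a syntactically invalid URL should fail validation
    [Test]
    public void GetListItems_MalformedUrl_ThrowsUriFormatException()
    {
        ListItemProvider provider = new ListItemProvider();
        Assert.Throws<UriFormatException>(
            () => provider.GetListItems("not a url", "Tasks"));
    }
}
```

Multiply that by seven and the test harness already dwarfs the method under test.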
Hmm, that's a fair amount of code to write - at least five times the number of lines the original method required. But worse than that, four of our seven tests are impossible to validate. Why? Because TDD requires a disconnected and isolated test environment; that is, the tests must run to completion at the time of compilation, before the code is deployed (remember, we're working with SharePoint here, not a locally-hosted ASP.NET application). Can you tell which tests are out of bounds? I'll help you out - numbers 3, 5, 6 and 7. Why? Well, in the case of numbers 6 and 7 it's easy - our test classes don't have a clue what an SPListItemCollection object is. Ouch. Now what?
To solve this problem, we need some mock objects. Well, ok, but where do we get mock objects for SharePoint? At present, there is only one solution for this - TypeMock Isolator for SharePoint. But wait - Isolator is a commercial product. You have to buy a license to use it in your development process. Putting aside the obvious disparity between the promotion of community tools by development methodology cheerleaders and the necessity to buy a commercial product in order to implement such methodologies, let's face the obvious question: Do you have complete faith that the vendor has fully implemented every method, property and dependency of the core API in its mock objects? In other words, the SharePoint API is the largest ever written by Microsoft and we're supposed to accept on faith that TypeMock has reproduced, in its entirety, a fake collection of objects that matches this entire API? And what happens with each release of a cumulative update or service pack, both of which have been known to introduce changes to the API without warning? Are you willing to bank your production code, and, consequently, your reputation and job security, on the ability of a third-party to keep pace with the SharePoint product group?
This is a very serious problem. Mock objects and SharePoint simply do not coexist well, as I will endeavor to demonstrate a bit later. For now, let us address the two remaining tests in our sample that are giving us headaches, namely numbers 3 and 5. It is here that the TDD model and SharePoint stand in stark opposition to one another. There is simply no way, if you desire to strictly comply with the guidelines of TDD, that you can validate the existence of a specified site, or a list within that site, without having a live connection to the target system (or at least a development system with a similar structure). You must assume, in order to pass the unit tests, that the specified site URL and list do, in fact, exist. But unit testing is not about assumptions; it's about validations. Assumptions violate the core principles of the methodology. So here's the stark reality, revealed by a very simple example: THERE IS NO SUCH THING AS 100% CODE COVERAGE.
It is simply not possible, especially when creating software that is one small part of a much larger system, to test all the variations, permutations, parameters, outcomes or possibilities for each line of code. This is a myth and anyone who tells you different - in other words, claims to be promoting a methodology that promises bulletproof code - is either dishonest or delusional. If we could all agree upon this very simple truth then it would be much easier for organizations to adopt practical components of unit testing and discard all the hocus-pocus, thereby improving the overall quality of their code without forcing them into a rigid set of methodologies that, in and of themselves, provide little lasting business value.
Speaking of value, let us continue with our example, in order to determine the true scope and cost of TDD in this scenario. Assuming (you will notice that there are a great number of assumptions involved here, the irony of which, I hope, is not lost on anyone) that the tests for numbers 3 and 5 are, in fact, feasible, we are then confronted with the next operation; namely, writing the minimal code required to make the tests pass. I have already done this in the provided example, using only six lines to satisfy the dozens of lines of test case code that are required to validate it. But here is the real question - just because my tests pass does that mean I have written good code? It does if you follow the strictest definition of TDD. The tests run, the code passes, we must move on to the next method or class. Except, of course, for a few critical problems; namely, that the code above does not follow best practices, is guaranteed to fail at some point (likely sooner rather than later), does not properly manage resources, and contains no verification logic. It is, in fact, very poor code.
Here is how the code should be written:
private SPListItemCollection GetListItems(string Url, string ListName)
{
    SPListItemCollection coll = null;
    if (Url != String.Empty && ListName != String.Empty)
    {
        try
        {
            SPSecurity.RunWithElevatedPrivileges(delegate()
            {
                using (SPSite site = new SPSite(Url))
                {
                    using (SPWeb web = site.OpenWeb())
                    {
                        SPList list = web.Lists[ListName];
                        if (list.Items.Count > 0)
                        {
                            coll = list.Items;
                        }
                    }
                }
            });
        }
        catch (System.Exception ex)
        {
            // log the exception in whatever manner suits your environment
        }
    }
    return coll;
}
[Note: I realize that everyone does things a little bit differently. Some might choose to use Site.AllWebs while others might insist upon catching a specific exception instead of a general system exception (and, of course, logging the error in a more descriptive manner). Note also that I elevated permissions in order to ensure that the list could, in fact, be retrieved, even if the requestor does not have sufficient permissions to do so. This may not suit your requirements. Feel free to modify it as you see fit to suit your particular preferences.]
Now, read the list of required test cases, then read the revised code snippet again. Do it one more time. Notice anything? Look closely - the code itself satisfies all the test requirements without requiring any level of abstraction, dependency injection, nUnit, or other such artifices. In fact, it achieves what TDD fails to do and that is handle - elegantly, I might add - numbers 3 and 5 on our list. Furthermore, it takes into account both user permissions and proper disposal of objects, neither of which can be handled effectively by abstracted test classes. I realize that the TDD folks are jumping up and down at this point, screaming that this isn't really testing at all, but that is entirely the point - you don't have to unit test all the variants if you handle them within the code itself; remember, this isn't about testing for testing's sake, it's about meeting the user's requirements and delivering higher-quality output.
So now let us examine the business value of both approaches. The TDD method requires a great deal of abstraction and dozens of additional lines of code. It also fails to handle several critical validations and does not enforce any resource management or security elevation. So what the company gets is a block of poorly written code that, while it passes several tests, is hard to manage, difficult to validate and took a great deal of extra time to construct. Not to mention the expense of purchasing, learning and deploying a third-party utility to implement mock objects. On the other hand, the direct approach (let's call it inline validation for lack of a better term), satisfies all of the requirements and more in just a few lines of code that is easy to read and maintain. To take it one step further, if the company now decides to implement unit testing as part of their SDLC the developer need only write a single test case to validate that the method returns an SPListItemCollection object because the rest is handled better inside the method itself (you could, in fact, write test cases that use generics instead of mocks to perform the required validation but at some point you will need a fake object to compare against so you can never completely get away from the mock issue).
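That single remaining test case might look something like this (again a sketch: ListItemProvider is my hypothetical wrapper from earlier, the site URL and list name are placeholders, and a live development farm is assumed to be reachable):

```csharp
using Microsoft.SharePoint;
using NUnit.Framework;

[TestFixture]
public class GetListItemsReturnTests
{
    // Runs against a live development farm rather than a mock; the URL
    // and list name below are placeholders for your own environment.
    [Test]
    public void GetListItems_ValidSiteAndList_ReturnsCollection()
    {
        ListItemProvider provider = new ListItemProvider();
        SPListItemCollection items =
            provider.GetListItems("http://devserver/sites/test", "Tasks");
        Assert.IsNotNull(items); // everything else is validated inline
    }
}
```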
So what are we trying to say here? Simply this - in order to get real value from TDD in SharePoint you must already know how to write good code. All the unit tests in the world won't change this fact. And the only way to learn how to write good code is to do it over and over again, gathering knowledge along the way from those who have gone before you. This leads to the fundamental problem with TDD as a methodology - it doesn't teach developers how to write good code; rather, it teaches them how to write testable code. We as a nation (I'm referring to the United States here for all my international readers) are currently suffering from an educational crisis brought about by, in large part, our focus on teaching children how to pass tests instead of teaching them how to solve problems and be critical thinkers. So here we are, in spite of knowing this and lamenting it ad nauseum, tripping over ourselves to write software that focuses on test coverage instead of feature requirements and best practices. Doesn't that strike you as the opposite of what we should be doing? Shouldn't we be building best practices and design patterns into our development tools so that we can automate what is known to work and come up with solutions to new challenges?
Further, we are introducing an even more dangerous element into the equation - overconfidence. Developers, especially green ones, who accept the tenets of TDD with blind faith, come to believe that they are writing good code, when nothing could be further from the truth. They are, instead, erecting a self-consuming edifice which requires more and more infrastructure to maintain its own existence. The more code you write, the more tests you need. The more tests you have, the more code optimization is required. It seems to exist only for the purpose of existing; in other words, the theory has value but the implementation is sorely lacking.
Ghosts in the Machine
All of this leads to another fundamental question: is TDD itself flawed or is SharePoint flawed? Well, to be sure, it's a bit of both, but mostly it is the application of TDD methods to SharePoint that is at the root of the problem. Simply put, there are too many variables within the SharePoint ecosystem to make current TDD methods practical. A localized development environment, in which all of the necessary parameters could be specified and which would obviate the need for mock objects, would be a great step forward in solving this dilemma. But, to be fair, the development effort required to create such a platform would be breathtaking in its scope and complexity. Can you imagine putting together a virtualized server farm environment, complete with Shared Service Providers, all the various server roles, enterprise features, and attendant configuration requirements that runs as a Visual Studio plugin on Windows Vista or 7? Cracks and shards, that would cost nearly as much as developing the real thing (you get extra credit if you figure out where the epithet I just invoked is derived from)!
Mock objects, I'm afraid, are a very poor substitute for deployment and testing on a real instance of SharePoint. They are unwieldy, incomplete, and fail to deliver adequate results. They are a square peg in a round hole, failing to solve the problem for which they were designed. It is far better, and far, far more efficient, to write good code with inline validation that runs against a live system than to rely upon mock objects to externally validate your methods.
Still not convinced? Then take the following two examples - user profiles and search. Would you care to be tasked with the creation of a mock object that simulates either one of these core system components? And, even if you could, how would you account for the variances in how those components perform in production? Want to take a stab at the hundreds of profile properties and how they can be corrupted by a bad AD import or have endless combinations of nonsensical values based on what someone put into the directory service years ago? How about search results, including scopes, crawl sources, managed properties, federated results, and so on? And how they might change based on the inclusion or exclusion of a single character or symbol? Are you up to that task?
I would be remiss if I did not take this opportunity to point out the biggest Achilles' heel in the entire mock infrastructure: security. Or, to be more precise, a complete disregard for security as it relates to code execution. Just about everything in SharePoint is dependent upon the security context of the user and the context in which the code is executing (remember, we're dealing with two sets of permissions - those which you grant individual users and those which you grant a specific assembly or set of assemblies). Mocks have absolutely no capability to handle security context in either instance, ensuring that any code you "test" with a mock framework will have to be retested in the proper security context at a subsequent stage, thus leading to the release of code which, despite passing all the unit tests, is almost guaranteed to fail. This is almost worse than performing no testing at all; at least with zero test cases you have no preconceptions that your code is actually in a working state and that it's ready to pass on to the quality assurance team. What is the point of investing time and effort into a methodology that can't even handle the most basic elements of the system for which you are writing code? How does this make any rational sort of sense?
Some will say, and probably already have, that these are edge cases and do not reflect a failure in the methodology, that I am simply choosing examples that I know don't fit the model. Pardon me for saying so but that's a complete load of bollocks. TDD's proponents endlessly espouse the ability to achieve 100% code coverage in all situations. Anything less and you aren't doing it right. Hey, they made the rules, not me; just because SharePoint has a number of core elements (and there are more than a few - need I mention the BDC or the publishing framework?) doesn't mean SharePoint is somehow at fault - it means that TDD loses a great deal of value in an environment where the developer does not strictly control all the variables. You will notice that I am not pontificating on TDD with any other platform, although it certainly may apply there as well, I am speaking specifically about SharePoint. And in that context TDD on its own is of limited value in helping to create better, more sustainable and highly optimized code; in fact, it only adds any real value at all when integrated with a comprehensive quality assurance strategy put into place by programmers with deep knowledge of the SharePoint API.
The Road Ahead
So where does all this leave us? You may be under the impression at this point that I am completely against any type of testing or design methodology. Au contraire. I am simply advocating caution against slavish devotion to a particular methodology at the expense of the primary objective (which is to deliver quality code, in case anyone has forgotten). I am not, in point of fact, against unit testing or design patterns (though I do vehemently protest against anyone who thinks that TDD is a true design methodology - it most certainly is not, even if 'design' is often wrongly inserted into the name). Nor am I against SDLC methodologies on the whole - we are desperately in need of better ones. I utilize a combination of methodologies, including MVP, Agile, a little Scrum, and some focused unit testing. I also have made it a point to learn, as extensively as I feel is possible, how the system I am programming against works, so that I may write better code that handles validation within the code itself, rather than relying upon any external framework. But I also realize that, from a quality perspective, the code creation process is only one piece of the puzzle. Unit testing has value, but it pales in comparison to extensive integration testing, feature verification and user acceptance testing. An important piece of the recipe, to be sure, but not the whole enchilada.
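By "validation within the code itself" I mean something like the following - a hypothetical sketch (the function name `set_quota` and its limits are invented for illustration) in which the code defends its own inputs instead of trusting that an external test harness exercised every path:

```python
# Hypothetical sketch: the function validates its own inputs up front,
# so bad data fails fast in production, not just in a unit test.

def set_quota(site_url, megabytes):
    """Apply a storage quota to a site, defending against bad inputs."""
    if not site_url.startswith(("http://", "https://")):
        raise ValueError("site_url must be an absolute http(s) URL")
    if not (0 < megabytes <= 100_000):
        raise ValueError("quota must be between 1 and 100,000 MB")
    # ... call into the real API here ...
    return {"url": site_url, "quota_mb": megabytes}

# Valid input succeeds; invalid input is rejected by the code itself.
assert set_quota("https://portal/sites/hr", 500)["quota_mb"] == 500
```

Nothing here replaces integration or acceptance testing; it simply means the code carries its own guarantees wherever it runs, whatever the test framework did or did not cover.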
What I would like to see happen in the future is for fewer people to keep jumping from fad to fad trying to find development nirvana and instead focus on the practical elements of writing good software. There are things we can do - must do, in fact - to add structure and discipline to our profession, but they do not involve quasi-religious devotion to myopic methodologies and a dependence upon halfway measures that give us a false sense of infallibility. Instead, we must focus on improving our tools, educating the greater development community, and - dare I say it! - listening to our customers. I assure you, most CIOs (and all CFOs) don't give a rat's rear end about what methodology, pattern or practice we use. They want quality software that doesn't break every time you look at it wrong and doesn't require a team of geniuses to maintain.
In the second post in this series, I will propose a process map for SharePoint development that takes into account not just unit testing (and, yes, that includes a fair amount of thoughtfully-applied TDD) but also all the other phases of the software quality lifecycle. What I will propose is a common-sense development paradigm that does not stand upon one single methodology but instead incorporates multiple methodologies and is flexible enough to adapt to various design patterns and programming techniques. It won't be anything earth-shattering or prophetic but it might surprise you how many sensible and practical ideas have fallen by the wayside in our misguided quest for some sort of development holy grail.
But before we get to that, I want to hear from you. I know there will be a lot of people reading this who will take offense at my criticisms. Still more may agree with me on certain points but not on others. A much smaller group may agree with me entirely - for them, I pledge my deepest sympathy for all they have endured to reach such a state of enlightenment or bewilderment (the difference, of course, being defined by where you stand on either side of the argument). I urge you not to make any knee-jerk responses; instead, read what I've written several times before responding and see if you can endeavor to make a reasoned argument for or against my conclusions, preferably accompanied by examples and references. I would like this to be a conversation that everyone benefits from, not a flame war in which everyone gets their fingers singed. Of course, if you really think I'm just an ill-informed ignoramus, then feel free to post your opinion, but I doubt it will do much to sway anyone to your position; it may, however, provide a healthy dose of entertainment, which I welcome at all times. For those of you who have your own soapbox on the 'net and whose thoughts on this matter go beyond the realm of a simple comment or reply, please post a link to your blog entry and I will include all such links at the end of the main body of this post for easy reference.
Let the discussion begin!
SharePoint and TDD - The Other View