Finally. FINALLY. F-I-N-A-L-L-Y! For years I've been telling everyone who would listen about the negative consequences of performing looping operations in the object model (OM) against large data sets. Our internal testing proved this to be the case. Our customers' experiences validated our own results. And just plain common sense said this was a B-A-D idea. The only problem was, I never found time to do the type of empirical testing necessary to really prove my point.
Now, thanks to Steve Baschka of Microsoft, I'm no longer a lone voice in the wilderness. Steve has just published a white paper to the SharePoint Products and Technologies Team Blog examining the performance aspects of various data access methods against moderately large lists (sorry, AC, but 150,000 items isn't a "HUGE" list in the type of apps we work on). Steve's research proves emphatically what I've been saying all along: the OM is NOT performant for extensive looping operations, no matter how many times you see it done in the SDK. Just to prove how misunderstood this issue is, I've even had Microsoft engineers argue with me that the OM is the one and only best way to access list data. Well, now we can all call bovine scatology on that. Web Services, CAML queries, even plain ol' ADO.NET DataSets are all faster than the OM when it comes to retrieving data from lists.
Does this mean you should permanently banish foreach () from your code? Of course not. You just have to be careful about when and where you use it to retrieve data. It's best for small, mostly static data sets - security group membership, lists in a specific site, profile database fields - but avoid it altogether for large sets of data.
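To make the contrast concrete, here's a rough sketch of the two patterns. This is my own illustration, not code from Steve's paper - the list name "Orders", the "Status" field, and the row limit are all made up for the example:

```csharp
// ANTI-PATTERN: enumerating SPList.Items materializes every item in the
// list and filters in managed code -- the cost grows with total list size.
SPList orders = web.Lists["Orders"];              // "Orders" is a hypothetical list
List<SPListItem> openOrders = new List<SPListItem>();
foreach (SPListItem item in orders.Items)
{
    if ((string)item["Status"] == "Open")         // "Status" is a hypothetical field
        openOrders.Add(item);
}

// BETTER: push the filter into a CAML query so the database does the
// work and only matching rows cross the object model boundary.
SPQuery query = new SPQuery();
query.Query = "<Where><Eq><FieldRef Name='Status'/>" +
              "<Value Type='Text'>Open</Value></Eq></Where>";
query.RowLimit = 2000;                            // cap the result set defensively
SPListItemCollection openItems = orders.GetItems(query);
```

Same result either way, but the second version never drags 150,000 items through the OM just to throw most of them away.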
Thanks, Steve. You've made my day. Make that my month. Heck, just call it the whole year!