If you have managed to successfully configure User Profile Synchronization in your 2013 environment (which is a daunting task in and of itself) then at some point you are going to have to deal with the personal sites of users who have been disabled or removed from Active Directory. SharePoint tries to be helpful in this regard by identifying account status changes during the synchronization process, deleting user profiles from the database, and notifying the user's manager (if there is one) of the fact that the associated My Site will be deleted in a couple of weeks. Unfortunately, this notification leads to a bit of confusion as the manager can't actually browse to the user's My Site from the provided link. Here's a sample of the system-generated email notification:
The My Site of [USERNAME] is scheduled for deletion in 14 days. As their manager you are now the temporary owner of their site. This temporary ownership gives you access to the site to copy any business-related information you might need. To access the site use this URL: https://[MY SITEHOST]/personal/[USERID]
The above email is generated when the My Site Cleanup Job timer job runs, at which time the user's manager is also added to the Site Collection Administrators group of the target My Site. Trouble is, the link itself doesn't work – browsing to it invokes the PersonalSpaceRedirect control on the default.aspx page for the SPSPERS site template, which checks to see if the current user is the site owner; if not, it redirects to the "person.aspx" page on the My Site host. Note that it specifically checks the site owner and secondary contact properties – it does not check to see what groups the user is a member of. So even though the manager has been given full control of the site collection they still get redirected to person.aspx whenever they try to browse to the My Site default home page.
This would be fine, as the standard links on the redirection page allow for navigation to the deleted user's OneDrive folder and from there to Site Settings and Site Contents, except for one major problem – as soon as person.aspx loads it errors out and the manager gets the ever-so-friendly "An error has occurred" page. For once, the error actually says what the problem is: "User not found". Why? Because the user's profile has already been removed from the profile database during the synchronization process. The page is dynamic, attempting to load the user information from the account name passed in the query string argument, but since there is no such user to be found anymore it throws an error.
The good news is that the original user's My Site is actually still there (well, it's there for 14 days, anyway – after that it's gone). If you know the SharePoint URL structure you can still browse to various system pages like "/_layouts/settings.aspx" and "/_layouts/viewlsts.aspx", as well as certain lists, including "/documents" (the user's OneDrive folder). This is fortunate, as users don't have the ability to delete the core Documents folder, so if the manager knows the ropes they can just append "/documents" to the end of the link in the email and they're good to go. But not everyone knows this, so it would be helpful if the link could be changed to point to the Documents folder instead of the home page. Unfortunately, there's no supported way (that I've been able to find) to modify the notification email. And this functionality remains broken even after SP1 and the June 2014 CU for SharePoint 2013 (seriously, who tests this stuff – anyone at all?).
So what can we do? Well, if you're up for it, you can write your own My Site cleanup timer job as Kirk Evans describes in this blog post (which is also good reading for background on the entire synchronization and cleanup process). If you want to modify the length of time before a deleted user's My Site is removed, change the email text, or otherwise make changes to the overall process then this is your only option. But what if you just want to address the broken person.aspx redirect problem? Unfortunately, you still need some custom code, but there is a way to do it that's not quite as painful as writing a custom timer job. I'll walk you through a quick solution that I came up with – there are probably a dozen other ways to do this but it solves the problem in a supported way with minimal code to maintain.
At the root of the problem is the PersonalSpaceRedirect control. You can find the control reference by opening the C:\Program Files\Common Files\microsoft shared\Web Server Extensions\15\TEMPLATE\SiteTemplates\SPSPERS\default.aspx file on any SharePoint web server (NOTE: Please don't modify this file directly – that's unsupported and a bad practice in general). Below all the control registrations and page directives you'll find this bit of markup:
<asp:Content contentplaceholderid="PlaceHolderPageTitleInTitleArea" runat="server">
    <SPSWC:PersonalSpaceRedirect runat="server" />
    <SPSWC:LabelLoc TextLocId="MySiteContentText" runat="server" />
</asp:Content>
As mentioned above, this control handles redirection for users who aren't the site owner of a given My Site by checking the site owner and secondary contact properties, and if the user is neither of those then it sends them to the person.aspx page instead (if you'd like to investigate it yourself you can find it in the Microsoft.SharePoint.Portal.WebControls assembly using ILSpy or Reflector). Naturally, the My Site Cleanup Job doesn't make the manager a site owner or secondary contact, it simply adds them to the Site Collection Administrators group. Although the control itself is public, several of the dependent methods are not, so extending the control with a custom implementation that works properly isn't feasible, and completely replacing it with a custom Site Definition is a lot more trouble than it's worth. Instead, we can preempt the behavior of this control by adding a custom redirection control of our own to the page using the delegate control mechanism of SharePoint.
If you've never worked with delegate controls before the underlying principle is simple: they are server controls which get "stapled" to a parent control to provide a method for injecting code into each page in a site, site collection, web application or farm (depending upon how they are scoped). By selecting one of the out of the box controls in the default master pages (or a customized master page with similar markup) that accepts multiple child controls we can add our own logic to the page at runtime. Using this method we can write our own redirection control which checks the group membership of the current user and, if they are a site collection administrator but not a site owner or secondary contact, redirect them to a page which doesn't have the PersonalSpaceRedirect control – like the default OneDrive "Documents" library.
[NOTE: Delegate controls are full-trust code and therefore not compatible with the SharePoint 2013 App Model or Office 365]
Creating a delegate control is pretty simple (refer to this link for a step-by-step walkthrough). Create a new empty SharePoint project in Visual Studio 2013 using the "Full Trust" option and add a new class. Then override the OnInit method with the code you want to run or a method reference (I prefer using method references whenever possible for testability purposes):
using System;
using System.Web;
using System.Web.UI.WebControls;
using Microsoft.SharePoint;

namespace BinaryWave.SP.MySite
{
    public class PageRedirector : WebControl
    {
        protected override void OnInit(EventArgs e)
        {
            base.OnInit(e);
            RedirectUser();  // method defined below
        }
    }
}
Next, add a new method with logic to check the user's group membership and redirect them to the "Documents" library.
protected void RedirectUser()
{
    try
    {
        SPWeb web = SPContext.Current.Web;

        // Only act on My Site personal sites
        if (web.WebTemplate == "SPSPERS")
        {
            SPSite site = web.Site;
            SPUser user = web.CurrentUser;
            string targetUrl = web.Url + "/Documents";
            string welcomeUrl = web.RootFolder.WelcomePage;

            // Site collection administrators who are not the site owner get
            // sent to the Documents library instead of the home page (and
            // its PersonalSpaceRedirect control)
            if (web.UserIsSiteAdmin || site.UserIsSiteAdminInSystem)
            {
                if (site.Owner.LoginName.ToLower() != user.LoginName.ToLower())
                {
                    // Only intercept requests for the welcome (home) page
                    if (HttpContext.Current.Request.Url.AbsolutePath.ToLower().EndsWith(welcomeUrl.ToLower()))
                    {
                        HttpContext.Current.Response.Redirect(targetUrl, false);
                    }
                }
            }
        }
    }
    catch (System.Exception ex)
    {
        // Never let the delegate control break page rendering;
        // log the exception and carry on
        System.Diagnostics.Trace.WriteLine(ex.Message);
    }
}
You can then add an empty SharePoint element to the project and edit the Elements.xml file to specify which control on the page your new delegate control will be stapled to. I chose "AdditionalPageHead" as it normally can be found at the top of a default master page (if you are using a custom master page you may need to alter the control reference). Note that you will need the full assembly name and public key token values for your project – you can get these by compiling the project and using the Strong Name Tool in Visual Studio.
<?xml version="1.0" encoding="utf-8"?>
<Elements xmlns="http://schemas.microsoft.com/sharepoint/">
  <Control Id="AdditionalPageHead" ControlAssembly="BinaryWave.SP.MySite, Version=1.0.0.0, Culture=neutral, PublicKeyToken=3a29866fd9ac8366" ControlClass="BinaryWave.SP.MySite.PageRedirector" />
</Elements>
A Feature will have already been created in the project once the empty element was added – edit this Feature to change the name, description, etc. If you have chosen to configure your My Site Host as a separate web application then the Feature should be scoped to "Web Application" and deployed to that specific web application. If your My Site host is in the same web application as your primary content then you may want to add some additional code to prevent the control from executing its payload on every single site in the web application; likewise, if you are using a custom web template or site definition then you'll want to change the reference to "SPSPERS" in the redirection method.
Remember to add Safe Control entries for the project component which contains your Elements.xml file (from the Properties panel, expand the Safe Control Entries collection and Add a new entry with the proper settings) then add the solution to SharePoint and deploy it to the target web application. You can test it by adding a user to the Site Collection Administrators group for a My Site then attempting to load the My Site home page – the control should kick in and redirect you to that user's "Documents" library. There are probably a number of variations on this approach that would enhance the functionality but this is a quick and simple way to solve the "User not found" issue with Person.aspx.
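For reference, the entry added via the Safe Control Entries collection ends up in the package looking something like the following (the assembly and namespace values here match the example project and the token is illustrative; adjust all of them to your own build):

```xml
<SafeControl Assembly="BinaryWave.SP.MySite, Version=1.0.0.0, Culture=neutral, PublicKeyToken=3a29866fd9ac8366"
             Namespace="BinaryWave.SP.MySite" TypeName="*" Safe="True" SafeAgainstScript="False" />
```

Without a matching Safe Control entry SharePoint will refuse to instantiate the delegate control at runtime, so if the redirect never fires this is the first thing to check.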
The full source code for this solution is available here (requires Visual Studio 2013). If you just want the farm solution for deployment in your environment the WSP file can be downloaded separately. If you choose the latter option, please test it in a development farm first as your configuration might be different.
Take the trouble out of troubleshooting.
Anyone who has ever tried to do an in-place upgrade from SharePoint 2007 to 2010 knows what a complete nightmare that process could be; without much effort you could easily destroy a farm and spend endless hours trying to rebuild from scratch. The process was so fragile and prone to failure that in the 2013 release Microsoft removed the option altogether, allowing only content database detach and re-attach migrations. This greatly improved the chances of success and is, for the most part, a pretty seamless operation for the majority of deployments. But every now and then you come across a strange configuration that throws a spanner into the works and turns a simple migration into a huge headache.
Case in point. I recently received a distress call from a user who was trying to migrate a single content database from 2010 to 2013 and getting all sorts of strange results. Although the migration appeared to succeed, all the content was over a year old – documents, sites, list items, customizations – everything was out of date. Initially, it was thought that a second database had been mounted at some point during the preceding year but that wasn't the case – trolling through the SQL server backups it was obvious that there was only ever one database for that web application. Since database corruption is always a possibility, they tried restoring and mounting successive backups going back several months but nothing changed – the old data kept showing up. Running out of time in the assigned maintenance window, the administrator then tried restoring the original database back to 2010 and the same thing happened – old content showed up and the new stuff had simply vanished. Now they really had a problem as both environments, the original 2010 farm and the new 2013 farm, were completely unusable.
Curious as to what might have gone sideways, I had them check the AllDocs and AllLists tables in a copy of the current content database for the latest TimeLastModified value to see if it matched up with the results in the user interface (NOTE: This was an offline copy and not a production database – you should never go poking around in production SharePoint databases). That's when we got a real shock. The database actually contained current lists, documents and list items! Although we couldn't see them when viewing Site Contents, there they were plain as day in the database, with the proper metadata values and template associations. And they weren't hidden, either – attempts to get at them programmatically also failed. Something really weird was going on.
After replicating the steps they took a few times on my own, and getting the same results, I decided to have a look at the upgrade log files. Turns out that even though no errors were displayed during the mount operation there were actually a small number of errors encountered during the upgrade process. The first one I came across made it seem as if whole site collections were missing:
07/20/2014 15:49:45.68 powershell (0x0EAC) 0x20C8 SharePoint Foundation Upgrade SPContentDatabaseSequence ajxkz ERROR Database [WSS_Content] contains a site (Id = [aa3fd23b-5c67-4996-a7c6-773d450945d8], Url = [/sites/abc]) that is not found in the site map. Consider detach and reattach the database.
Well that was obviously nonsense as I knew for a fact that the site collection did exist – I could navigate to it after the upgrade completed. It was followed by a warning indicating that "orphaned sites could cause upgrade failures". Um, yeah, I suppose they would, except there were no orphans in this case – the site really did exist. And then a few lines later I found this little gem:
07/20/2014 15:49:45.68 powershell (0x0EAC) 0x20C8 SharePoint Foundation Upgrade SPContentDatabaseSequence ajxk3 ERROR Database [WSS_Content] contains a site (Id = [aa3fd23b-5c67-4996-a7c6-773d450945d8], Url = [/sites/abc]) whose url is already used by a different site, in database (Id = [973995d8-d187-42c3-890e-04031a48811e], name = [WSS_Content]), in the same web application. Consider deleting one of the sites which have conflicting urls.
Huh? How could two site collections have the same URL? Surely that was nonsense also but just to be sure I ran a quick query against the AllSites table – and stared in disbelief at the results. There were five rows in the table when there should only have been three (that's how many site collections showed up in Central Administration for that web application both before and after the upgrade process). Where did the extra rows come from and what sites did they refer to? Well, as it turned out, they referred to the exact same site collections but with an earlier TimeCreated value. Somehow, site collections had been created, then later re-created with the same URL's, without the old ones being removed from the database. The upgrade operation was obviously using the earlier values when it updated the schema and object associations, which explained why objects existed in the database tables that weren't exposed in the UI – it was just ignoring references to site GUID's that didn't match the two it selected. A quick look back at the AllLists table confirmed it, as tp_SiteId did in fact refer to the earlier site collections.
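As a sketch, a query along these lines will surface the duplicate rows (run it only against an offline copy of the database, never production; AllSites does expose FullUrl and TimeCreated columns in the 2010/2013 schema):

```sql
-- Find site collections that share a URL in an *offline copy* of the content database
SELECT FullUrl,
       COUNT(*) AS Instances,
       MIN(TimeCreated) AS OldestCreated,
       MAX(TimeCreated) AS NewestCreated
FROM dbo.AllSites
GROUP BY FullUrl
HAVING COUNT(*) > 1;
```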
Now that I knew what was happening, the fix was easy. First, I removed the existing site collections which were based on the old instances, using the Remove-SPSite command. I then dismounted the database and mounted it again. This time, with the old site references gone from the AllSites table, the proper site associations were made and the correct content, including all the lists, libraries, documents and items, was restored. Problem solved. The only remaining mystery was how additional site collections were created with the same URL as ones that were already in the database – this shouldn't be allowed to happen. I still don't have an answer for that but at least the customer was able to carry on with their migration without suffering any data loss. I'll put that one in the "win" column.
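For reference, here's the remediation sequence sketched as PowerShell (the URLs, database name, and server name are placeholders for this particular environment; Remove-SPSite, Dismount-SPContentDatabase and Mount-SPContentDatabase are the standard SharePoint cmdlets):

```powershell
# Remove the site collections that were bound to the stale AllSites rows
Remove-SPSite -Identity "https://portal.contoso.com/sites/abc" -Confirm:$false

# Detach the content database, then re-attach it so the mount operation
# re-resolves the site map against the remaining AllSites rows
Get-SPContentDatabase "WSS_Content" | Dismount-SPContentDatabase
Mount-SPContentDatabase -Name "WSS_Content" -DatabaseServer "SQL01" -WebApplication "https://portal.contoso.com"
```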
PremierPoint Solutions (formerly SharePoint Solutions) has recently released an updated version of their Site Provisioning and Governance Assistant product for on-premise deployments of SharePoint 2010 and 2013. SPGA is designed to address a common concern in enterprise SharePoint deployments – administrative control over site growth and proliferation. Anyone who has ever tried to contain the spread of SharePoint in a manageable and consistent way will appreciate how difficult it can be to enforce governance while at the same time allowing users to use the platform to its full potential.
I was fortunate to have a chance to work with the PremierPoint team to bring the new release of SPGA to market. In conjunction with their Extranet Configuration Manager product, SPGA provides administrators with the ability to define a wide range of rules surrounding the site creation process that support governance policies and objectives. Beyond simple workflows and item-level approvals, SPGA provides a toolbox for implementing flexible and repeatable processes for just about any scenario. This latest release includes some new features and functionality but the real news is that there is now a completely free version. You can download and install it on any SharePoint 2010 or 2013 farm and use the basic out-of-the-box actions without any time-limited expiration. I'm a big fan of the so-called "freemium" model and glad to see more SharePoint ISV's adopting it. Obviously, each vendor would like you to purchase their product after giving it a spin but if you are satisfied with the no-cost features the hope is that you will provide some word-of-mouth validation in return.
If you are so inclined, give SPGA a try and see how it works for you. You can download it here. If you give it a go be sure to let Jeff and the team at PremierPoint know how you like it and any improvements they can make.
Another year, another TechEd over and done with. It was exciting to have a big technology conference back in the great State of Texas and the City of Houston did us all proud. Great times with great friends and lots of interesting discussions regarding SharePoint, Office 365, Azure, and a bunch of other cool products & technologies.
For those who attended my session on OAuth in SharePoint 2013, links to the slides and code samples can be found below. Thanks to everyone who dropped by, especially the four brave souls who volunteered for my human Token Handler demonstration. Many thanks!
See you next year!
Another great SPTechCon has come and gone. As always, the team at BZ Media did a great job putting on a top-notch event. Many thanks to everyone who attended my sessions and for all the great questions. Links to the slide decks and demo code can be found below. See you all at the next one!
Get Some REST - Taking Advantage of the SharePoint 2013 REST API
Who are You and What Do You Want? Working with OAuth in SharePoint 2013
Rev Your Engines - SharePoint 2013 Performance Enhancements
Configuring Search in SharePoint 2013 can be a tricky process that is best accomplished via PowerShell scripts. For starters, those messy database names with GUID's in them that get created from UI provisioning are just hideous, but the real issue is that a proper topology (meaning search components running on more than a single machine) can only be deployed via PowerShell cmdlets. Despite our best efforts to script the entire process and avoid the kind of small mistakes that lead to endless hours of frustration, it's inevitable that some small setting or configuration step will crop up that creates a giant headache.
Take, for example, the new "SPSearchDBAdmin" role. This role, which didn't exist in 2010, is added to each search database when it is created in SQL server. If you are following best practices and assigning service accounts for search operations (one for administration, one for crawling, and neither should be the SharePoint Farm or Admin accounts), the account you assign as the Search admin will be added to the SQL logins and given the "public" role. That's all well and good for least privileged purposes but that role alone is insufficient for the Search application to function. Unfortunately, there's no warning about this when the Search service application is created – provisioning will succeed but nothing really works. In order to kick Search into gear, you first need to assign the "SPSearchDBAdmin" role to the Search admin account in SQL server.
Assigning the SPSearchDBAdmin Role in SQL Server Management Studio
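If you prefer to script the permission change rather than click through Management Studio, the assignment looks something like this in T-SQL (the database name and service account are placeholders; repeat for each Search database):

```sql
-- Add the search admin service account to the SPSearchDBAdmin role
USE [Search_Service_Application_DB];
EXEC sp_addrolemember N'SPSearchDBAdmin', N'CONTOSO\svc-searchadmin';
```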
Also bear in mind that the Search admin account requires read/write permissions to the folder in which the index files reside. As this account should *not* be a local administrator it's very likely that it won't have access to the folders that hold the primary and replica index files. Be sure to assign the appropriate permissions on each server in the topology which contains an index partition (the default location is "C:\Program Files\Microsoft Office Servers\15.0\Data\Office Server\Applications" which, ideally, should be changed as part of the provisioning process). Possible error messages which indicate your search admin account may not have the correct SQL or folder permissions include:
"Content Plugin can not be initialized - list of CSS addresses is not set."
"Unable to retrieve topology component health states. This may be because the admin component is not up and running"
"Topology activation failed. No system manager locations set, search application might not be ready yet"
"Could not access the Search database. A generic error occurred while trying to access the database to obtain the schema version info."
There are a lot of blogs, forum posts, and articles with all sorts of advice on how to deal with these errors, most of which prescribe repetitive un-provisioning and re-provisioning of service applications. Although those solutions may apply to your environment at some point, before going down that road first ensure that the Search admin account has the proper database and file permissions, as no amount of provisioning will overcome basic security limitations.
(Note: For a good walkthrough on Search provisioning via PowerShell, refer to this post from Ryan Bushnell and the Search cmdlet reference on TechNet)
For several years now, Steve Smith and the gang at Combined Knowledge have hosted one of the best SharePoint conferences anywhere in the world. Under various guises as "SharePoint Best Practices Conference", "International SharePoint Conference", or "SharePoint Evolution Conference", this annual London event is something that both the attendees and the speakers look forward to – the information is top-notch, the venue perfectly suited to a mid-size event and the entertainment is out of this world. Unfortunately, due to the timing of the Microsoft SharePoint Conference in Las Vegas this year, the event couldn't be scheduled during April, and since the Queen Elizabeth II Conference Centre (just across the way from the Houses of Parliament and Westminster Abbey) is a much sought-after location, a later date was logistically infeasible.
So what to do? Well, if you're Steve Smith, you don't just throw in the towel and wait until next year. You take the show on the road! Instead of bringing a bunch of people to a single location, you pack up the whole shebang into a fleet of coaches and send it all across the United Kingdom for three weeks. A crazy idea, to be sure, but no more crazy than putting on a three day event that starts on day one with dozens of top-notch IT Pros and Devs working independently and ends on the final day with a complete set of end-to-end solutions (which we managed to pull off at the International SharePoint Conference 2012 in case you missed it). So why not give it a go?
And that's exactly what we're going to do. Starting on June 9th, 2014, the SharePoint Evolution Roadshow kicks off in Cardiff, Wales, for a full day jam-packed with sessions from many of your favorite speakers in the global SharePoint community. The show then moves on to London, Cambridge, Birmingham, Nottingham, Manchester, Leeds, and Newcastle, then swings up to Scotland into Edinburgh and Aberdeen, finally wrapping up in Belfast and Dublin on the 24th & 25th of June (I'll be joining the tour for the last portion up north). No matter where you live in the UK an Evolution show is going to be in your neck of the woods this coming June (unless you reside near Fair Isle or Penzance, that is). Even better, the lineup rotates from location to location with only a few speakers going to every city on the tour. So the content will change at each venue, giving you an opportunity to attend more than one event without much overlap. How cool is that?
But the best part is the price. It's only £99 per event! The average cost of a major conference is over £250 per day, making this one of the most affordable learning opportunities anywhere. You just can't beat it. So get out your calendars and AA road planner – it's time for some SharePoint, Evolution style, like nobody else can do it.
For more information and registration details, visit the SharePoint Evolution Conference website.
Today the good folks at Combined Knowledge and I received some excellent news – our Support+ app was approved for the SharePoint Store and is now available for download. Although I've been working on SharePoint apps since mid-2012 (yeah, it really has been that long) this is the first commercial app that I've had published in the store. Fellow SharePoint MVP Steve Smith and his crew in the UK did a tremendous job putting all the content together and really making the app look great. My challenge was to take what they had built and turn it into an app for both Office 365 and on-premise SharePoint 2013. There were a number of challenges involved, most notably the limitations of the store licensing model and differing capabilities in the two deployment models, but in the end it was a valuable learning experience.
If you have an Office 365 tenancy Support+ is definitely worth checking out. The app is free and contains a wealth of content that you can utilize with no time restrictions or other limitations. Feel free to take it for a spin and kick the tires. We hope you enjoy it. While you're in the store, take a few minutes to check out some of the other great apps that are available. If you're a SharePoint developer, now is the time to get on the bandwagon and create the next best app the world has ever seen!
The next SharePoint Conference is almost upon us. From March 3rd through the 6th of 2014 the greater SharePoint community will be descending upon Las Vegas once again to collaborate, communicate and commiserate. This year I'll be going a bit easy on the speaking so I can spend more time networking and getting a feel for what's happening out in the real world but I do have one session on the agenda:
Developing an Intranet on Office 365
Learn how to leverage the power of the cloud to build dynamic, informative and engaging Intranet solutions with Office 365. Get real-world guidance and best practices for driving user adoption and engagement through powerful features like cross-site publishing, metadata navigation and search-driven content, along with proven techniques for custom branding, interface extensions, disaster recovery and lifecycle management.
For the rest of the event I'll be lurking around the exhibitor hall and, of course, at the various parties and social gatherings. I'll be going back to my roots a bit and spending more time in the vicinity of the Office/SharePoint kiosks this time around. I'm also looking forward to engaging in conversations around the 2013 app model, cloud development, SmartTrack, and some exciting new offerings from partners such as AtomOrbit, Combined Knowledge and The SharePoint Shepherd. Keep an eye out for the hat and feel free to say "Howdy!".
P.S. - If you haven't registered yet, get to it. This event will almost certainly sell out and you don't want to miss it!
UPDATE 3/5/2014: Thanks to everyone who attended the session. The slide deck can be viewed on SlideShare and the code samples can be downloaded here. Viva Las Vegas!
With the release of the app model in SharePoint 2013, and all the subsequent marketing hype surrounding Office 365 and the determination of Microsoft to push their customers into the cloud, many have predicted that the end of full-trust farm solutions was nigh. For a while, especially if the rumors swirling within the greater SharePoint community were to be believed, it even seemed as if on premise deployments were going to become a thing of the past. Thankfully, after thoroughly confusing the market with mixed messages and an over-emphasis on all things cloudy, Microsoft has finally put these rumors to rest and, with the announcement of the impending Service Pack 1 release for 2013, made it clear that SharePoint as we know it is not going away anytime soon.
So what does this all mean for developers? Should you continue to invest in full-trust, on-premise solutions or move to the new app model? The answer, of course, is "it depends". It all comes down to your requirements – there are some scenarios where apps are the right answer and others where full trust solutions will be more appropriate. But let's be clear about one thing – isolated code execution via remote interfaces is the future of SharePoint development. Period. That doesn't mean you need to throw out all your existing code and start over from scratch but it does mean that you need to start thinking about, and planning for, a new way of doing things. Getting a head start on it now will pay dividends in the long run so there's no reason to delay – if you're not yet familiar with building SharePoint apps, then now would be a good time to get started.
But the question still remains – if both models are viable then how do you determine which is the right one? The first step is to understand the direction your organization is heading. If you are a small, nimble company with a significant investment in cloud technologies already, then odds are you will soon be moving to Office 365 for core collaboration features. This means you should start designing all your future solutions as apps, since that will be the only model available to you in the cloud. However, if you are a larger organization that doesn't move as quickly, or you have good reasons for keeping your collaborative environment in house (either on your own hardware or hosted by a managed service provider), then you have a lot more to think about. Knowing that you will continue to have an on-premise deployment of some sort makes it easier to chart a course for supporting existing full trust solutions but what about green field projects? It's easy to keep doing what you already know how to do but what if the organization adopts a hybrid approach sometime in the near future? How would that impact your solution architecture? Could you refactor into an app without causing massive delays or cost overruns?
In many cases, the application requirements themselves dictate which model to use. One of the most powerful aspects of SharePoint (and, arguably, one of the key reasons why it has been so successful) is the rich API's which provide seemingly endless opportunities for customization. As a middle-tier platform for enterprise web applications SharePoint stands alone in terms of flexibility and extensibility (not to mention the enormous set of features available right out of the box). If you are tasked with creating a web-based line of business application with integrated authentication, data storage, social interaction and collaborative capabilities, you would be hard pressed to find a better framework to build upon. Depending on how deep you need to go into the SharePoint stack, you may find that only the server-side API's will suffice – typical scenarios include public-facing web sites, scheduled task execution via timer jobs, web or enterprise content management functionality, integration with highly-secure backend systems, extensive interface customizations, long running operations on large content-based data sets, email-enabled lists, custom workflow actions, service applications and so on. If your application requirements fall into any of these categories, or you simply need access to a set of API's not covered by the provided remote interfaces (CSOM/REST), then a full trust solution is the correct – and only – option. Design and build it with confidence that, at least for the time being, you're on safe ground in terms of future supportability.
Before you sign off on the final design spec for another big batch of server-side code, though, stop and ask yourself this question: how much of the code actually has to be in SharePoint? After more than a decade writing custom solutions for SharePoint, I've found that less than 20% of the code I've written actually uses the SharePoint APIs. The vast majority of it is just standard .NET components and classes. Sure, there have been a few SharePoint-centric solutions that required deep integration with the dark inner workings of the platform, but by and large that's not the case. Try this – revise your application architecture, separating the core functional elements from the desired integration points, and see what you are left with. Based on my experience, I would suggest that in most cases the result would be a blueprint for a provider-hosted app; that is, most of the functionality doesn't require SharePoint at all, and the bits that are left can likely be achieved via the remote APIs. Look back on the last few SharePoint solutions you have built and you might be surprised how many of those could run quite happily on IIS somewhere with just a bit of CSOM to facilitate authorization, perform CRUD operations, etc. Now think about how many standalone .NET apps you have in your organization and how they could benefit from all the SharePoint goodness with minimal code revisions if they were repurposed as provider-hosted apps instead of being rewritten from the ground up as traditional SharePoint solutions.
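To give a sense of just how thin that SharePoint-facing slice can be, here's a minimal sketch using the 2013 REST interface (the REST endpoints and headers shown are standard, but the site URL, list title and token-acquisition details are hypothetical – a real provider-hosted app would obtain an OAuth access token via its context token before making these calls):

```javascript
// Minimal sketch of the SharePoint-facing layer of a provider-hosted app.
// Only this file touches SharePoint; the rest of the app is ordinary code.

// Build the REST endpoint for the items in a named list.
function listItemsUrl(siteUrl, listTitle) {
  return siteUrl.replace(/\/+$/, "") +
    "/_api/web/lists/getbytitle('" + encodeURIComponent(listTitle) + "')/items";
}

// Build the request options for a CRUD operation against that endpoint.
// verb is one of "CREATE", "READ", "UPDATE", "DELETE".
function restRequest(verb, accessToken, body) {
  var headers = {
    "Accept": "application/json;odata=verbose",      // SP2013 verbose OData
    "Authorization": "Bearer " + accessToken         // app-only or user token
  };
  if (body) {
    headers["Content-Type"] = "application/json;odata=verbose";
  }
  if (verb === "UPDATE" || verb === "DELETE") {
    headers["IF-MATCH"] = "*";                       // ignore etag conflicts
    // SharePoint REST tunnels MERGE/DELETE through POST:
    headers["X-HTTP-Method"] = verb === "UPDATE" ? "MERGE" : "DELETE";
  }
  return {
    method: verb === "READ" ? "GET" : "POST",
    headers: headers,
    body: body ? JSON.stringify(body) : undefined
  };
}
```

Usage is a single call per operation, e.g. `fetch(listItemsUrl(siteUrl, 'Tasks'), restRequest('READ', token))` to read items – everything else in the application can remain plain, SharePoint-agnostic code.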
The truth behind the marketing hype is that no matter how hard the cloud providers push their service offerings, most organizations, especially larger ones, aren't prepared to make such drastic changes so quickly. Even if they want to move wholesale into the public cloud, many organizations aren't ready yet, and some never will be (finance, defense, healthcare, government, transportation, etc.). Full-featured, on-premise SharePoint is too valuable and too deeply ingrained in many organizations to simply be swapped out for a much more limited feature set in the cloud, no matter how attractive the pricing or outsourced infrastructure may be. For those customers, full trust solutions will continue to be an integral part of the SharePoint story, even if they end up deploying some sort of hybrid architecture (which, in my opinion, is the most likely scenario). This is as it should be – customers who make that kind of investment in a platform should be able to take advantage of its full capabilities (within supportable boundaries, of course). But that doesn't mean full trust is the only option – the app model offers a solid value proposition for certain scenarios and has the added advantage of being portable to the cloud if and when that move happens.
In summary, my advice is to build apps when you can, full trust solutions when you must, and get on board with the current programming trends. It can't hurt and you might be surprised how much you can accomplish with the 2013 app model. Plus, the more SharePoint developers we have building apps, the more we can guide and influence the future direction of the platform. And that's a good thing!
SharePoint is Talking. Are you Listening?