​The SharePoint Cowboy


December 06
Creating MinRole Compliant Custom Services for SharePoint 2016

One of the most significant new improvements in SharePoint 2016 is the MinRole functionality. Any SharePoint administrator who is responsible for maintaining a farm consisting of more than three or four servers can attest to the difficulty of maintaining the proper allocation of services on each server. In a properly distributed farm architecture, each server should serve a specific role, such as search, distributed cache, application, or web front end, and the services assigned to that role should be the only ones running on each server. This makes the overall administration of the farm much simpler, as there is never any question as to which machine is doing what and what impact pulling one server out of, or adding another into, the environment will have.

Although this is a desirable state for any multi-server farm, in the past it has been difficult to achieve: farm roles were ill-defined and there were no management tools to ensure that servers which fell out of compliance by running services not allocated to their role could be reset to a proper state automatically. The introduction of MinRole changed all that by forcing a decision, during the installation process, on which role a server should be assigned when it is added to the farm. This role assignment is then enforced post-provisioning by automated health check rules which identify servers running services that do not correspond to their defined role and return those servers to the desired state by automatically stopping non-conforming services. In essence, MinRole allows the SharePoint farm to monitor itself and take corrective action in an effort to improve functional stability.

[MORE INFO: Visit the following link for an overview of the benefits of MinRole, along with guidance for planning and managing a MinRole deployment:  https://msdn.microsoft.com/en-us/library/mt346114(v=office.16).aspx]

In order to support such fine-grained control over service allocation, the core services within SharePoint had to be modified to recognize the redefined server roles and comply with the new health check rules. Customers upgrading an out-of-the-box deployment from 2013 to 2016 automatically gain the benefits of MinRole compliance for all the default services; however, customers running custom services, whether developed internally or provided by a third party vendor, face a more challenging upgrade path. Custom services which ran fine on all servers in a 2013 environment may fail to provision in a 2016 MinRole environment and those that can be provisioned are likely to be disabled the first time the health check rules are executed. To prevent this from happening, developers are encouraged to update their custom service and service instance code to be MinRole compliant.

At first glance, this should be a simple matter of checking for server roles and returning a value indicating whether or not the service instance can run on the specified server. That is, in fact, the end result; however, getting there is not quite as straightforward as adding a new method to a service instance class, evaluating a role type enumeration and returning a boolean value. There are several steps that have to be performed in the right order to get everything working properly so the service can be provisioned initially and pass the health checks once it’s running in the farm.

As of this writing, the documentation for adding MinRole functionality to a custom service consists of a sparse bit of text at the end of a single MSDN article: https://msdn.microsoft.com/en-us/library/mt743705(v=office.16).aspx. There are no examples or sample code to help get developers started down the right path. Even if the simple steps in the article are followed, the resulting modifications will not actually result in a service instance that can be successfully provisioned in a compliant state. To achieve that, a few more steps are required. Below you will find a step-by-step guide to enabling MinRole compliance for any custom service with code samples and explanatory descriptions. At the end of the walkthrough is a link to the source code for a very basic service project that can be used as a template for applying the modifications to your own projects.


Step 1: Preparation

In order to test your code modifications, you will need access to a multi-server farm with at least one server in a defined role other than “Custom” or “SingleServerFarm”. This is the first hurdle most developers will encounter, as a standard SharePoint development environment consists of a single server running “all up” with most, if not all, of the default services already provisioned. This won’t be sufficient, as the MinRole compliance checks are ignored on servers running the “Custom” role; deployment will always succeed in this scenario, which is not a true reflection of what will happen in a production farm. So adding a second server to the development/test farm running one of the defined server roles other than “Custom”, “SingleServerFarm” or “Invalid” is critical.
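
If you want to confirm which role each machine in the test farm actually reports before deploying anything, a quick check against the farm object model will do. The following is a minimal sketch (the FarmTopologyReport class is hypothetical, not part of the sample project) which assumes it runs in a full-trust context on a server joined to the farm:

using System;
using Microsoft.SharePoint.Administration;

public static class FarmTopologyReport
{
    public static void WriteServerRoles()
    {
        // SPServer.Role returns the SPServerRole value assigned when the server joined the farm.
        foreach (SPServer server in SPFarm.Local.Servers)
        {
            Console.WriteLine("{0}: {1}", server.Name, server.Role);
        }
    }
}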

With a second server (or more) in the farm, service provisioning and health check rule execution for each service can be tested. The next action is to FULLY retract and remove all existing service applications (if your service requires them), service instances AND SERVICES. This is an important distinction as any leftover deployment artifacts will skew the provisioning tests. Retracting the WSP may remove the service instance entries from the Services on Server page but unless there is code in an event receiver to also unprovision and delete the underlying service, it will remain resident in the farm until manually removed with additional code or PowerShell commands. Bear this in mind also when updating the production farm after the custom services have been made MinRole compliant.
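
As a sketch of what that cleanup code might look like (the CleanupUtility class name and the service name parameter are hypothetical; your own removal logic may differ), the following unprovisions and deletes every instance of a service and then deletes the service itself so the next deployment starts from a clean baseline:

using System.Collections.Generic;
using Microsoft.SharePoint.Administration;

public static class CleanupUtility
{
    public static void RemoveService(string serviceName)
    {
        // Look the service up by name in the local farm's service collection.
        SPService service = SPFarm.Local.Services.GetValue<SPService>(serviceName);
        if (service == null)
        {
            return;
        }

        // Snapshot the instances first, since deleting them modifies the underlying collection.
        List<SPServiceInstance> instances = new List<SPServiceInstance>();
        foreach (SPServiceInstance instance in service.Instances)
        {
            instances.Add(instance);
        }

        foreach (SPServiceInstance instance in instances)
        {
            if (instance.Status != SPObjectStatus.Offline)
            {
                instance.Unprovision();
            }
            instance.Delete();
        }

        // Remove the underlying service so it no longer appears in Get-SPService output.
        service.Delete();
    }
}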


Step 2: Server Role Verification

In a MinRole deployment, service instances are assigned to roles using the SPServerRole enumeration (https://msdn.microsoft.com/en-us/library/microsoft.sharepoint.administration.spserverrole.aspx). This enumeration was available in SharePoint 2013 but it has changed slightly in 2016; there are now seven potential server roles: 

  • Invalid (typically used for database servers)
  • WebFrontEnd
  • Application
  • SingleServerFarm
  • DistributedCache
  • Search
  • Custom

(NOTE: The “SingleServer” role from 2013 is now obsolete.)

Each service instance should be assigned to one or more of these roles by overriding the ShouldProvision method of the base SPServiceInstance class. This method takes a server role enumeration value as a parameter and returns a boolean indicating whether or not the service instance should run on the specified server type. For example, a service instance that runs on web and application servers should include code similar to the following:


public override bool ShouldProvision(SPServerRole serverRole)
{
     return SPServerRole.Application == serverRole || SPServerRole.WebFrontEnd == serverRole;
}

Bear in mind that the “Custom” role does not have to be specified as all service instances will run on servers of this type. If no value is specified or this method is omitted, the base implementation will return ‘false’. This method does not take the place of optional methods that check for specific service applications or other service instances. If, for example, a custom service should only run on servers that have the “Central Administration” service instance provisioned, then an additional check is required in a separate method called during provisioning (an example of such a method, named “IsSupportedServer”, can be found in the linked sample project).
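
For reference, a minimal sketch of such a check is shown below. The sample project’s IsSupportedServer implementation may differ; in particular, matching the Central Administration instance on its TypeName is just one possible approach and is an assumption here, not a documented pattern:

// Candidate for inclusion in the custom service instance class (DemoServiceInstance in the sample).
public static bool IsSupportedServer(SPServer server)
{
    // Database servers (role "Invalid") never host the service.
    if (server == null || server.Role == SPServerRole.Invalid)
    {
        return false;
    }

    // Only allow servers that already host an online Central Administration instance.
    foreach (SPServiceInstance instance in server.ServiceInstances)
    {
        if (instance.TypeName == "Central Administration" && instance.Status == SPObjectStatus.Online)
        {
            return true;
        }
    }

    return false;
}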


Step 3: MinRole Integration

The service class itself must also be modified so validation calls for MinRole compliance return the correct result. This is achieved by setting the AutoProvision property of the service class (https://msdn.microsoft.com/en-us/library/microsoft.sharepoint.administration.spservice.autoprovision.aspx). When the AutoProvision property on a service returns true, the calling assembly will then pass the farm roles to the ShouldProvision method of the service instance.  For each true value returned from the service instance, the service will be provisioned on each server of that type in the farm. If it returns ‘false’, the service will be provisioned only on servers with the “Custom” role.

There are several important factors relating to the use of the AutoProvision property that developers need to be aware of. First, if a service application is associated with a service then the value of AutoProvision is treated as always returning ‘true’. If there are no service applications then the value is treated the same as the default from the base SPService class which is ‘false’.

Second, health check rules have no effect in cases where the server role is “Custom”. All service instances on a “Custom” server will remain provisioned when the rules are executed; likewise, if they are in an unprovisioned state then no attempt will be made to provision them. This is contrary to all other role types and can lead to much confusion in a development environment, where the primary server almost always has the “Custom” role; it is not unusual to see a service instance being provisioned and unprovisioned on some servers and not others during testing (and, to complicate matters further, it is common for an instance that has been unprovisioned as the result of a health check to get blocked from re-provisioning in Central Administration). In these cases, PowerShell can be used to fully unprovision and remove the offending service instances and services.

Third, the timing of setting the AutoProvision property within the provisioning lifecycle is critical. A common practice for avoiding duplicate service instance provisioning is to perform a check in the feature receiver or other code to evaluate the “Status” property of the service; if it is set to “SPObjectStatus.Online” then provisioning can be skipped, otherwise provisioning proceeds. To make this work correctly, the service class should have its “Status” property set to “SPObjectStatus.Offline” in the default constructor. This is due to the way service classes are instantiated, which requires a new instance to be invoked and subsequently updated before the provisioning methods can be called. If the default state is online then provisioning will never proceed; conversely, if it is never set to online after successful provisioning then duplicates are possible (which cause all sorts of problems with the correct processing of the MinRole health check rules).
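
In practice, that means the derived service class will look something like the sketch below (the DemoService name matches the linked sample project; the constructor bodies here illustrate the pattern rather than reproduce the sample’s exact code):

public class DemoService : SPService
{
    public DemoService()
        : base()
    {
        // Start offline so the Status check in the feature receiver allows provisioning to proceed.
        Status = SPObjectStatus.Offline;
    }

    public DemoService(SPFarm farm)
        : base("DemoService", farm)
    {
        Status = SPObjectStatus.Offline;
    }
}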

Service status plays an important role when it comes to setting the value of the AutoProvision property. If the guidance in the documentation is followed, the AutoProvision property is set in the default constructor. This works fine when only a single service instance exists on any one server but as soon as a duplicate is introduced (by not following the practice mentioned above) the service becomes orphaned and unprovisioning will only succeed by forcibly removing the timer job that the health check rule (or a direct call to the unprovision method of the service) invokes. Furthermore, setting the AutoProvision status in the default constructor of the service is actually out-of-band with respect to the overall provisioning process. It is not required when the service instances are provisioned from the service itself (as seen in the next section) but rather when the MinRole compliance checks are run; if the service instance provisioning fails and the AutoProvision property is already set to ‘True’ then the health checks are going to try and re-provision service instances that cannot actually be activated, resulting in a never ending loop of failed provisioning. Therefore, it is more appropriate to set the AutoProvision property AFTER the initial service instance provisioning when the status is known to be good (or, in the case of the provisioning timer jobs, assumed to be good) and the service switched to an online state. 

(NOTE: Some examples found on various sites and blogs show the AutoProvision property being set on the base SPService class instead of the derived service class. Some even indicate it should be overridden from the base class. This is incorrect. The value should be set on the instance the property is scoped to, which is the derived service class.)


Step 4: Provisioning  

It is worth revisiting the provisioning process for custom services, as the health check rules for MinRole compliance (referred to as “Server role configuration isn’t correct” in the list of rules in Central Administration) will automatically provision and unprovision service instances without user intervention. In addition, the initial provisioning process can cause problems during development if the service itself isn’t removed properly or a localized deployment is assumed without testing for multiple instances (a common mistake when everything runs just fine in the “Custom” role).

The “Provision” method of a service class is really just an entry point for the deployment of service instances. Although an underlying service will exist in the farm (easily seen by running the Get-SPService command in PowerShell), it is actually the service instances that do the real work. There are two ways to provision a service instance: 1) locally, on the server the code is running on, and 2) remotely, via a pre-defined timer job. In a multi-server production environment both are necessary, whereas in a typical single-server development environment the first method is sufficient. These can be combined in a single method that performs both operations. First, the SPServiceInstance.Provision() method can be called for deployment to the local server; then, for remote servers, the instance can be passed to a new SPServiceInstanceJobDefinition for scheduling. Optionally, the code can wait for the jobs to complete, which provides an added layer of assurance when setting the AutoProvision property.

In the sample project, this logic is contained within the ProvisionServiceInstance method of the ProvisioningUtility class (which is only abstracted for maintainability - it can be included directly in the service class code if necessary):


public static void ProvisionServiceInstance(SPServiceInstance instance, bool wait)
{
    if (instance.Status == SPObjectStatus.Offline || instance.Status == SPObjectStatus.Disabled)
    {
        if (instance.Server.Id == SPServer.Local.Id)
        {
            instance.Provision();
        }
        else
        {
            SPServiceInstanceJobDefinition definition = new SPServiceInstanceJobDefinition(instance, true);
            definition.Schedule = new SPOneTimeSchedule(DateTime.Now);
            definition.Update();
        }

        if (wait)
        {
            WaitForServiceInstanceStatus(instance, SPObjectStatus.Online, new TimeSpan(0, 1, 0));
        }
    }
}
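
The WaitForServiceInstanceStatus helper referenced above is not reproduced in this walkthrough; a minimal sketch of what it might do (poll the configuration database until the instance reaches the desired status or the timeout elapses; the sample project’s version may differ) is:

public static void WaitForServiceInstanceStatus(SPServiceInstance instance, SPObjectStatus status, TimeSpan timeout)
{
    DateTime deadline = DateTime.Now.Add(timeout);

    while (DateTime.Now < deadline)
    {
        // Re-read the instance from the configuration database to pick up status changes
        // made by the provisioning timer job running on another server.
        SPServiceInstance current = SPFarm.Local.GetObject(instance.Id) as SPServiceInstance;
        if (current != null && current.Status == status)
        {
            return;
        }

        System.Threading.Thread.Sleep(5000);
    }

    // Timed out; callers can decide whether to treat this as a provisioning failure.
}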


Failure to provision a service instance in a multi-server environment via the provided timer job will most likely result in an error and cause the service instance to fail the MinRole compliance check. As such, it is a good idea to add some additional code in the service class to ensure the timer jobs are properly initiated. This can be done in the “Provision” method of the service, which also creates the service instances on each server in the farm and initiates provisioning (which are actually two separate processes). For example:


public override void Provision()
{
    EnsureServiceInstances();
    ProvisioningUtility.ProvisionServiceInstances(Instances, false);
    ProvisioningUtility.EnableTimerJobs(JobDefinitions);

    Status = SPObjectStatus.Online;
    AutoProvision = true;
    this.Update();
}

private void EnsureServiceInstances()
{
    SPFarm farm = SPFarm.Local;

    if (farm == null)
    {
        throw new Exception("This server is not part of a farm.");
    }

    foreach (SPServer server in farm.Servers)
    {
        if (DemoServiceInstance.IsSupportedServer(server))
        {
            DemoServiceInstance serviceInstance = DemoServiceInstance.GetServiceInstance(server);

            if (serviceInstance == null)
            {
                serviceInstance = new DemoServiceInstance(Guid.NewGuid().ToString(), server, this);
                serviceInstance.Update();
            }
        }
    }
}


Note that each service instance, on each server, is first instantiated and then subsequently updated (which commits the instance), followed by the actual provisioning on the local server and via the timer job as seen in the previous code sample. It may be tempting to invoke provisioning directly on the service instance after instantiating it in the for…each block using the current server reference; however, provisioning will fail with a conflict error if this is attempted. The correct way to provision service instances across multiple servers is by creating a new timer job instance using the SPServiceInstanceJobDefinition object.  Also note that the AutoProvision property is set after provisioning of the service instances is complete and the service status is set to online (both of which are applied simultaneously in the service “Update” method, so the order could be swapped without impacting the final result).
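
The EnableTimerJobs call in the Provision method above is likewise abstracted in the sample project. One possible implementation (a sketch, assuming the only requirement is to flip any disabled job definitions back on and commit them) looks like this:

public static void EnableTimerJobs(SPJobDefinitionCollection jobs)
{
    foreach (SPJobDefinition job in jobs)
    {
        if (job.IsDisabled)
        {
            // Re-enable and commit any job definitions that were disabled while the service was offline.
            job.IsDisabled = false;
            job.Update();
        }
    }
}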


Step 5: Unprovisioning

During development, it is likely that multiple attempts will be made to provision the service while working out bugs in the code. Doing so requires deleting the service itself in addition to removing the service instances. A retraction of the solution package will not achieve this unless code is added to the FeatureUninstalling event in the feature receiver (alternatively, this can be done via PowerShell if a feature receiver is not being used to provision the service to begin with). Failure to remove the underlying service will result in service instance provisioning failures, further complicated by the automatic unprovisioning attempts made by the MinRole health check rule. The following example demonstrates automatic removal of the service in a feature receiver:


public override void FeatureUninstalling(SPFeatureReceiverProperties properties)
{
    DemoService demoService = DemoService.GetService();
    if (demoService != null)
    {
        demoService.Delete();
    }
}
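
The GetService helper used in the receiver simply looks the custom service up in the farm; a minimal sketch (assuming the service was registered under the name passed to its constructor, “DemoService” in the sample) is:

public static DemoService GetService()
{
    // Returns null if the service has never been created in this farm.
    return SPFarm.Local.Services.GetValue<DemoService>("DemoService");
}

The same lookup can be used during feature activation to decide whether a new service object needs to be created or an existing one simply provisioned.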


Step 6: Other Considerations

The remainder of the code in the sample involves standard service-related methods for unprovisioning, deletion, creating a link to a service configuration application page, and so on. Notably absent from the sample is an implementation of the “IsReadyForRoleConversion” method as described in the documentation on TechNet (https://technet.microsoft.com/en-us/library/mt743705(v=office.16).aspx). The text relating to that method was only recently added and the method does not exist in the base SPServiceInstance class in builds prior to November 2016. It can only be assumed that a future update will include this functionality, so code modifications will be necessary when the appropriate update is installed in farms that contain custom services. The method itself is simple enough, producing a custom error message in Central Administration that contains instructions for managing a service instance when a server’s role changes to one the service does not support.


In conclusion, converting existing custom services to be MinRole compliant is not a complicated task, but there are some important points that need to be considered prior to making code modifications. To begin with, developers must have a properly configured environment for testing, which is not likely to be the case in most organizations. Once this hurdle has been cleared, completely removing any existing service artifacts is essential to establishing a clean baseline for MinRole validation. Assuming that provisioning is already being handled in the proper manner for multi-server deployment, the code changes come down to one additional method in the service instance class and a new property setting in the service class. Naturally, if provisioning isn’t being handled properly in the existing code, further modifications will be necessary to change the provisioning behavior and sequence; however, the overall time to complete the task should be minimal if the steps outlined herein are followed in the correct order. Developers should note that these steps differ slightly from those described in the TechNet documentation with regard to the point in the service instantiation and provisioning lifecycle at which the AutoProvision property should be set for the most consistent results.


The sample project can be downloaded from the following GitHub repo: https://github.com/eshupps/SPDemo.Services.MinRole





September 26
New Article - Integrating the SharePoint Framework into Your Custom Development Strategy

​With the introduction of the SharePoint Framework, organizations now have an entirely new option to choose from when designing custom extensions for their cloud and on-premise collaboration solutions. Although this new model provides additional capabilities and addresses many of the feature gaps present in Azure Web Applications and SharePoint Add-Ins, it also makes the development landscape more complicated than ever before, inevitably leading to confusion around which model to use for specific business scenarios. Integrating the Framework into an existing development strategy involves several key decision points and a deeper understanding of why it was introduced and what specific problems it was created to solve.​

Read the full article on SPTechReport >>​




August 17
Announcing the SharePoint Framework Developer Preview

At the Future of SharePoint event earlier this year, Microsoft announced a new customization model known as the SharePoint Framework. Naturally, this led to much speculation and discussion as the bits were only available to a few early beta testers and not much was provided in the way of samples or documentation. A number of us who were involved in the early discussions around this new model have been promoting it heavily, but without much to show, developers have taken a "wait and see" approach.

Well, I have some good news for you - the wait is finally over. The SharePoint Framework is now available as a Preview on GitHub. You can check it out at the following link:

https://github.com/sharepoint​​

There are some things you need to know before jumping into the code and creating the greatest new widget the SharePoint world has ever seen. First of all, this is a Preview - things can (and likely will) change before the final release is ready. More importantly, if you aren't comfortable creating web solutions outside of Visual Studio you have a bit of a learning curve to go through before you can become productive. You'll have to get up to speed using tools like VS Code, Node.js, Yeoman generators and Gulp (hopefully this is a short-term limitation and the engineering team will ship a version fully integrated with the full Visual Studio product soon). Finally, you'll have to get used to testing in the new workbench environment and deploying assets to CDN's instead of SharePoint sites. For a full rundown on all the things you need to know to get started, read this post from Chris O'Brien.

I'll be posting much more on the Framework over the coming months, especially regarding how it fits in with the other SharePoint development models and where it can best be used to enhance new or existing deployments. For now, head on over to GitHub, clone the repo and have a go yourself. 

The future is now. Let's do this!




August 05
Zero Downtime Patching in Action
Whenever I talk about high availability in SharePoint and SQL Server Always-On Availability Groups I inevitably get a bunch of questions about the "zero downtime" patching concept first introduced at last year's Microsoft Ignite conference. There seems to be a great deal of confusion about this topic, with many people wondering how it works, what happens to servers while they are being patched, how database updates are applied, and numerous other potential issues. On the surface, it sounds simple enough but as with everything SharePoint-related the devil is in the details.

Thankfully, some of the great folks at Microsoft who regularly deal with enterprise customer issues have put together a video guide on this topic to dispel any confusion about how it all works. Neil Hodgkinson, Bob Fox and Karl Reigel do an excellent job breaking down the requirements, explaining the functionality and then showing how it all comes together in a step-by-step video walkthrough. This should go a long way towards clearing up any confusion and helping customers understand how to take advantage of this functionality in their highly available SharePoint farms. Job well done, guys - thanks for putting this out there.

Here is the link to the video and accompanying article:








August 01
Think Twice Before Including Video Content in Office or SharePoint Add-Ins

Let us say, for the sake of discussion, that you, being an enthusiastic SharePoint and Office developer, have just created a shiny new Add-In and you are ready to unleash it to the world. As we live in modern times where everything is on YouTube and selfies have been replaced by live video from smartphones, you decide to include some whiz-bang video content in your super-duper new add-in. Good plan. You drop a simple HTML5 video tag into your code and, because you are one of the cool kids who gets this whole cloud thing, you even include the nifty Azure Media Player, encode your content into multi-bitrate streaming video format, and are all good to go. A quick bit of testing on various browsers in Windows 10 and Mac OS X shows that everything is working as expected so off you go to the Seller Dashboard in hopes that the lives of users all across the globe will soon be enriched by your brilliant bit of coding genius.

Imagine your surprise when, after what seems to be a much longer delay than is really necessary, you receive a crushing rejection email that dashes your hopes of impending global domination. The email states that your add-in cannot be approved because you failed to follow the submission guidelines. Specifically, you are informed that you have violated section 4.12.1, which states "Your app or add-in must be fully functional with the supported operating systems, browsers, and devices for Office 2013, Office 2016, SharePoint 2013 and Office 365". It goes on to say that "Add-ins must be compatible with all versions of Internet Explorer 11 and later, and the latest versions of Chrome, Firefox and Safari (Mac OS)". But, you think to yourself, you did, in fact, test on all the various browsers and everything worked as expected. So what gives? If you are lucky enough to be able to get additional details from the Office Store validation team, which will almost certainly not be included in the initial rejection email, you might just get them to tell you something like the following: "We encountered an error in the Office web app, when using your add-in with Internet Explorer 11 on Windows 7".

Wait, what? Windows 7? That was two full OS releases ago! Add-ins weren't even around then so how could you be expected to support an OS that shipped in 2009?!? Ridiculous or not, the 2013 versions of Office and SharePoint are both supported on IE 11 in Windows 7, so you dig out an old Windows 7 disc and spin up a virtual machine to discover that, sure enough, an error gets thrown when the player tries to load your video. Obviously, since IE didn't fully support HTML5 until about the time Windows 10 shipped, the player is trying to fallback to Flash or Silverlight, so you load up both plugins and still get the same error. No matter what you try, you see messages like "One or more blob URLs were revoked by closing the blog for which they were created" or "error: videojs: (CODE:274726913 undefined) [object Object]". And yet, opening the video in a separate tab or window works just dandy, so it MUST be something about trying to play video inside an add-in directly, right?

A quick search through the Office Dev Center produces no results about blocked plugins like Flash or Silverlight. Likewise, no mention whatsoever of any such limitations in the Office Store validation rules. That leads you to think that perhaps it is some sort of security limitation, so you dig up the URL for the Privacy and Security of Office Add-Ins​. Scanning down the page, in a section innocently titled "Web clients", you come across the following text: 

"In supported Web clients, such as Excel Online and Outlook Web App, Office Add-ins are hosted in an iframe that runs using the HTML5 sandbox attribute."

And there's even an image to go along with it:

Office Add-In IFRAME sandbox attribute

And that's when it hits you. Due to the lack of support for the HTML 5 video tag in IE 11 on Windows 7, the player is trying to load your video content in a plugin, which is blocked inside of an IFRAME that only has the raw "sandbox" attribute. But, you think to yourself, surely the smart folks at Microsoft included the "allow-plugins" parameter in their code, right? After all, it's been around for years already, and backwards compatibility almost demands its inclusion. So you pop open an Office web app in your browser, click your add-in link and inspect the source, only to find the following markup in the parent IFRAME:

... sandbox="allow-scripts allow-forms allow-same-origin ms-allow-popups allow-popups" ...

You stare at it in disbelief. Surely, those whiz kids at Microsoft did not make a conscious decision to allow scripts, popups and forms while blocking plugins? And then fail to clearly mention this in the Office Store validation rules documentation? And further fail to point it out in the rejection email when the testers obviously knew your plugin included videos and they were trying to test it in IE 11 on Windows 7? That just couldn't happen, right? 

Welcome to the new way, my friend. They say it's much better than the old way. In the new way, you get to add exclusionary scripts and either block key portions of your content or add a sad little text box telling users on older platforms that they have to manually open a new browser window and copy a hyperlink to view said content. And such things will be poorly documented and excluded from test results so you can spend endless hours trying to decipher vague emails, wondering why you aren't allowed to commercialize your applications in a store where it takes weeks to get any kind of response, strict naming and style conventions are applied to you but not to "official" apps, purchase options are limited, marketing is non-existent, value is arbitrarily decided, and licensing is a complete catastrophe. 

Are we having fun yet???




May 05
New Article - The SharePoint Customization Conundrum

My latest article for SPTechReport is now online:

"Back in the early days of mass SharePoint adoption (circa 2008), the most popular request from customers was to make their shiny new intranet “not look like SharePoint.” It was a reasonable request; after all, the product was created to simplify online collaboration and document management, not to be an internal marketing platform, so the user interface naturally reflected a utilitarian design aesthetic that many found less than appealing. Of course, that didn’t stop customers from taking the platform in all sorts of directions it was never meant to go, the result of which was a proliferation of highly-customized implementations that required a great deal of time, money and custom code to deliver..."

Read the full article on SPTechReport >>​




May 04
Introducing the SharePoint Framework
Today Microsoft officially announced an exciting new set of technologies known as the "SharePoint Framework". Developers everywhere should be excited about this announcement as it means we (finally!) have a proper modern web development experience for SharePoint publishing both on-premises and in the cloud. To learn more about what the SharePoint Framework is and how it functions, have a look at the overview video and read this blog post​ from the SharePoint team.

At this point, things are still in motion and there is no official release date other than "sometime later this year" but there are already a number of things to get really excited about if you are a SharePoint developer. Here are my top five in no particular order:

1. Modern Web Development Experience. We can finally move out of the dark ages of classic server-side ASP.NET development for native customizations. No more WSP's and endless IISRESET operations. We don't even need a SharePoint server to preview our customizations as the new SharePoint Workbench utility provides a sandbox test environment right on our desktop. Everything - and I do mean everything - is HTML and JavaScript based, leveraging the remote API's for platform integration and a whole new set of extensibility components for things like data management, caching and authorization. Even better, it's all open source so developers can use any JavaScript framework they like for deep, natively-supported customizations. 

2. Everything Old is New Again. Unlike Add-Ins and Azure Apps, which are designed to run outside of the pages and parts users normally interact with, the SharePoint Framework has been built from the ground up as a proper publishing platform for in-context customizations. Developers will still be working with pages and web parts, only they'll be doing it in client-side code that is responsive and mobile out of the box. There may be a bit of a learning curve for classic ASP.NET developers but the underlying artifacts are still very similar - build a web part, add it to a page, display data via the API's. This should be much easier to grasp for the average SharePoint developer than things like provider-hosted add-ins and standalone web apps. The knowledge delta is more about tools than it is about components and that's a good thing. Concerned about learning all this JavaScript stuff? Don't be - there will be plenty of guidance to get you up to speed. Heck, the SharePoint team themselves had to learn it in order to build the new site experience - if they can do it then so can you.

3. First Class Customizations. In the add-in model customizations were an afterthought at best, with poorly implemented integration points and barely functional shims. The SharePoint Framework was expressly designed to solve these challenges by making client-side code a first-class citizen within the rendering framework. This isn't just a bolt-on it's the core technology underlying many of the new UI experiences. Developers will be building customizations in exactly the same way the SharePoint team themselves are building them. It doesn't get any more "supported" than that!

4. Microsoft Gets It. No, I'm not trying to be funny - they really do get it. Developers want to use modern frameworks like Angular, React, Knockout and so on. They don't want to spin up full server virtual machines just to create a quick web part. The web has moved on from the old server-side days and it's high time we caught up with what everyone else is doing. Nobody feels that pain more than the engineering team that has to build and maintain all that heavyweight legacy code in the first place. Internally, Microsoft is embracing the modern web and the greater SharePoint development community is the primary beneficiary of that change. And it's only going to get better from here on out.

5. Customizations are Cool (Again). Remember way back in the day when the "in" thing was to make SharePoint not look like SharePoint? (Ok, so it wasn't actually way back in the day - everyone still wants that)  Well, here we are in 2016 and all of a sudden deep customizations are back in vogue. With the SharePoint Framework, developers and designers can work together to build a completely customized intranet from the ground up with exactly the look and feel they want and have it run on-premises or in the cloud. Fully supported. No more fighting with the complexities of the SharePoint page structure, battling interdependent server controls or wading through mountains of conflicting CSS, either - mock it up, code it, style it and go. And you don't even have to know all the arcane inner workings of SharePoint to make it functional. Now that's cool!


So there are my top five things to like about the new SharePoint Framework. No doubt there will be a ton of questions about the particulars as this gets rolled out to first release tenants later this year. But for now I think we can all be thankful that Microsoft has heard our concerns and given us an exciting new platform to build some really cool stuff on top of SharePoint. 



April 22
Configuration Challenges with Office Web Applications, SSL Offloading and Default Zone URL's

Configuring Office Web Applications in a development or test environment is a pretty straightforward process: install the server bits, run a bit of PowerShell to create a WAC farm, set your WOPI bindings in SharePoint, and you're done. But doing so in a full production environment, with high availability, SSL offloading, split DNS, HTTPS redirection, fully-qualified AAM's and multiple web application zones? That's an entirely different ball game.

Consider the following real-world scenario. The SharePoint farm, which has been built following high-availability guidelines for redundancy, has both internal web servers for employees and external web servers for customers. The customer servers are in a DMZ behind a dedicated load balancer, with their own Virtual IP's, but they still have to be reached by internal employees and connect to the rest of the SharePoint farm, so they use a split-DNS scheme to provide name resolution internally and externally. The web applications were created on HTTP, with the appropriate alternate access mappings, as the SSL certificates reside on the load balancer, where any incoming requests from the outside network get automatically redirected to HTTPS but are passed to the servers as standard HTTP.


Office Web Apps Farm in DMZ with SharePoint 2013/2016

In this configuration, internal employees can browse to http://extranet.contoso.com without the need to manage certificates on the servers themselves and without ever transiting the corporate firewall. External users who forget to type "https" before hitting the extranet site get conveniently redirected to a secure connection on a load-balanced IP address and the web server traffic is isolated from the rest of the farm. So far, so good. But what about giving customers and partners the same online document viewing/editing experience as internal users? Wouldn't it be great if they could also take advantage of the Office Web Applications functionality in SharePoint? Yes, it would - but getting it to work properly is going to take a bit more planning than usual.

The first challenge is determining where the WAC servers should reside. For security and traffic isolation purposes it is best to co-locate the WAC farm with the SharePoint servers in the DMZ. Placing them on the internal network will result in direct client connections to internal resources when users open documents in Word/Excel/PowerPoint online; a situation which should be avoided and which would likely fail a security audit. It's one thing for web servers to communicate to the back end via their own connections but quite another when external users are hitting URLs from an outside network that should otherwise be off-limits.

There are, however, some significant ramifications to what is otherwise a clear choice. Since there is a one-to-many relationship between WAC farms and SharePoint farms, it means that the WAC farm must either be located entirely in the external zone or have at least two servers in the internal zone and two in the external zone (for minimal redundancy). Spanning the zones requires virtual IP's on both the internal and external load balancers along with corresponding DNS entries in both zones. Configuring the WAC farm will also require some creative DNS configuration and port rules, as the servers have to be aware of each other and be able to communicate on port 809 (if you are using the standard configuration of HTTP on 809 and HTTPS on 810). Alternatively, the entire WAC farm can reside externally and internal users would simply get flipped over to HTTPS when they try to open documents in the browser.

Configuring user connectivity to the WAC servers is actually quite simple; however, getting the WAC servers to talk to SharePoint is not quite so easy. When WAC receives a client request for a document, it attempts to load the document from the SharePoint site the user was browsing when they made the initial request. If the user is external, that means the request will go from https://extranet.contoso.com to https://wac.contoso.com with the load balancer handling the SSL portion of the communication and passing it to WAC as a standard HTTP request. This means the WAC farm must have been created using the "SSLOffloaded" switch, the WOPI zone set to "External-HTTPS" (so it creates both internal and external bindings), and the WOPI binding configured with the "AllowHTTP" option. WAC then has all the proper settings to make the request to SharePoint for the desired document.

Or does it?

This is where things start to get very interesting. The way WAC functions, it will only make a request to SharePoint on the URL for the web application's default zone. With SSL offloaded at the load balancer, this is usually an HTTP endpoint - in this case, http://extranet.contoso.com. But when the WAC servers in the external zone try to reach this URL, they will receive an IP resolution from the external DNS server that points to the external VIP on the load balancer which, when it receives such a request, automatically flips the "http" to "https". Since WAC doesn't know what to make of a redirect happening in the middle of an attempt to retrieve a SharePoint document, the request will fail and the user will receive the dreaded "Sorry, there was a problem and we can't open this document" error. Adam Rhinesmith, who is a support escalation engineer on the Microsoft Office Web Client and Services team, pointed this out in a blog post back in 2014, but the ramifications of what he was saying aren't immediately apparent until you come across this general scenario.

So how can we allow both internal and external users to load documents in WAC and still maintain some level of traffic isolation? One option is to bypass the load balancers by pointing the WAC servers to internal SharePoint servers which will respond to the HTTP request using host entries on each WAC machine. Similarly, another web server could be added to the farm in the DMZ but not to the load balancer virtual IP configuration, which would make it reachable only by the WAC servers over HTTP (again, using host entries). But managing host entries, especially in the case of Host Named Site Collections, on each WAC server can be tedious and prone to error. Another option is to point the WAC servers to the internal DNS or have a separate DNS server in the DMZ with internal entries, so that the WAC requests are never routed to the external servers at all. Since WAC only needs to talk to SharePoint and the other WAC servers, this is an approach that scales much more easily in the context of Host Named Site Collections. In my opinion, the preferred approach is a combination of these two options, with a dedicated SharePoint web server in the DMZ that does not participate in the load balancing pool (the only function of which is to serve WAC requests) and a separate DNS server for name resolution of the WAC farm servers and HNSC hosts. In this configuration, the only WAC-related traffic that crosses from the external to the internal zone is WAC farm communication over port 809 which is, IMHO, an acceptable compromise. One final option, if you have the hardware to support it, is to create a rule on the load balancer not to translate HTTP traffic to HTTPS if it originates from the WAC servers.

NOTE: It is worth pointing out - in fact, SHOULD be pointed out - that if the entire farm were configured for HTTPS as it should be then this situation wouldn't arise at all, along with saving many other headaches and making the overall farm more secure. Bear that in mind before trying to avoid HTTPS due to minimal cost increases or supposed ease of configuration.

So the moral of this story is to carefully consider the configuration of your WAC farm when setting up SharePoint servers in a DMZ where SSL traffic is terminated at a load balancer. If external users cannot open documents in WAC but internal users can, then you are very likely running up against the default URL zone HTTP/HTTPS issue. There may be other creative ways to overcome this limitation besides those I've covered above, but however you do it, there needs to be a way for the WAC servers to communicate with the SharePoint servers using the URL and protocol associated with the web application default zone. Otherwise, external users will be forced to use desktop clients to open documents. Surely with all the goodness in Office Web Applications nobody wants that, do they?

​​




March 28
Announcing a New Version of SmartTrack - Operational Analytics for SharePoint

As many of my readers know, BinaryWave​ is primarily in the business of creating and supporting software products for the SharePoint market. Some we do just for fun, some we build for partners and some ​we sell under our own brand. In the latter category, we are especially fond of our industry-leading SmartTrack product - the only operational analytics solution built specifically for SharePoint. 

Over the last few years we have learned quite a bit about the struggles of maintaining a mission-critical SharePoint environment within the enterprise. One of the most common complaints we hear is the lack of visibility into operational metrics. Even when we expose the mountains of data hiding within the ULS logs and make it easy to visualize, filter and process, there is still a big disconnect between what is happening on the user's desktop and what goes on behind the scenes within a server farm. Although SharePoint is really good (too good, in some cases) at filling up log files with mountains of data, it doesn't tell operational personnel much at all when it comes to the user experience. When an unexpected error occurs and gets flagged in a SmartTrack alert, administrators still have no idea how many people were affected, how the error manifested itself on the front end, or what level of interruption it caused.

We figured there had to be a way to bridge this gap so we set about finding a solution. The obvious answer was to parse the IIS traffic logs and extract the data into a more usable format. But that information alone, which is readily available from a multitude of commercial and open source tools, isn't sufficient. It has to be viewed in context with operational data from the server itself and the entire SharePoint stack. It's no good knowing a user received a 404 error without also knowing what SharePoint was doing at that precise moment which might have contributed to the problem. Plus, administrators need to know how often such errors happen and what overall effect they have on farm health metrics. All those uptime tools are nice but downward red arrows on a dashboard don't actually help sysops track down the root cause of a problem.

So rather than offer another "me too" IIS log parsing utility, we decided instead to treat client requests as first-class data within the overall farm event stream. After all, shouldn't a 500 Internal Server Error be of equal cause for concern as a "System.NullReferenceException" in a publishing page? And, even better, wouldn't it be helpful to know if the two events occurred at the same time on the same server just after the average SQL query execution counter spikes upwards? Now that's the kind of information that can actually help you keep SharePoint running and prevent excessive downtime.  

After a few months spent solving this particular problem, and some others we discovered along the way, I'm pleased to announce the general availability of SmartTrack version 1.5. Now, for the first time, on-premise SharePoint customers can easily trace errors from the desktop to the database with just a few clicks. SharePoint ULS logs, Windows Server event logs, performance counters and now IIS logs in one single, integrated event stream. No big monitoring frameworks to install, management packs to configure or rules to write - SmartTrack starts working immediately and its advanced health algorithms automatically learn farm behavior patterns without user intervention. All in an easy-to-use cloud-based application that looks great on any sized device (for those who can't yet leverage the cloud, don't worry - there's a fully on-premise version just for you). SmartTrack is the only operational analytics solution designed from the ground up for SharePoint administrators, system operations personnel and managed service providers.

Here are some highlights from the new version:

  • IIS log event capture across all web servers in a farm. 
  • New “servers at a glance” display in the SmartTrack dashboard that summarizes the operational health of servers in each farm right on the main page. 
  • New “Web” category for sorting, filtering and searching IIS log events in the details panel.
  • Improved event categorization and alerting.
  • For on-premise customers, an all-new management service that automates database maintenance operations to ensure optimal performance.
  • Dedicated mail server that eliminates the need for Exchange relay setup and outdated Windows Server SMTP configuration.
  • Improved event exclusion processing.
  • Enhanced user creation for secure on-premise farms that are not connected to the public internet.
  • Numerous bug fixes and performance enhancements.


Interested? Visit the SmartTrack product page​ and request a free trial. When you're ready to buy, request a quote using offer code "SPCOWBOY" for a 20% discount. Get your SharePoint environment under control and save money at the same time - can't beat that!





January 29
Document Sharing with the REST API in a SharePoint Add-In

Office 365 makes sharing documents with other users extremely easy - just select the item, click ’Share’ and enter an e-mail address or pick a user from the directory - job done. Unfortunately, replicating this behavior in code is not quite so simple. Vesa Juvonen has demonstrated an approach to this using CSOM and there is an associated sample in the Office Dev PnP repo on Github. But what if you need to do this with only HTML and JavaScript, say in a SharePoint Hosted Add-In or mobile application?

Fortunately, there is a REST endpoint for this but it is not well documented and the internal workings are a bit obscure. To begin with, you need to send a properly formatted JSON object to the proper REST endpoint in the app web. Unlike most other calls that use the app web as a proxy for resources contained in the host web, calls to the sharing endpoint do not require a parameter for the target (host) web, nor do they follow the standard convention of specifying the “/web/“ path in the request URL after “/_api/“. Much like calls to the user profile service or search, this endpoint is directly accessible, which is a big reason why figuring out how to use it is difficult; standardization of RESTful endpoints and related documentation outside of the unified O365 API’s seems to be lagging behind.

In any event, constructing the URL is quite simple compared to other REST calls that require the host web token:

   var reqUrl = appWebUrl + "/_api/SP.Web.ShareObject";

The above example assumes that your code uses the commonly documented methods for extracting the app web URL from the request parameters (for more information on how to do this, see here). However, knowing the endpoint is only half the battle. The other challenge is figuring out how to properly format the JSON data object that will be POSTed to the specified endpoint. This object has the following parameters:

   url: The full URL of the document to be shared.
   peoplePickerInput: A JSON object containing required user key/value pairs (more on this below)
   roleValue: A numerical value that specifies the desired sharing option (view or edit)
   groupId: An integer which specifies the site group the user should be assigned to.
   propagateAcl: A flag to determine if permissions should be pushed to items with unique permissions.
   sendEmail: A boolean value (‘true’ or ‘false’) used by the system to determine if the target user should be notified of the sharing operation via email.
   includeAnonymousLinkInEmail: A boolean value (‘true’ or ‘false’) which tells the system whether or not to include a link to the document that is reachable by all anonymous users.
   emailSubject: The subject of the notification email.
   emailBody: The body of the notification email.

(The full set of parameters and explanations is described in the following MSDN reference article: https://msdn.microsoft.com/EN-US/library/office/microsoft.sharepoint.client.web.shareobject.aspx)

Of these, the trickiest one to construct is the peoplePickerInput value. As it is a full JSON object contained within another JSON object, the formatting can be a bit tricky. The entire set of key/value pairs must be contained within square brackets and curly braces, all double quotes must be escaped, and the entire thing must be assigned to a variable using something like JSON.stringify(). Within the object itself, you must specify such parameters as the full user identifier (claims ID or email address), the type of entity the object represents, the target user’s FQDN, and so forth. When fully defined, the object will look similar to this:

[{\"Key\":\"i:0#.f|membership|user@somedomain.com\",
\"Description\":\"user@somedomain.com\",
\"DisplayText\":\"Test User\",
\"EntityType\":\"User\",
\"ProviderDisplayName\":\"Tenant\",
\"ProviderName\":\"Tenant\",
\"IsResolved\":true,
\"EntityData\":{\"Title\":\"\",
\"MobilePhone\":\"+1 1234567890\",
\"Department\":\"\",
\"Email\":\"user@somedomain.com\"},
\"MultipleMatches\":[],
\"AutoFillKey\":\"i:0#.f|membership|user@somedomain.com\",
\"AutoFillDisplayText\":\"Test User\",
\"AutoFillSubDisplayText\":\"\",
\"AutoFillTitleText\":\"user@somedomain.com\\nTenant\\nuser@somedomain.com\",
\"DomainText\":\"somecompany.sharepoint.com\",
\"Resolved\":true,
\"LocalSearchTerm\":\"user@somedomain.com\"}]
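
If you already have all of the required values, one option is to build the entity as a plain JavaScript object and let JSON.stringify() produce the escaped form. A minimal sketch, with placeholder values:

   // Hedged sketch: building the peoplePickerInput entity as an object and
   // serializing it, instead of hand-escaping the string. All values shown
   // are placeholders.
   var entity = {
       Key: "i:0#.f|membership|user@somedomain.com",
       Description: "user@somedomain.com",
       DisplayText: "Test User",
       EntityType: "User",
       ProviderDisplayName: "Tenant",
       ProviderName: "Tenant",
       IsResolved: true,
       EntityData: { Title: "", MobilePhone: "", Department: "", Email: "user@somedomain.com" },
       MultipleMatches: []
   };
   // ShareObject expects an array of entities, serialized to a string.
   var peoplePickerInput = JSON.stringify([entity]);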

Building this object by hand can be quite challenging, especially if you do not know or cannot get access to all the required information. Fortunately, there is an endpoint you can call from your add-in that returns a properly formatted object you can use in your call to the sharing service. This endpoint resides at the following URL:

    https://<AppWeb>/_api/SP.UI.ApplicationPages.ClientPeoplePickerWebServiceInterface.clientPeoplePickerResolveUser

To make use of it, you will need to pass in an object that includes a set of query parameters, like so:

var restData = JSON.stringify({
    'queryParams': {
        '__metadata': {
            'type': 'SP.UI.ApplicationPages.ClientPeoplePickerQueryParameters'
        },
        'AllowEmailAddresses': true,
        'AllowMultipleEntities': false,
        'AllUrlZones': false,
        'MaximumEntitySuggestions': 50,
        'PrincipalSource': 15,
        'PrincipalType': 1,
        'QueryString': userId
    }
});

Take note of the "userId" variable - this is the claims identifier or email address of the person the document is being shared with (if the user is not in your Azure AD domain, be sure to first enable external sharing in your tenant). POSTing this data to the clientPeoplePickerResolveUser endpoint will return a user object, which can be parsed to obtain the ClientPeoplePickerResolveUser value. This value can then be assigned to the peoplePickerInput parameter.
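
For clarity, here is a minimal sketch of pulling that value out of the response body, assuming the verbose OData format used throughout this post:

   // Hedged sketch: extracting the resolved-user string from the response of
   // clientPeoplePickerResolveUser. "data" is the argument passed to the
   // RequestExecutor success handler.
   var body = JSON.parse(data.body);
   var resolvedUser = body.d.ClientPeoplePickerResolveUser; // already a JSON-encoded string
   // Wrap in brackets to form the array ShareObject expects:
   var peoplePickerInput = '[' + resolvedUser + ']';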

The entire package then gets POSTed to the app web via the Request Executor. Remember to set the appropriate request headers, which include the "accept", "content-type" and "X-RequestDigest" values. The user interface can then be updated if desired based on the success or failure handlers of the executor. To verify that the sharing operation worked, open the sharing dialog for the document and the user's ID or email address should be displayed. Below is a full example of a simplified JavaScript function for sharing a specific document (which can also be found in the SPRest.Demo repo on GitHub). It requires two inputs, the user email address and the URL of the document, and updates a DIV element on the page with a success or failure message via jQuery.

function shareDocument() {
    try {
        // Read the target user's email address and the document URL from the page.
        var userId = $("#inputEmail").val();
        var docUrl = $("#inputFileUrl").val();
        var executor = new SP.RequestExecutor(appWebUrl);

        // Step 1: resolve the target user via the client people picker service.
        var restSource = appWebUrl + "/_api/SP.UI.ApplicationPages.ClientPeoplePickerWebServiceInterface.clientPeoplePickerResolveUser";
        var restData = JSON.stringify({
            'queryParams': {
                '__metadata': {
                    'type': 'SP.UI.ApplicationPages.ClientPeoplePickerQueryParameters'
                },
                'AllowEmailAddresses': true,
                'AllowMultipleEntities': false,
                'AllUrlZones': false,
                'MaximumEntitySuggestions': 50,
                'PrincipalSource': 15,
                'PrincipalType': 1,
                'QueryString': userId
            }
        });

        executor.executeAsync({
            url: restSource,
            method: "POST",
            headers: {
                "accept": "application/json;odata=verbose",
                "content-type": "application/json;odata=verbose",
                "X-RequestDigest": $("#__REQUESTDIGEST").val()
            },
            body: restData,
            success: function (data) {
                // Extract the resolved user entity from the response body.
                var body = JSON.parse(data.body);
                var results = body.d.ClientPeoplePickerResolveUser;
                if (results.length > 0) {
                    // Step 2: share the document with the resolved user.
                    var reqUrl = appWebUrl + "/_api/SP.Web.ShareObject";
                    var shareExecutor = new SP.RequestExecutor(appWebUrl);
                    var shareData = JSON.stringify({
                        "url": docUrl,
                        "peoplePickerInput": '[' + results + ']',
                        "roleValue": "1073741827",
                        "groupId": 0,
                        "propagateAcl": false,
                        "sendEmail": true,
                        "includeAnonymousLinkInEmail": true,
                        "emailSubject": "Sharing Test",
                        "emailBody": "This is a Sharing Test."
                    });

                    shareExecutor.executeAsync({
                        url: reqUrl,
                        method: "POST",
                        headers: {
                            "accept": "application/json;odata=verbose",
                            "content-type": "application/json;odata=verbose",
                            "X-RequestDigest": $("#__REQUESTDIGEST").val()
                        },
                        body: shareData,
                        success: function (shareResult) {
                            $("#sharingOutput").html("Sharing succeeded for '" + docUrl + "'.").css("color", "green");
                        },
                        error: function (result, code, message) {
                            $("#sharingOutput").html(message).css("color", "red");
                        }
                    });
                }
            },
            error: function (result, code, message) {
                $("#sharingOutput").html(message).css("color", "red");
            }
        });
    } catch (err) {
        alert(err.message);
    }
}
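
To round things out, here is a minimal sketch of wiring the function up on an add-in page. The getQueryStringParameter helper, the "SPAppWebUrl" token, and the element IDs are assumptions for illustration, not part of the original sample:

// Hedged sketch: page wiring for shareDocument(). Assumes jQuery and
// sp.requestexecutor.js are already loaded on the add-in page.
var appWebUrl;

function getQueryStringParameter(name) {
    // Naive query-string parser; assumes the add-in page URL contains
    // the standard SharePoint tokens such as SPAppWebUrl.
    var params = window.location.search.substring(1).split("&");
    for (var i = 0; i < params.length; i++) {
        var pair = params[i].split("=");
        if (pair[0] === name) {
            return decodeURIComponent(pair[1]);
        }
    }
    return null;
}

$(document).ready(function () {
    appWebUrl = getQueryStringParameter("SPAppWebUrl");
    $("#shareButton").click(shareDocument);
});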


