Saturday, September 20, 2014

Site collection stuck in read-only

This happened to me recently during an stsadm backup. I was copying and pasting some code in a text file and accidentally pressed Ctrl+C in my stsadm window, killing the backup command. Oops! Since an stsadm backup sets the site collection to read-only (unless you include the -nositelock flag), my site collection was stuck in read-only. This is a big problem in SharePoint 2013 because you cannot change it in Central Admin; the radio buttons to unlock a site collection are greyed out. Oh no!
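For reference, a site collection backup that skips the read-only lock looks something like this (the URL and file path are placeholders):

stsadm -o backup -url http://weburl/sites/sitecollectionurl -filename C:\backups\sitecollection.bak -nositelock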

Beginning with the April 2013 CU you can run a simple PowerShell command (from the SharePoint 2013 Management Shell) to unlock a site collection:
$Admin = New-Object Microsoft.SharePoint.Administration.SPSiteAdministration('http://weburl/sites/sitecollectionurl');
$Admin.ClearMaintenanceMode();

In my case I was on an RTM farm, so the ClearMaintenanceMode() method isn't available. Luckily there is still a way to fix it with a different PowerShell script.
$site = Get-SPSite https://url;
$site.GetType().GetProperty("MaintenanceMode").GetSetMethod($true).Invoke($site, @($false));
Running this will unlock your site collection and you're back in business.

Thursday, June 5, 2014

User Profile Synchronization Service Stuck on Starting

This is a collection of issues I've encountered recently with the User Profile Synchronization Service not starting up. There are other, much more common issues that I'm not including here since they are covered on a bazillion blogs by now, plus there is the very thorough guide by Spencer Harbar: http://www.harbar.net/articles/sp2010ups.aspx. These resolutions assume that you've followed all of Spencer's recommendations and have no other issues in your environment. When I've had to troubleshoot environments set up by someone else, the most common issue is access; make sure your farm admin account is a local administrator and can log on as a service, log on as a batch job, etc.

Duplicate Certificates
If the sync service has failed at least once, it is possible that the certificates were created and not removed, and now setup is throwing an error because the certificates are already there. Below are a few of the errors that you might see. Also, if you search for ILM Configuration in the trace logs and the last row you see is "ILM Configuration: Configuring certificate.", your problem could be the certificates.


Event IDs 3 & 6309

The server encountered an unexpected error while performing an operation for a management agent. Microsoft.ResourceManagement.ResourceManagementException: Exception from HRESULT: 0x8023060F ---> System.Runtime.InteropServices.COMException (0x8023060F): Exception from HRESULT: 0x8023060F at MIISRCW.IMMSManagementAgent.ModifyMAData(String pszMADataXML, String& ppszUpdatedXML) at Microsoft.ResourceManagement.SyncConfig.SetMaData(Guid maGuid, String maData) at Microsoft.ResourceManagement.ActionProcessor.SyncConfigActionProcessor.Update(Guid objectId, CultureInfo locale, IList`1 updateParameters, Guid cause)

Solution: Remove the certificates

Steps
1) Go to the start menu and run mmc
2) Select File then Add/Remove Snap-ins
3) Select Certificates from the list and select Service Account. When prompted for the service account, select the Forefront Identity Manager Service
4) Click OK
5) Expand the Certificates section and you should see 8 folders. Look through each folder and find a certificate named Forefront Identity Manager. If you find one, delete it.
6) Repeat steps 2-5, but select the Forefront Identity Manager Synchronization Service
7) Repeat steps 2-5, but select the computer account
8) After all certificates have been deleted, run an IISreset and restart the SPTimerV4 service (see the commands after this list)
9) If necessary, stop the provisioning of the Synchronization Service (steps are further down in this post)
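For reference, the reset in step 8 can be run from an elevated PowerShell prompt (SPTimerV4 is the name of the SharePoint Timer Service):

iisreset
Restart-Service SPTimerV4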

MSIInstaller 1001, 1004, 1005 Warnings

These errors are because the Network Service does not have access to the appropriate folders. Simply give the Network Service account access to the C:\Program Files\Microsoft Office Servers\15.0 directory. It will need Read & Execute.
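If you'd rather script the permission change, here's a minimal PowerShell sketch; the path is the default install location, so adjust it if Office Servers lives somewhere else:

$path = "C:\Program Files\Microsoft Office Servers\15.0"
$acl = Get-Acl $path
# Grant Network Service read & execute, inherited by subfolders and files
$rule = New-Object System.Security.AccessControl.FileSystemAccessRule("NT AUTHORITY\NETWORK SERVICE", "ReadAndExecute", "ContainerInherit,ObjectInherit", "None", "Allow")
$acl.AddAccessRule($rule)
Set-Acl $path $acl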

Failed to connect to server. Error: 0x80070005 Detection of product '{90150000-104C-0000-1000-0000000FF1CE}', feature 'PeopleILM' failed during request for component '{9AE4D8E0-D3F6-47A8-8FAE-38496FE32FF5}'

Detection of product '{90150000-104C-0000-1000-0000000FF1CE}', feature 'PeopleILM', component '{1C12B6E6-898C-4D58-9774-AAAFBDFE273C}' failed. The resource 'C:\Program Files\Microsoft Office Servers\15.0\Service\Microsoft.ResourceManagement.Service.exe' does not exist.

Event ID 234 - Warning creating certificate

This can sometimes be a false alarm. If you see this error and the service is still starting, don't take action yet. If the service fails to start, this could be a combination of the first two issues above.

Service is stuck on Starting for more than 10-15 minutes

The service can take 10 or more minutes to start up on a good day, so be patient. However, if you've encountered some of these errors in the event viewer and you're not seeing anything new for ILM Configuration in the ULS logs, you may need to stop the provisioning of the synchronization service. You can do this by running the following PowerShell commands:

Add-PSSnapin Microsoft.SharePoint.PowerShell
$id = Get-SPServiceInstance | Where-Object {$_.TypeName -eq "User Profile Synchronization Service"}
Stop-SPServiceInstance -Identity $id

After stopping this, you can resolve the issues you encountered and try again.

Thursday, May 9, 2013

C drive filling up with nsebin and nvebin files

If you're noticing a ton of tmp files in c:\windows\temp\ that are around 300mb and start with nsebin or nvebin, this is likely an issue with the Norman update engine used by Forefront. Microsoft is aware of this issue and is looking into a fix, but until then you may need a temporary resolution. The simplest fix is to disable the Norman engine update, as long as you are using other engines. You can do this through the UI, or by using the following PowerShell script via the Forefront Management Shell. The first step in this script may take a few minutes to complete, so be patient.

Set-FsspEngineManagement -OverrideAutomaticManagement $true
Set-FsspSignatureUpdate -Engine norman -EnableSchedule $false
Get-ChildItem $env:WINDIR\temp\*.* -Include nsebin* | ForEach-Object { Remove-Item $_.FullName }
Get-ChildItem $env:WINDIR\temp\*.* -Include nvebin* | ForEach-Object { Remove-Item $_.FullName }

***Update*** Microsoft has fixed the issue. Details below, from: http://social.technet.microsoft.com/Forums/en-US/FSENext/thread/ca55530e-3850-49a0-9cd6-2ffd562301ce#cc713345-acca-458b-9bfe-4c847f21ceaf

What do you need to do?
Just wait for the next scheduled Norman engine update or manually initiate one. In fact, it may have already taken place depending on your current engine update schedule. You may also need to remove a few nsebin.def files manually that remain after the update takes place. This is a one-time action after the update and depends on whether you disabled the Norman engine updates over the past few days while you waited for the fix. If you disabled the Norman engine updates, you should not need to clean up anything manually, because the engine hasn't been updating and generating newer files, and the fix will remove all of the older files. If the engine updates were never disabled, there will be a few nsebin.def files that were created which you can safely remove; the fix is unable to remove these few more recent files. Services do not need to be stopped to remove these older files because they are no longer in use. Nothing additional should need to be done after that moving forward.

What will the fix do?
It will create a new directory under the Windows\Temp directory called nsetmp. This will be the new directory for ALL Norman engine related files moving forward. As stated above, the fix will remove almost all of the older problematic nsebin.def files (~325mb each) from the Windows\Temp directory. The nsebin.def files from the past day or so will not be removed by the fix BUT are safe for you to delete manually after the update takes place. If you have not cleaned up any of these files yet, this will be a significant amount of files and disk space that gets cleared. No new nsebin.def files will be created in the Windows\Temp directory after this fix is in place. The only nsebin.def file you should see being created from here on is the one current nsebin.def file in the Windows\Temp\nsetmp directory after each update; that is the file that will be in use by the engine. The older nsebin.def files in the nsetmp directory will get removed properly on each subsequent successful Norman update. If you're running Windows Server 2003 you will not see the nsetmp directory, and the nsebin.def files will continue to be written to the Windows\Temp directory; you will not need to take any steps, as the previous nsebin.def files will be properly removed. The fix will not remove any Norman version 6.x files if by chance they exist. If you see any of these files (the nvcbin.def.xxx.tmp files) they too are safe to delete at any time.

How do I know I have the fix in place?
The new Norman engine version will be 7.1.8. You will see that as the Engine version value for the Norman engine in the UI in FPE. In FSE you'll see 7.1 for the engine version in the UI. You might need to check the details of nse32.dll in the Norman engine bin directory to confirm that you have the fix; the details of that .dll will show a version of 7.1.8.0. However, if the Norman engine has updated successfully any time after this post, you can be fairly certain you have the new update.

Thursday, January 31, 2013

Published content types do not show up

In this scenario, you have a content type hub with published content types, but after you run the content type subscriber timer job you are not seeing the content types on the consuming site collection. I am assuming that you have the service application published correctly, permissions are configured, your Managed Metadata Service proxy on the consuming farm is configured to consume from the content type hub (see my post here: http://thatsharepointguy.blogspot.com/2011/11/skipped-content-type-syndication-on.html), and your content type hub is not based on the blank site template.

Update the diagnostic logging by setting Taxonomy (under the SharePoint Server heading) to Verbose. Run the content type subscriber timer job again and look for this error message: "Skip site http://siteurl because it does not have taxonomy feature enabled". This is referencing TaxonomyFieldAdded, a little-known hidden feature that is not enabled by default. Enable this feature (stsadm -o activatefeature -name taxonomyfieldadded -url http://siteurl), run the content type subscriber timer job again, and you should be all set. You may need to run this command for every site collection in your consuming web application.
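If you have a lot of consuming site collections, here's a quick PowerShell sketch that enables the hidden feature on every site collection in a web application (the web application URL is a placeholder):

Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue
# Enable the hidden TaxonomyFieldAdded feature on each site collection; already-enabled sites are skipped quietly
Get-SPWebApplication http://consumingwebapp | Get-SPSite -Limit All | ForEach-Object {
    Enable-SPFeature -Identity TaxonomyFieldAdded -Url $_.Url -ErrorAction SilentlyContinue
}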

Friday, October 26, 2012

Expiration Date Not Updating

This is an issue that affected a large number of document libraries in my environment. I have retention policies set on documents that are based on the modified date, yet for many documents the expiration date was not getting updated, or in some cases not getting set at all.  As you can see in this photo, the expiration date on three of the documents was not updated, and six didn't have an expiration date at all.  Viewing compliance details showed the correct expiration date.
 
Without going into too much detail, the compliance details page calculates the expiration date on the fly based on the ItemRetentionFormula for the item.  The Expiration Date, however, is updated by an event receiver.  This is why you have to run a SystemUpdate() on all documents after you enable retention policies.
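For reference, a SystemUpdate pass over a library might look roughly like this (the web URL and list name are placeholders; passing $false to SystemUpdate avoids creating new versions):

Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue
$web = Get-SPWeb http://weburl
$list = $web.Lists["Documents"]
# Touch every item so the expiration event receiver recalculates the Expiration Date
foreach ($item in $list.Items) { $item.SystemUpdate($false) }
$web.Dispose()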
 
I found that the event receivers were missing from these document libraries. How did this happen when a few of the documents did have an Expiration Date?  I don't know, but I do know how to add the receivers back.
 
Luckily, adding the event receivers is very easy to do with a bit of code. For each of the event types you just have to run this code:
list.EventReceivers.Add(SPEventReceiverType.ItemAdded, "Microsoft.Office.Policy, Version=14.0.0.0, Culture=neutral, PublicKeyToken=71e9bce111e9429c", "Microsoft.Office.RecordsManagement.Internal.UpdateExpireDate");
list.Update();
 
You can wrap this up in a small command line tool or script that fixes an entire library.
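Here's a minimal PowerShell sketch of that idea; the web URL and list name are placeholders, and I'm assuming ItemAdded and ItemUpdated are the event types your retention policy needs:

Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue
$web = Get-SPWeb http://weburl
$list = $web.Lists["Documents"]
$assembly = "Microsoft.Office.Policy, Version=14.0.0.0, Culture=neutral, PublicKeyToken=71e9bce111e9429c"
$class = "Microsoft.Office.RecordsManagement.Internal.UpdateExpireDate"
# Re-add the expiration event receivers, then persist the list changes
foreach ($type in "ItemAdded", "ItemUpdated") {
    $list.EventReceivers.Add([Microsoft.SharePoint.SPEventReceiverType]$type, $assembly, $class)
}
$list.Update()
$web.Dispose()

After re-adding the receivers, you would still run the SystemUpdate pass shown earlier so existing documents pick up an Expiration Date.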

Thursday, October 4, 2012

Access Denied when deploying a custom retention formula

This happened when trying to activate a feature containing a custom retention formula.

"The SPPersistedObject, PolicyConfigService Name=PolicyConfigService, could not be updated because the current user is not a Farm Administrator."
To resolve this, I changed the RemoteAdministratorAccessDenied property on the Content Service using SharePoint Manager 2010 from true to false.
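If you don't have SharePoint Manager handy, the same property can be flipped with a few lines of PowerShell (a quick sketch; run it from the SharePoint Management Shell as a farm admin, and consider setting it back to true afterwards):

$contentService = [Microsoft.SharePoint.Administration.SPWebService]::ContentService
$contentService.RemoteAdministratorAccessDenied = $false
$contentService.Update()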

Monday, August 13, 2012

Permissions Error on Application Discovery and Load Balancer Service Application

Recently, when I was reviewing the permissions on my service accounts, I received the below error message after clicking the Permissions ribbon button for my Application Discovery and Load Balancer Service Application.

Digging through the trace log I was able to capture the stack trace which was slightly more helpful:
System.ArgumentException: Exception of type 'System.ArgumentException' was thrown. Parameter name: claim at Microsoft.SharePoint.Administration.Claims.SPSystemClaimProvider.GetFarmClaimDisplayValue(SPClaim claim) at Microsoft.SharePoint.Administration.SPAce`1.get_DisplayName() at Microsoft.SharePoint.Administration.AccessControl.SPAclAccessRule`1..ctor(SPAce`1 ace) at Microsoft.SharePoint.Administration.AccessControl.SPAclSecurity`1.d__c.MoveNext() at Microsoft.SharePoint.Administration.AccessControl.SPObjectSecurity.d__0.MoveNext() at Microsoft.SharePoint.WebControls.AclEditor.OnPreRender(EventArgs e) at System.Web.UI.Control.PreRenderRecursiveInternal() at Sys...
Based on this I was able to assume that one of the claims in the ACL for the service app was invalid; the challenge was to find out which one.


How to troubleshoot the issue


First, we have to get a list of the permissions for the service application. To do this, we'll need to query the config database. Disclaimer: querying the database is something you should only do for troubleshooting purposes. ALWAYS use the (nolock) query hint and NEVER update a table.


Copy the results into a text editor (I prefer Notepad++) and you'll be able to see each entry. In the ACL for a service application you may see two different types of entries: remote farms and service accounts.
A service account entry will look like this:
<ace identityName="i:0#.w|domain\serviceaccount" displayName="serviceaccount" sid="" binaryIdType="1" binaryId="aTowKS53fHMtMS01LTIxLTEzMDg3MDU0MzctMTc3OTk1JUDOwOC0xTMgyNzM5MzA1LTMwMTc0NzQ=" allowRights="18446744073709551615" denyRights="0" />

and a remote farm entry will look like this:
<ace identityName="c:0%.c|system|57c9c598-d677-4abb-9682-54fa17a2d8ae" displayName="Remote Farm: 57c9c598-d677-4abb-9682-54fa17a2d8ae" sid="" binaryIdType="1" binaryId="YzowJS5jfHN5c3RlbXw3MWH5YzU5OC1kNjc3LTRhYmItOTY4Mi01NGZhMTdhMmQ4YWU=" allowRights="18446744073709551615" denyRights="0" />

The key difference is in the identityName field.
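If you want to double-check what an entry really represents, the binaryId appears to be the Base64-encoded claim string, so you can decode it (paste the binaryId value from the entry you're inspecting in place of the placeholder):

[System.Text.Encoding]::UTF8.GetString([Convert]::FromBase64String("<binaryId from the ace entry>"))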

After reviewing every single entry, I found one entry where the identityName was configured for a remote farm, yet the claim was for a service account and not a farm. My guess is that when someone on my team was configuring a new farm to consume this service, they accidentally used a service account instead of the SPFarm Id in our script. Oops!

This is what the entry looked like:
<ace identityName="c:0%.c|system|domain\serviceaccount" displayName="0%.c|system|domain\serviceaccount" sid="" binaryIdType="1" binaryId="YzowJS5jfHN5c3RlbXxsbVxzYXNwbWxwYXBwcGxfdHN0" allowRights="18446744073709551615" denyRights="0" />

As you can see, the identityName was structured like a remote farm entry, yet it contained a service account instead of a farm id.


How to fix the issue


Luckily the fix is a very simple PowerShell script. Since a remote farm claim was added with a service account name, we just need to revoke that claim. Below is the script I used; simply put your service account in for the -ClaimValue parameter.

$security = Get-SPTopologyServiceApplication | Get-SPServiceApplicationSecurity
$claimProvider = (Get-SPClaimProvider System).ClaimProvider
$principal = New-SPClaimsPrincipal -ClaimType "http://schemas.microsoft.com/sharepoint/2009/08/claims/farmid" -ClaimProvider $claimProvider -ClaimValue domain\serviceaccount
Revoke-SPObjectSecurity -Identity $security -Principal $principal
# Persist the updated ACL back to the service application
Get-SPTopologyServiceApplication | Set-SPServiceApplicationSecurity -ObjectSecurity $security

After I did this, I was able to see the permissions from central admin again.