Adding CRM Users from Trusted Domain

When CRM is installed, the following Active Directory groups are created, with group scope set to Global, in the same domain in which CRM was installed:

  • PrivReportingGroup
  • PrivUserGroup
  • ReportingGroup
  • SqlAccessGroup

If your users come from two different domains where one domain trusts the other, you need to change the scope of the AD groups above from Global to Domain Local to allow CRM users to be created from the trusted domain. If you don't do that, you will face an unexpected server error when saving the CRM user account, even though the user information is retrieved normally from the trusted domain. Note that Active Directory does not allow converting a Global group directly to Domain Local; convert it to Universal first, then to Domain Local.
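If you prefer to script the scope change rather than use Active Directory Users and Computers, a minimal C# sketch using System.DirectoryServices could look like the following. The group DN is hypothetical (the actual CRM groups carry an organization GUID suffix), and since AD rejects a direct Global to Domain Local conversion, the sketch goes through Universal first:

```csharp
using System;
using System.DirectoryServices; // requires a reference to System.DirectoryServices.dll

public static class GroupScopeFixer
{
    // groupType flag values from ADS_GROUP_TYPE_ENUM (security-enabled variants).
    const int UniversalSecurity   = unchecked((int)0x80000008);
    const int DomainLocalSecurity = unchecked((int)0x80000004);

    static void SetScope(string groupDn, int groupType)
    {
        using (var entry = new DirectoryEntry("LDAP://" + groupDn))
        {
            entry.Properties["groupType"].Value = groupType;
            entry.CommitChanges();
        }
    }

    public static void Main()
    {
        // Hypothetical DN; replace with the actual DN of each CRM group.
        string dn = "CN=PrivUserGroup,OU=CRM,DC=contoso,DC=local";

        // AD does not allow Global -> Domain Local in one step,
        // so convert to Universal first, then to Domain Local.
        SetScope(dn, UniversalSecurity);
        SetScope(dn, DomainLocalSecurity);
    }
}
```

Run this with an account that can modify the groups, once for each of the four CRM groups listed above.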


WCF REST Services Troubleshooting Techniques

Below are several techniques for troubleshooting failed or bad requests to WCF services:

  • IIS Logs: IIS generates plain-text log files that can be viewed in Notepad. To enable or disable logging:

    • Open IIS Manager and navigate to the level you want to manage. For information about opening IIS Manager, see Open IIS Manager (IIS 7). For information about navigating to locations in the UI, see Navigation in IIS Manager (IIS 7).
    • In Features View, double-click Logging.
    • On the Logging page, in the Actions pane, click Enable to enable logging or click Disable to disable logging.
  • Failed Request Traces: This generates xml files

    • Open IIS Manager and navigate to the level you want to manage. For information about opening IIS Manager, see Open IIS Manager (IIS 7). For information about navigating to locations in the UI, see Navigation in IIS Manager (IIS 7).
    • In the Connections pane, click Sites.
    • In Features View, select the site for which you want to enable trace logging.
    • In the Actions pane, under Configure, click Failed Request Tracing.
    • In the Edit Web Site Failed Request Tracing Settings dialog box, select Enable to enable logging for this site.
    • In the Directory text box, type the path where you want to store the log files, or click the browse button to find a location on the computer. The default is %SystemDrive%\inetpub\logs\FailedReqLogFiles.
  • Fiddler: This tool is vital for viewing the requests and responses going to and from the web service; you can also compose and send requests from within the tool.
  • WCF Tracing: Steps for collecting trace data:
    1. Create a folder c:\temp
    2. Give Everyone full control on it
    3. Add the diagnostics elements below to the configuration file

      This part goes inside the <system.serviceModel> section:

      <diagnostics>
        <messageLogging logEntireMessage="true" logMalformedMessages="true"
                        logMessagesAtServiceLevel="true" logMessagesAtTransportLevel="true"
                        maxSizeOfMessageToLog="26214445" />
      </diagnostics>

      While this part goes at the end of the web.config file, right before the configuration close tag:

      <system.diagnostics>
        <trace autoflush="true" />
        <sources>
          <source name="System.ServiceModel.MessageLogging" switchValue="Verbose,ActivityTracing">
            <listeners>
              <add type="System.Diagnostics.DefaultTraceListener" name="Default">
                <filter type="" />
              </add>
              <add name="ServiceModelMessageLoggingListener">
                <filter type="" />
              </add>
            </listeners>
          </source>
          <source name="System.ServiceModel" switchValue="Verbose,ActivityTracing">
            <listeners>
              <add type="System.Diagnostics.DefaultTraceListener" name="Default">
                <filter type="" />
              </add>
              <add name="ServiceModelTraceListener">
                <filter type="" />
              </add>
            </listeners>
          </source>
        </sources>
        <sharedListeners>
          <!-- Version=4.0.0.0 assumes .NET 4.x; match the System assembly version of your target framework -->
          <add initializeData="c:\temp\Service.messages.svclog"
               type="System.Diagnostics.XmlWriterTraceListener, System, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089"
               name="ServiceModelMessageLoggingListener"
               traceOutputOptions="LogicalOperationStack, DateTime, Timestamp, ProcessId, ThreadId, Callstack">
            <filter type="" />
          </add>
          <add initializeData="c:\temp\Service.traces.svclog"
               type="System.Diagnostics.XmlWriterTraceListener, System, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089"
               name="ServiceModelTraceListener"
               traceOutputOptions="LogicalOperationStack, DateTime, Timestamp, ProcessId, ThreadId, Callstack">
            <filter type="" />
          </add>
        </sharedListeners>
      </system.diagnostics>

      The resulting .svclog files can be opened with the Service Trace Viewer tool (SvcTraceViewer.exe) that ships with the Windows SDK.
The most important thing to know: if you are getting a 400 Bad Request for a REST service without clear error details, and you are sure the request never reached your WCF method, there is a high probability that the JSON object being passed has a problem in one of its properties that prevented it from being parsed into the .NET classes. I advise you to especially double-check the DateTime fields and ensure they carry a proper value when passed as part of the JSON object.
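As an illustration of why DateTime fields are a common culprit: WCF's default DataContractJsonSerializer does not use ISO 8601; it expects DateTime values in the "/Date(milliseconds since the Unix epoch)/" form, as this small self-contained sketch shows:

```csharp
using System;
using System.IO;
using System.Runtime.Serialization.Json;
using System.Text;

class WcfJsonDateDemo
{
    static void Main()
    {
        // Serialize a DateTime the way a default WCF REST endpoint expects it.
        var serializer = new DataContractJsonSerializer(typeof(DateTime));
        using (var stream = new MemoryStream())
        {
            serializer.WriteObject(stream,
                new DateTime(2016, 8, 1, 0, 0, 0, DateTimeKind.Utc));
            Console.WriteLine(Encoding.UTF8.GetString(stream.ToArray()));
            // Prints: "\/Date(1470009600000)\/"
        }
    }
}
```

A client that sends a DateTime in any other format typically makes the deserializer fail before your method is ever invoked, which surfaces as exactly the opaque 400 described above.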

Server-side Synchronization in Dynamics 365

This is a brief summary of a white paper released in August 2016 explaining how server-side synchronization works:

Server-side synchronization, also known as Server-Side sync or Exchange sync, is a server-side process for synchronizing appointments, contacts, tasks (ACTs), and email messages between Exchange Server and Microsoft Dynamics CRM Server. Server-side sync runs as part of the Asynchronous Processing Service.

Since the server-side sync component is hosted on the server running the Microsoft Dynamics CRM Asynchronous Service server role, it brings some advantages. First, the current Microsoft Dynamics CRM Asynchronous Service already loads the full set of organization metadata-caches. If server-side sync ran in another process, these large caches would have to be loaded a second time resulting in sub-optimal memory use. Second, it gives server-side sync access to the full set of organization metadata-caches that’s loaded in the Microsoft Dynamics CRM Asynchronous Service process.

Unlike Outlook synchronization, which requires CRM for Outlook to support synchronization, server-side sync can support synchronizing activities between Dynamics CRM and Exchange without running CRM for Outlook.

Typically, the server-side sync loading mechanism makes sure each mailbox that needs processing is serviced within 15 minutes.

When queuing mailboxes to process, server-side sync provides some configurable values located in the DeploymentProperties table of the configuration database that you can adjust by using Windows PowerShell to customize the queueing capacity in your environment. Notice that this configuration is available for on-premises deployments of Dynamics CRM only.

The default in-memory queue settings are configured for mid-size organizations, and typically work best for organizations that have between 3,000 and 5,000 users.

Queue performance depends on the number of users and item workloads across the servers running the Asynchronous Processing role. You can use Windows PowerShell to either increase or decrease the settings, depending on the number of users and email or activity synchronization experienced.

When the Asynchronous Processing Service server role is deployed on more than one Windows Server there is no affinity between mailboxes and the servers on which they will be processed.

Dynamics CRM Server Setup helps administrators by allowing the Email Integration Service capabilities to be selected as a separate server role during installation. This can help improve performance and scalability by isolating email-integration-specific operations for on-premises deployments. The server role can be isolated by selecting only the Email Integration Service.

One of the first steps to successfully run server-side sync involves configuring an email server profile. In Dynamics CRM on-premises, the email server profile lets administrators specify configuration settings such as server types, server locations, and authentication details.

You can find the complete steps for configuring server-side synchronization to connect Dynamics 365 on-premises with Exchange Server on-premises in this MSDN article.



CRM 2016 ADFS Configuration for Internal Access

We had a unique Dynamics CRM implementation with a requirement to let external Active Directory users, connected to the CRM environment through a VPN tunnel, authenticate with CRM without exposing CRM over the internet. This led us to utilize the claims-based authentication for internal access CRM configuration together with configuring IFD. Below are some hints and hard lessons learned in achieving this authentication requirement:

  • Host the CRM web services on a port other than 443, as per Microsoft guidelines for claims-based authentication for internal access; port 443 can still be used for the web application access as normal.
  • We managed to host the CRM web services on port 444 while the CRM web client was hosted on a different port (443), and CRM worked normally.
  • Ensure that any client trying to access the CRM web interface or use the CRM web services has access to the CRM front-end servers and to the ADFS servers as well, because authentication happens through a redirection to the ADFS landing page.
  • For any custom web services or SSIS synchronization packages integrating with Dynamics CRM, make sure they have the appropriate access to the ADFS servers.
  • In the Plugin Registration tool, you may need to use a special format for the username, depending on the UPN claim format configured in ADFS when CRM was added as a relying party.
  • The default session timeout period is very short and is controlled from ADFS. It is advised to increase the token lifetime for a better user experience; when the session times out, the client must close all open browser sessions and open them again to re-authenticate.

I will try to update this post with any new hints that I missed.

Dynamics CRM Form Ajax Update

Microsoft has released new client APIs since CRM 2013; among them are the Xrm.Page.data.refresh and Xrm.Page.data.save methods.

The refresh method asynchronously refreshes, and optionally saves, all the data of the form without reloading the page (as per MSDN) and has the following signature: Xrm.Page.data.refresh(save).then(successCallback, errorCallback);

This function offers a better user experience than the old form refresh method, which reloads the whole form and fires the on-load events that might not be needed.

Also, in case we want to perform an asynchronous save of data, we can utilize the save function, which saves the record asynchronously with the option to set callback functions to be executed after the save operation completes (as per MSDN). It has the following signature: Xrm.Page.data.save(saveOptions).then(successCallback, errorCallback);

Dynamics CRM Data Migration using KingswaySoft Tips

I will divide the tips into CRM/SQL and KingswaySoft parts:

CRM & SQL Optimizations:

  • Disable CRM plugins, auditing, and/or workflows during your initial load if you can, as they all have a certain impact on your data integration or migration performance.
  • Ensure that there are no real-time workflows or synchronous plugins when migrating data into CRM, as they will badly affect the migration speed; try to convert them to asynchronous workflows and plugins.
  • CRM plugins or workflows usually have a certain degree of performance impact on your CRM data integration. Poorly designed CRM plugins or workflows could severely affect your integration performance. Try to compare the performance before and after enabling them, in some cases you might have to revisit their design to make sure that best practices are applied in your custom code.
  • Ensure that the CRM maintenance jobs are not running at the same time as your migration packages, especially the re-index job. You can use the CRM Job Editor tool to modify the schedule of these system jobs; in general, it is advised to run them outside core business hours.
  • Make sure that “Reindex All” CRM maintenance job is configured and running properly, or otherwise create DB maintenance jobs to REBUILD or REORGANIZE indexes for your CRM database on a regular basis.
  • Monitor your database server to see if there are any excessive db locks.
  • Schedule the jobs to run from within SQL Server Agent as described here. Set the SSIS package ProtectionLevel property to EncryptSensitiveWithPassword if the connection passwords are stored locally rather than passed as parameters to the package, as described here. It is advised to create a package configurations file as described here. Make sure the packages are executed in 32-bit runtime mode to allow BDD to run; you will also need to do that in Visual Studio for debugging purposes when setting the TargetServerVersion to SQL Server 2014, as described here.
  • Two components that impact the speed of your data migration are network latency and concurrency. Latency is the time that it takes for an information packet to travel through a network from its source to destination. Concurrency refers to processes that are executing simultaneously, working together to achieve the end result.

Kingswaysoft Optimizations:

  • To use CRM Bulk Data Load API, you just need to enter a batch size greater than 1 in the CRM destination component.
  • Avoid using the Duplicate Detection option if you can.
  • Make sure you always pass a valid lookup reference for all lookup fields, and avoid using the “Remove Unresolvable References” option. That option is designed for a special scenario, and it involves checking each lookup field value, which can be very expensive at times.
  • Upsert action (except when the Alternate Key matching option is used) involves an extra service call which queries the target system by checking the existence of the incoming record, which has a cost associated in terms of its impact on your overall integration performance. If you have a way to do a straight Update or Create, it would typically offer you a better performance.
  • For CRM On-premise, you would typically use 5 BDD branches in each data flow with each CRM destination component using a batch size of 200 or 250. You can have multiple data flow tasks in the same SSIS package that write to CRM server simultaneously.
  • If you have a multi-node cluster for your on-premise deployment, you can use CRM connection manager’s CrmServerUrl property in its ConnectionString to specifically target a particular node within the cluster. Doing so, you can have multiple connection managers in the same package or project that target different nodes of the cluster, and you write to multiple destination components of the same configuration with different connection managers, so that you are technically writing to multiple cluster nodes in parallel, which provides some additional performance improvement on top of BDD.
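The alternate-key upsert mentioned above corresponds to the SDK's UpsertRequest (available from CRM 2015 Update 1 onward). A minimal sketch, assuming an already-authenticated IOrganizationService and a hypothetical alternate key on accountnumber:

```csharp
using System;
using Microsoft.Xrm.Sdk;           // CRM SDK assemblies (not part of the BCL)
using Microsoft.Xrm.Sdk.Messages;

public static class UpsertSketch
{
    // 'service' is an already-authenticated IOrganizationService.
    public static void UpsertAccount(IOrganizationService service)
    {
        var account = new Entity("account");

        // Hypothetical alternate key defined on accountnumber; with a key match,
        // UpsertRequest avoids the extra existence-check query mentioned above.
        account.KeyAttributes["accountnumber"] = "ACC-1001";
        account["name"] = "Contoso Ltd.";

        var response = (UpsertResponse)service.Execute(new UpsertRequest { Target = account });

        // RecordCreated indicates whether the call resulted in a Create or an Update.
        Console.WriteLine(response.RecordCreated ? "Created" : "Updated");
    }
}
```

When no alternate key is available, a straight Create or Update as recommended above is typically the faster option.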


Best Practices when Writing Dynamics CRM Plugins

  • For improved performance, Microsoft Dynamics 365 caches plug-in instances. The plug-in’s Execute method should be written to be stateless because the constructor is not called for every invocation of the plug-in. Also, multiple system threads could execute the plug-in at the same time. All per invocation state information is stored in the context, so you should not use global variables or attempt to store any data in member variables for use during the next plug-in invocation unless that data was obtained from the configuration parameter provided to the constructor. Changes to a plug-ins registration will cause the plug-in to be re-initialized.
  • It's a best practice to check the target entity name and message name at the beginning of the plugin's Execute method to avoid running the plugin unintentionally.
  • When you want to update fields on a record, it's good practice to create a new entity (or early-bound type) for the record and add only the fields you want to update. By updating only the fields you are changing, you reduce the chance of triggering other plugins needlessly.
  • When retrieving an entity using the SDK, make sure you instantiate a new object (not just assign a reference) and assign the object returned from the Retrieve SDK message to it, for better performance.
  • Do not update the retrieve Target entity because it will update all fields included in the target entity.  The primary entity targeted by a platform create or update event should not be updated within the context of plug-in execution.  Developers should instead design their plug-in to execute in a stage prior to the core operation and manipulate the target object in the InputParameters.
  • The common method to avoid a recursive plugin is to check whether the plugin's depth > 1. This stops the plugin from running if it was triggered from any other plugin; it only runs when triggered from the CRM form. This can resolve the problem of plugins firing more than once, but it also stops plugins being triggered from other plugins, which might not be the functionality you require.
  • Never store the Organization service from the CRM execution context in a static variable, as this will lead to many OpenDataReader errors thrown by the CRM platform; the plugin/custom step must be kept stateless, as per the MSDN plugin best practices.
  • You need proper exception handling in your plugins to better troubleshoot unexpected behaviors. For synchronous plug-ins, you can optionally display a custom error message in the error dialog of the web application by having your plug-in throw an InvalidPluginExecutionException with the custom message string as the exception's Message property value. Before throwing the exception, it is good practice to log the error, its origin, and any other helpful information to your custom logging location.
  • If you throw InvalidPluginExecutionException and do not provide a custom message, a generic default message is displayed in the error dialog. It is recommended that plug-ins only pass an InvalidPluginExecutionException back to the platform.
  • Plug-ins should exist with others in a project and not be isolated. An example of an exception to this recommendation would be if a plug-in needed to be selectively deployed to an environment, whereas the others are not to be deployed.  There are two areas of impact for this observed pattern of a single plug-in per assembly:
    1. Performance – each plug-in assembly has a lifecycle that is managed by the CRM deployment, which includes loading, caching, and unloading.  Having more than one assembly containing plug-ins causes more work to be done on the server and could affect the time in which it takes for a plug-in to execute.
    2. Maintainability – having more than one project in Visual Studio can make it more difficult to manage.  It also adds additional steps when packaging a solution and managing deployments.

    Consider merging isolated plug-ins into a single Visual Studio project and assembly.

  • Use the NOLOCK hint for Microsoft Dynamics CRM QueryExpression and FetchXML requests against CRM entities that do not change frequently, such as configuration entities, for better query execution performance.
  • The target entity of an update plug-in contains only the updated attributes. However, the plug-in will often require information from other attributes as well. Instead of issuing a retrieve query, the best practice is to push the required data into the plug-in through a registered Pre-Image.
  • It's advised, for logging purposes, to use ready-made libraries like log4net or NLog, as they are optimized for concurrency and high-workload scenarios and come with many logging options and providers.
  • Avoid using batch request types like ExecuteMultipleRequest in plug-ins and workflow activities. Use these batch messages where code executes outside of the platform execution pipeline, such as integration scenarios where network latency would likely reduce the throughput and increase the duration of larger, bulk operations.
  • ExecuteMultiple and ExecuteTransaction messages are considered batch request messages. Their purpose is to minimize round trips between client and server over high-latency connections. Plug-ins either execute directly within the application process or in close proximity when sandbox-isolated, meaning latency is rarely an issue. Plug-in code should be very focused operations that execute quickly and minimize blocking to avoid exceeding timeout thresholds and ensure a responsive system for synchronous scenarios.
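A sketch tying several of the points above together (message/entity guard, depth check, stateless Execute, pre-image usage instead of a Retrieve, updating through a fresh entity rather than the Target, and throwing InvalidPluginExecutionException with tracing). The entity, fields, and business logic are hypothetical, and the Microsoft.Xrm.Sdk assemblies are assumed:

```csharp
using System;
using Microsoft.Xrm.Sdk;

// Kept stateless: no data is stored in member variables between invocations,
// because the platform caches plug-in instances and may run them concurrently.
public class AccountPostUpdatePlugin : IPlugin
{
    public void Execute(IServiceProvider serviceProvider)
    {
        var context = (IPluginExecutionContext)serviceProvider.GetService(typeof(IPluginExecutionContext));
        var tracing = (ITracingService)serviceProvider.GetService(typeof(ITracingService));

        // Guard: run only for the intended message and entity.
        if (context.MessageName != "Update" || context.PrimaryEntityName != "account")
            return;

        // Guard: stop re-entrancy when this plugin's own update re-triggers the pipeline.
        if (context.Depth > 1)
            return;

        try
        {
            var factory = (IOrganizationServiceFactory)serviceProvider.GetService(typeof(IOrganizationServiceFactory));
            IOrganizationService service = factory.CreateOrganizationService(context.UserId);

            var target = (Entity)context.InputParameters["Target"];

            // Read additional attributes from a registered pre-image instead of
            // issuing an extra Retrieve call.
            if (context.PreEntityImages.Contains("PreImage"))
            {
                var preImage = context.PreEntityImages["PreImage"];
                tracing.Trace("Account name before update: {0}",
                    preImage.GetAttributeValue<string>("name"));
            }

            // Update only the fields being changed: build a fresh Entity rather than
            // updating the Target, which would push back every attribute it contains.
            var update = new Entity("account") { Id = target.Id };
            update["creditlimit"] = new Money(10000); // hypothetical business rule

            service.Update(update);
        }
        catch (Exception ex)
        {
            // Log first, then surface a friendly message in the web client's error dialog.
            tracing.Trace("AccountPostUpdatePlugin failed: {0}", ex);
            throw new InvalidPluginExecutionException(
                "Account post-processing failed. Please contact your administrator.", ex);
        }
    }
}
```

Note this sketch is registered post-operation and issues its own Update; as mentioned above, a pre-operation step that manipulates the Target in InputParameters avoids the extra platform call entirely.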


I will try to keep this post updated with further tips and tricks as soon as I know them.