The tip below is from Senior Microsoft Premier Field Engineer Frank Plawetzki, who quickly and easily remediated an issue with an Exchange 2013 app pool that was preventing Outlook clients from connecting.
I named this article part 2 because I previously wrote an article covering similar symptoms, but it is important to note that the two articles are not a series: they cover unrelated, independent issues.
Symptoms
Recently at a customer site, several Outlook clients hung on startup and stayed at “Connecting” in the connection status window for several minutes.
After some time, the clients switched to connected, but how long that took varied from client to client. The clients in this environment were using Outlook Anywhere.
Troubleshooting
One way of getting to the bottom of this would certainly have been to check the IIS and Exchange log files related to Outlook Anywhere for entries for the affected users. That usually means either pointing a repro client at a specific CAS/MBX Exchange server chain, to avoid having to query a whole bunch of Exchange servers, or collecting the logs from many servers.
Since I could repro the issue with my own Outlook client in the customer environment, I decided to take a Fiddler trace instead. Fiddler is a tracing tool for inspecting HTTP traffic, and it can also inspect encrypted HTTPS traffic if you set the options correctly and insert the Fiddler-generated root certificate in a man-in-the-middle fashion. Fiddler can be found here: http://www.telerik.com/fiddler
In the Fiddler trace I could see that the Outlook client got an HTTP 500 error back when trying to access the Outlook Anywhere endpoint URL. From the X-FEServer response header shown in Fiddler, I could see which Exchange CAS my client was using, and therefore which server was throwing the 500 error.
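If you cannot run Fiddler, a quick probe of the Outlook Anywhere endpoint can surface the same two data points (HTTP status and X-FEServer). The following is a minimal sketch, not the author's method: the host name mail.contoso.com is a placeholder for your own namespace, and an unauthenticated request will normally return 401 on a healthy server rather than 200, but the X-FEServer header is still returned.

import requests

# Placeholder namespace - replace with your own Outlook Anywhere host name.
URL = "https://mail.contoso.com/rpc/rpcproxy.dll"

# A plain, unauthenticated GET is enough to see the HTTP status and the
# X-FEServer header: typically 401 on a healthy server (authentication is
# required), 500 on a server whose front end app pool is in the broken state.
response = requests.get(URL)

print("HTTP status:", response.status_code)
print("X-FEServer :", response.headers.get("X-FEServer", "<not present>"))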
Checking the Application event log on this server, I found it was full of the following error, logged every couple of seconds:
Log Name: Application
Source: MSExchange Front End HTTP Proxy
Date: 20.09.2016 09:42:00
Event ID: 1003
Task Category: Core
Level: Error
Keywords: Classic
User: N/A
Computer: E2013Server
Description:
[RpcHttp] An internal server error occurred. The unhandled exception was: System.InvalidCastException: Unable to cast object of type ‘RequestTimeoutEntry’ to type ‘System.Byte[]’.
at System.Net.Connection..ctor(ConnectionGroup connectionGroup)
at System.Net.ConnectionGroup.FindConnection(HttpWebRequest request, String connName, Boolean& forcedsubmit)
at System.Net.ServicePoint.SubmitRequest(HttpWebRequest request, String connName)
at System.Net.HttpWebRequest.SubmitRequest(ServicePoint servicePoint)
at System.Net.HttpWebRequest.BeginGetRequestStream(AsyncCallback callback, Object state)
at Microsoft.Exchange.HttpProxy.ProxyRequestHandler.<BeginProxyRequest>b__15()
at Microsoft.Exchange.Common.IL.ILUtil.DoTryFilterCatch(TryDelegate tryDelegate, FilterDelegate filterDelegate, CatchDelegate catchDelegate)
This particular front end app pool was suffering from a first-chance exception, meaning the app pool had failed in a way that left it no longer working correctly, yet it did not crash completely and get terminated.
Unfortunately, this is not a condition Managed Availability is able to catch, so an Exchange administrator has to monitor the event logs and react to those errors; a simple query for them is sketched below.
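As a rough illustration (an assumption on my part, not a supported monitoring solution), the built-in wevtutil tool can be used to look for the 1003 errors from the MSExchange Front End HTTP Proxy source on each CAS. The server list and event count below are illustrative; E2013Server is the server name from the event above.

import subprocess

# XPath filter for the errors shown above.
QUERY = ("*[System[Provider[@Name='MSExchange Front End HTTP Proxy'] "
         "and (EventID=1003)]]")

def recent_1003_events(server, count=5):
    """Return the newest matching Application log events as text."""
    cmd = [
        "wevtutil", "qe", "Application",
        f"/q:{QUERY}",
        f"/c:{count}",
        "/rd:true",      # newest events first
        "/f:text",
        f"/r:{server}",  # query a remote computer
    ]
    result = subprocess.run(cmd, capture_output=True, text=True)
    return result.stdout

for cas in ["E2013Server"]:   # replace with your own CAS servers
    if recent_1003_events(cas).strip():
        print(f"{cas}: recent 1003 errors found - consider recycling "
              "MSExchangeRpcProxyFrontEndAppPool")
    else:
        print(f"{cas}: no recent 1003 errors")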
Resolution
The resolution for this issue is quite simple: start IIS Manager on the affected Exchange server and recycle the MSExchangeRpcProxyFrontEndAppPool. After a couple of seconds the event log will stop logging those 1003 errors, and clients that the hardware load balancer refers to the affected CAS server will once again be able to connect quickly and without issues.
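If you prefer to script the recycle rather than use IIS Manager, the same step can be driven through the built-in appcmd.exe. This is a minimal sketch of that idea, to be run locally on the affected CAS with administrative rights, not an official remediation script.

import subprocess

# Default location of appcmd.exe on an Exchange 2013 server.
APPCMD = r"C:\Windows\System32\inetsrv\appcmd.exe"

# Recycle the front end RPC proxy app pool described in this article.
subprocess.run(
    [APPCMD, "recycle", "apppool",
     "/apppool.name:MSExchangeRpcProxyFrontEndAppPool"],
    check=True,
)
print("App pool recycled - watch the Application log to confirm the "
      "1003 errors stop.")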
Posted by MSPFE editor Rhoderick Milne from 36,000 feet. Whilst glad for Internet connectivity on a plane he rues the day when maniacs can make phone calls…