7/25/13

The Great Migration to Exchange 2013

We upgraded from Exchange 2007 to Exchange 2013 this week. We had wanted to upgrade almost since the day we installed Exchange 2007. Not because it was bad, but because we installed it only a few months before Exchange 2010 was RTM (long story for another blog post). We are also hoping that the greatly enhanced Outlook Web App will help with our volunteers and BYOD users.

The 2003-to-2007 migration was a big project, and I did it all myself last time. There were many unforeseen issues that took a lot of time to resolve. It was not something I wanted to repeat without help. This time I enlisted the services of a friend and professional who was experienced with this kind of thing. Ed Buford is someone I have known for a while; he works for Pinnacle of Indiana.

I did what I could to cut down on the consulting costs: upgraded our servers to VMware 5, downloaded and installed Server 2012 Datacenter, updated said server, and downloaded Exchange 2013 CU2. Ed helped with the more technical stuff, like prepping Active Directory, checking the configuration on Exchange 2007, and installing and configuring Exchange 2013 using best practices (better than mine; since Exchange 2013 is still really new, some things were kind of buggy).
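For anyone doing the Active Directory prep themselves, it comes down to the Exchange 2013 setup switches below. Treat this as a sketch; you need the right permissions (Schema Admins / Enterprise Admins) and your own media path.

```powershell
# Run from the root of the Exchange 2013 CU2 installation media.

# Extend the AD schema.
.\Setup.exe /PrepareSchema /IAcceptExchangeServerLicenseTerms

# Prepare the forest and create the Exchange security groups.
.\Setup.exe /PrepareAD /IAcceptExchangeServerLicenseTerms

# Prepare every domain that will host mail-enabled users.
.\Setup.exe /PrepareAllDomains /IAcceptExchangeServerLicenseTerms
```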

Then came the migration. We had planned to do this on a Sunday afternoon because for us that is the lightest day of the week and would have the least impact on our staff. Ed and I figured it would take 4-6 hours to transfer our 260 mailboxes. We were wrong. After transferring just my mailbox as a test (which was about 4 GB) we saw a problem. It was only transferring at about 11 Mbps and took about an hour. The Exchange 2007 server was only pushing a few percent on CPU and network, but the Exchange 2013 server was almost maxing out its two vCPUs. I decided to move it to a different host and try again, this time with 4 vCPUs. For this batch we ran about 10 accounts and found that it would only transfer about 4 to 6 at a time. All 4 vCPUs were maxed out. After that batch I shut down and added two more (this host only had 8 cores). The next batch of 50 mailboxes went a little faster and would transfer 6 to 8 boxes at a time. We figured this was as good as it would get, so we queued up a few more batches and called it a night.
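Batches like these are driven by move requests in the Exchange Management Shell. A minimal sketch of one batch (the database names and batch name here are placeholders, not our real ones):

```powershell
# Queue 50 mailboxes from the old 2007 database to the new 2013 database.
Get-Mailbox -Database "DB2007" -ResultSize 50 |
    New-MoveRequest -TargetDatabase "DB2013" -BatchName "SundayBatch1"

# Watch progress and per-mailbox stats for the batch.
Get-MoveRequest -BatchName "SundayBatch1" | Get-MoveRequestStatistics
```

The mailbox server throttles how many moves run concurrently, which is why we only ever saw 4-8 mailboxes transferring at once.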

In the morning we saw that the boxes that had moved were accessible by OWA and ActiveSync. Boxes that hadn't moved yet were only accessible by Outlook, since we had changed our autoconfigure settings and certificates. I queued up the last batch but saw that we had a problem. They weren't moving. After some exploring we found that restarting the Exchange Transport service got things going again (even though it reported as running). We figured it was a fluke and continued with the migration. By the end of the day, all mailboxes were transferred, ActiveSync devices were syncing, Outlook was connecting, and Outlook Web App was working great.

The next day I discovered that at some point in the night/morning, we had stopped getting outside email. Our hosted Barracuda was saying that it had delivered a lot of messages, but where did they go? We still had email routing through our old server, so I checked there and saw that we had 1,590 messages waiting to be delivered!! We restarted the Exchange Transport service on Exchange 2013 and watched as all of the messages were delivered in about 15 seconds. After this we moved inbound email over to the new server and thought we were done. Nope. This would continue to happen again and again over the next few days, but now messages would spool on the Barracuda (thankfully we use it for exactly this reason!) and then deliver once we restarted the Transport service. Ed found that others on the web were experiencing this same issue outlined here.
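If you hit the same stalled-transport symptom, the check-and-restart routine boils down to something like this (a sketch; run it on the Exchange 2013 server):

```powershell
# See whether mail is piling up in the transport queues.
Get-Queue | Sort-Object MessageCount -Descending

# The service reports as Running even when it is stuck,
# so restart it outright and watch the queues drain.
Restart-Service MSExchangeTransport
```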

After deleting the old receive connectors as people in the above TechNet thread suggested, and keeping only the default ones, it looks like the issue is gone. The problem is, I NEED those connectors. So I'm going to add them back one by one and see what happens.
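For reference, pruning back to the defaults and reintroducing connectors one at a time looks roughly like this in the Exchange Management Shell (the server name, connector name, and IP range are hypothetical, not our actual config):

```powershell
# List all receive connectors on the server to see which are custom.
Get-ReceiveConnector -Server "EX2013"

# Remove a custom connector suspected of causing the stalls.
Remove-ReceiveConnector "EX2013\Relay from copiers" -Confirm:$false

# Later, add it back and watch whether the issue returns.
New-ReceiveConnector -Name "Relay from copiers" -Server "EX2013" `
    -TransportRole FrontendTransport -Bindings 0.0.0.0:25 `
    -RemoteIPRanges 192.168.1.0/24 -Usage Custom
```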

So, in summary, these are the things I learned about migrating to Exchange 2013:

  1. Communicate well with your users. I'm not sure if you can over-communicate, but you want to get as close as you can. No matter what you do, there will still be those saying "Oh, was that today?". Since email would be down for some users during and after the transition, we set up a blog they could go to for updates and directions.
  2. Plan for a lot of time. More than you think you'll need.
  3. Make sure your new Exchange server has plenty of processors (at least for the transition; you can drop a few after that). More processors = faster migration.
  4. Be prepared for something to go wrong. In our case, we already had outside help queued up. If you’re doing this solo you should definitely do some more tests before the big “Moving Day”.
  5. Plan for issues with certificates and namespaces if you're changing those in any way with your migration (you probably will). Android devices seemed to have the most trouble with this, since they all handle things a little differently depending on their OS version and device model. iOS devices were pretty predictable. Once we knew what change we had to make, they were all the same.

I’ll try to update this blog in the coming weeks as we get used to Exchange 2013 and also note any issues and how we resolved them.

Issues

  • Users accessing our OWA site via HTTP are not being redirected to HTTPS. We have tried just about everything we can and it still won’t work. If you have figured this out, let me know! Please!
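One commonly documented approach, in case it helps anyone with the same problem, is an IIS HTTP redirect on the Default Web Site with the SSL requirement relaxed at the site root (the OWA and ECP virtual directories should keep requiring SSL, and any redirect settings that inherit down to them have to be cleared). A sketch using the WebAdministration module; the hostname is a placeholder:

```powershell
Import-Module WebAdministration

# Allow plain HTTP at the site root so the redirect can fire at all.
Set-WebConfigurationProperty -PSPath "IIS:\Sites\Default Web Site" `
    -Filter "system.webServer/security/access" -Name sslFlags -Value "None"

# Redirect HTTP requests to the OWA URL.
Set-WebConfigurationProperty -PSPath "IIS:\Sites\Default Web Site" `
    -Filter "system.webServer/httpRedirect" -Name enabled -Value $true
Set-WebConfigurationProperty -PSPath "IIS:\Sites\Default Web Site" `
    -Filter "system.webServer/httpRedirect" -Name destination `
    -Value "https://mail.example.org/owa"
```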

4/12/13

Simulcast 2.0

Back in 2010 I told you about how The Chapel had been using the latest HD video technology to be "one church in many locations". We are still a multi-site church and have grown from our initial 4 video campuses to 8 campuses by 2012 (5 with live video & 3 with "tape"). Our current video simulcast solution was working great for the 5 campuses that were on our fiber network, but we had not moved the other 3 to live video for a number of reasons.

Some of the problems with our current solution:

  • Expensive – Joining another campus to our fiber network to enable them to have live video would require thousands of dollars of network equipment and thousands more to get the fiber into the building and terminated.
  • Flexibility – Signing a 3-year contract on fiber is about as fun as taking out a mortgage on a house (and hurts the pocketbook about the same!). The Internet and network world is constantly changing, so why get locked into something?
  • Scalability – To date we had been lucky that all our locations were within the same Chicago metro area and could be serviced by AT&T's Opt-E-Man product. But what happens if we want to cross the border into Wisconsin or downstate Illinois? Or another state or country? Our telco broker warned us that it could be an expensive problem in the future, as it would most likely require an even more costly circuit.

I knew that I didn't want to get caught off-guard when it came time for renewal in 2013 and be forced to re-sign. So I started working with our telco broker early in 2012, and I am glad I did! We heard pitches from several of the top vendors, and to my surprise most were even more expensive than AT&T!

Throughout this time I had been talking with Chris Kehayias at Calvary Chapel Melbourne, who introduced me to Zixi at the Church IT Round Table event they hosted in 2011. Zixi can best be described as a transport service specializing in video. Zixi ensures that my video gets from point A to point B without dropping a packet. Chris and others had been using it to deliver video in a point-to-multipoint church environment with amazing results. After getting a demo set up in our environment, we were sold! The great thing about how Zixi works is that we really didn't have to change our workflow, encoders/decoders, or even bitrate. Zixi just "dropped in" seamlessly.

We have now moved most of our receive campuses to Zixi but still maintain a fiber connection between our two broadcast campuses, Grayslake and Libertyville. Our receive campuses each have their own 27/7 Mbps Comcast coax Internet connection. So far this has worked great for us and is able to keep up with our two HD video feeds running at around 16 Mbps. Our send site uses a 40/40 Mbps Comcast fiber Internet connection for the upload.

We had to make some network changes, since we were going from a fiber point-to-point system to a VPN system. For that we are using SonicWall TZ-200s & TZ-205s at our receive sites and an NSA-240 at our send site. The NSA-240 seems to handle the video just fine but is struggling to keep up with other traffic, so we are in the process of upgrading. The TZ-200s & 205s are doing just fine at the receive sites, though.

Even with these changes we should see a 43% yearly reduction in our simulcast and network cost! We are now able to get all our campuses live video for less than we were paying for just 5 in the previous model. We are also able to set up a video campus anywhere we have a decent Internet connection.

Earlier this year at the Spring National Church IT Roundtable event, I gave a "Ten Talk" about Zixi and most of what I detailed above. You can check out the video here on YouTube.

Also, this is a link to the presentation slides.

If you have any comments or questions, leave them below or catch me on Twitter.