Deploying resource files across a farm

I’ve been looking a bit more into the easiest way to deploy resource files (.RESX) to the App_GlobalResources directory, and I found various solutions that include Features and Timer Jobs to synchronize these resource files.  Mikhail Dikov has a very good article explaining the different processes and types of resources.


However, I find that having features and timer jobs copy files around isn’t very elegant, and it’s also impractical when you add a farm server, when you extend a web application (new zone), and so on.  While we could still develop something around it, I couldn’t find an elegant solution.


In fact, I think the only truly elegant solution would be for the WSP to deploy App_GlobalResources correctly, and that would be the end of it.  Note that you can use a WSP to deploy resources to the 12 hive for all Web Applications, and also “feature resources”.  Since I didn’t find a way to get that elegant solution, I went back to asking myself “How did we do it with out-of-the-box Publishing Sites?”.  The answer is simple: all resources in the …12\Resources directory are copied into the App_GlobalResources directory of each web application during the standard Web Application creation (or extension) process.


This means that simply copying your resource files into the 12\Resources directory will automatically deploy your resources when you create a web application, extend a web application, and even when you add a server to the farm.  The next question was “How do I update my resource files?”.  First of all, you will need to upgrade your WSP with the updated resource files, which copies the RESX files into the 12\Resources directory again.  After this, you will need to run “STSADM -o CopyAppBinContent” on all servers in the farm.
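The update sequence above can be sketched as a small script that builds the command lines to run. This is a plain-Python sketch, not the post’s actual tooling: the solution name (MyResources.wsp) and the use of PSEXEC for the per-server step are illustrative assumptions.

```python
def build_update_commands(servers, wsp_name="MyResources.wsp"):
    """Return the command sequence to push updated RESX files farm-wide.

    The WSP name is a hypothetical example; copyappbincontent is per-server,
    so it is run remotely on each farm server (here via PSEXEC).
    """
    commands = [
        # Re-deploy the solution so the RESX files land in 12\Resources again.
        f"stsadm -o upgradesolution -name {wsp_name} "
        f"-filename {wsp_name} -immediate -allowgacdeployment",
    ]
    for server in servers:
        commands.append(f"psexec \\\\{server} stsadm -o copyappbincontent")
    return commands

if __name__ == "__main__":
    for cmd in build_update_commands(["WFE1", "WFE2"]):
        print(cmd)
```

Running this against a two-server farm prints the upgrade command followed by one copyappbincontent invocation per web front end.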


Now, there are two things I find “less elegant” in this solution, one that is fixable and one that depends on your architecture:

  1. First, running that command on all servers isn’t fun for your automated deployment scripts, since it requires either doing it manually on each server or using a remote execution tool (like PSEXEC) against all of them.  You would either use a CMD script with a hard-coded list of servers (not very dynamic) or create a command that enumerates all farm servers.  I’ll outline a solution a bit further down.
  2. The second depends on your architecture.  If you only have one portal, it’s fine to deploy resources centrally.  If you have more than one, it depends on whether you mind all resources being available on all web applications.  In an Intranet scenario, that’s probably very acceptable.  If you are an ISP, I would stay away from this solution altogether; you will unfortunately have to resort to a less interesting approach.


What I ended up creating to execute the STSADM command remotely on all servers was, of course, another STSADM extension :)!  I’ll soon create a post with the outline of this extension (and add the link here).  For now, the extension does the following:

  1. Accepts a quoted parameter containing the command to be executed on all servers.  I also noticed that you cannot have a parameter that includes “-o” between quotes, so I might modify my command to accept an input file listing the command to execute on all servers (just to be safe and catch all parameters).
  2. Retrieves the list of servers, excluding those whose server role is “Invalid” (like the database server).
  3. For each server, adds an entry to the Central Administration “Administrator Tasks” list: the title describes the command and includes the server name, and the description contains the command to execute.
  4. For each server, creates a Timer Job with an SPOneTimeSchedule.  This essentially executes the job immediately on each server, and the jobs automatically delete themselves afterwards.
  5. On each server, when the Timer Job kicks in, it goes back to the Central Administration, reads the Administrator Tasks list, finds the entry with the “local” server name, reads the Description column, and executes the command defined there.

The command also checks that there isn’t already a timer job or task with the same name before executing.  Finally, if the command doesn’t execute successfully on a server, the task isn’t deleted and stays in the Administrator Tasks list, which is shown on the first page of the Central Administration.  This gives you a visual indication that the task didn’t execute correctly.
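To make the orchestration above concrete, here is a plain-Python simulation of the logic; the real extension uses the SharePoint object model (SPFarm, SPOneTimeSchedule, the Administrator Tasks list), so the dictionaries, role strings, and the executor callback here are illustrative assumptions only.

```python
def schedule_farm_command(servers, command, task_list):
    """Steps 1-4: filter out 'Invalid'-role servers and create one
    Administrator Task per remaining server (skipping duplicates)."""
    targets = [s["name"] for s in servers if s["role"] != "Invalid"]
    for name in targets:
        title = f"Execute on {name}: {command}"
        if title not in task_list:  # don't queue the same task twice
            task_list[title] = {"server": name, "description": command}
    return targets

def run_local_job(local_server, task_list, execute):
    """Step 5: the per-server timer job finds its own task by server name,
    runs the command from the Description, and deletes the task on success.
    On failure the task stays visible as a flag in Central Administration."""
    for title, task in list(task_list.items()):
        if task["server"] == local_server:
            if execute(task["description"]):
                del task_list[title]  # success: remove the Administrator Task
            return title
    return None
```

A quick dry run: scheduling against three servers (one with an “Invalid” role) creates two tasks, and after one server’s job fails, its task remains in the list as the failure indicator.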




