1. Gather Requirements
I always begin by identifying the needs of the users, which cover both configurations and information architecture. When I work with the users to define the information architecture, I prefer tools that are simple and efficient at gathering this data. For information that is tabular in nature I use Excel, while either Excel or an XML editor works well for hierarchical information. Excel is great because most business users already have a basic understanding of how to fill in the sheets and use drop-down choices.

The information architecture components I gather are:
- Taxonomy (Managed Metadata)
- Site Columns
- Content Types
- Lists & Libraries
- Views
I work with the users to fill in one of these files for each term group. Next, for the site columns, I capture the answers to the following questions in a spreadsheet:
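Once a term-group workbook is filled in, it can be consumed by a script rather than keyed in by hand. A minimal sketch, assuming the workbook is exported to CSV with TermGroup, TermSet, and Term columns (those header names are my invention) and that the PnP PowerShell module is available; cmdlet names and parameters may vary by PnP version:

```powershell
# Sketch: replay a term-group CSV into the term store with PnP PowerShell.
# Site URL and CSV layout are illustrative assumptions.
Connect-PnPOnline -Url "https://contoso.sharepoint.com/sites/master" -Interactive

Import-Csv .\Taxonomy.csv | ForEach-Object {
    # A first pass simply creates each level; production scripts should
    # check for existing groups/sets before creating them.
    New-PnPTermGroup -Name $_.TermGroup -ErrorAction SilentlyContinue | Out-Null
    New-PnPTermSet  -Name $_.TermSet  -TermGroup $_.TermGroup -ErrorAction SilentlyContinue | Out-Null
    New-PnPTerm     -Name $_.Term     -TermSet $_.TermSet -TermGroup $_.TermGroup
}
```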

- What should the name of the column be?
- How are the columns grouped?
- Is the column required?
- What type of data will the column hold?
- Is it a multiple choice column, and if yes, what are the choices for the column?
To reduce errors, I present the Required, Type, and Multivalue columns as drop-downs for the user. I calculate the InternalName column by removing all spaces and special characters, and I then generate a GUID for each column. Providing the GUID ensures that, in every deployment, the column can be referenced by any part of the site using the same ID.
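The InternalName and GUID values can be generated rather than typed by hand. A small PowerShell sketch, assuming the sheet has been exported to CSV with a Name column (file and header names are assumptions, not part of the original workbook):

```powershell
# Sketch: derive InternalName by stripping everything that is not a letter
# or digit, and attach a stable GUID per column for cross-deployment reuse.
$columns = Import-Csv .\SiteColumns.csv   # assumed export of the Excel sheet

foreach ($col in $columns) {
    $col | Add-Member NoteProperty InternalName ($col.Name -replace '[^A-Za-z0-9]', '')
    $col | Add-Member NoteProperty Id ([guid]::NewGuid().ToString())
}

$columns | Export-Csv .\SiteColumns-enriched.csv -NoTypeInformation
# e.g. a column named "Project Start Date" gets InternalName "ProjectStartDate"
```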

Next, I collect the content type data. As with the columns, I use an Excel document to capture the name, the group, the field names, and whether each field is required.
However, because content types have a hierarchical structure, I prefer to convert them to XML, which simplifies the PowerShell import code.
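As an illustration, the XML follows the shape of SharePoint's content type definition schema; the IDs and field names below are invented placeholders, not values from any real site:

```xml
<!-- Sketch of one converted content type; IDs here are placeholders -->
<ContentType ID="0x0100A1B2C3D4E5F60718293A4B5C6D7E8F90"
             Name="Project Document"
             Group="Contoso Content Types">
  <FieldRefs>
    <FieldRef ID="{11111111-2222-3333-4444-555555555555}"
              Name="ProjectName"
              Required="TRUE" />
  </FieldRefs>
</ContentType>
```

Because the FieldRef IDs reuse the GUIDs assigned in the site column sheet, the same column is bound to the content type in every deployment.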
After the content types are defined, I gather the specifications for the lists, libraries, and views. I capture only specific settings about the lists, such as whether versioning and check-in/check-out should be enabled. It is relatively easy, however, to use PowerShell to apply additional information architecture settings: just capture the extra information in Excel and extend your import code to apply it.
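Applying those captured list settings is then a short loop with PnP PowerShell. A sketch, assuming a CSV export with Title, Template, Versioning, and CheckOut columns (all illustrative names; verify parameter names against your PnP version):

```powershell
# Sketch: create each list and apply the captured versioning/check-out flags.
Connect-PnPOnline -Url "https://contoso.sharepoint.com/sites/master" -Interactive

Import-Csv .\Lists.csv | ForEach-Object {
    # Template values such as GenericList or DocumentLibrary
    New-PnPList -Title $_.Title -Template $_.Template -ErrorAction SilentlyContinue
    Set-PnPList -Identity $_.Title `
                -EnableVersioning ([bool]::Parse($_.Versioning)) `
                -ForceCheckout    ([bool]::Parse($_.CheckOut))
}
```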
Capturing information about the configurations is a bit more involved as the available options for the configurations vary significantly.
2. Build Master Site
Now that I’ve gathered all the necessary information from the users, I begin to build my master site. As the first step, I create an empty site and then apply all the configurations to it using PowerShell scripts that match my requirements. The settings include:
- Turn features on/off
- Set the global and site navigation parameters
- Modify regional settings
- Configure search settings
- Taxonomy
- Site columns and Taxonomy Columns
- Content Types
- Subsites
- Lists and libraries
- Views
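The configuration pass over the empty site can be sketched with a few PnP PowerShell calls; the feature GUID and navigation node below are placeholders, and each configuration area gets its own small, idempotent script in practice:

```powershell
# Sketch: configure the empty master site (placeholder values throughout).
Connect-PnPOnline -Url "https://contoso.sharepoint.com/sites/master" -Interactive

# Turn a site-scoped feature on (the all-zero GUID is a placeholder,
# not a real feature ID)
Enable-PnPFeature -Identity "00000000-0000-0000-0000-000000000000" -Scope Site

# Add a global navigation link
Add-PnPNavigationNode -Location TopNavigationBar -Title "Projects" `
                      -Url "/sites/master/projects"

# Regional and search settings follow the same pattern: one script per area,
# driven by the values captured in the requirements workbook.
```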
3. Build Template Sites
I can now extract a provisioning template from the master site and deploy it over and over to create new sites with the same configurations and content. Typically, I’ll use the template to create the QA and production environments. If I haven’t introduced any exotic customizations, I could even apply the same template in an on-premises environment. Finally, I deploy the content to the sites.

NOTE: It is possible to create the provisioning templates manually, as they are simple XML files. However, I find that a bit risky, as any error could cause the entire template deployment to fail.
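Assuming the PnP provisioning engine, which matches the XML provisioning templates described here, the extract-and-replay cycle comes down to two cmdlets (newer PnP versions use Get-PnPSiteTemplate / Invoke-PnPSiteTemplate; older ones use Get-PnPProvisioningTemplate / Apply-PnPProvisioningTemplate):

```powershell
# Sketch: capture the master site as a provisioning template...
Connect-PnPOnline -Url "https://contoso.sharepoint.com/sites/master" -Interactive
Get-PnPSiteTemplate -Out .\MasterSite.xml

# ...then replay it onto a fresh QA or production site.
Connect-PnPOnline -Url "https://contoso.sharepoint.com/sites/qa" -Interactive
Invoke-PnPSiteTemplate -Path .\MasterSite.xml
```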
Benefits of using this approach
I’m sure many readers can weigh in on this article and share their own perspectives and experiences on deploying sites. That’s great! By no means am I implying that this method is better than others; it just works well for me. A few benefits I’ve seen with this approach:
- Gathering user requirements is easy. Depending on the comfort level of my users, they can provide much of the information without my direct involvement.
- It’s very easy to make changes to the information architecture and redeploy a new master site
- By building the master site and using its template to create the QA and production environments, I am, in essence, testing the deployment process