Setup SharePoint 2013 Single Server Environment – Part 1: Environment Details

Intro

This series describes the details and required steps for setting up a SharePoint 2013 single server environment. This type of environment is typically used by developers to write custom solutions without interfering with one another, and it is also well suited for evaluation, training, and demonstration purposes.
The series contains 6 parts:
  1. Environment Details
  2. VM setup
  3. Windows Server 2012 setup
  4. SQL Server setup
  5. SharePoint Installation and Initial Configuration
  6. Post-Installation Steps

Environment Details

This article contains the environment details, including hardware requirements, minimum recommended services, minimum software, and required service accounts.

Hardware Requirements

  • Processor: 4 cores, 64-bit
  • RAM: 10GB
  • Hard-Drive Space: 100GB

Minimum Recommended Services for Development Environment

  • App Management Service Application
  • Central Administration Website
  • Claims to Windows Token Service (C2WTS)
  • Distributed Cache Service
  • Microsoft SharePoint Foundation Subscription Settings Service
  • Secure Store Service
  • User Profile Service Application (SP 2013 only)
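
Once SharePoint is installed and configured (parts 5 and 6 of this series), these service instances can be reviewed from Central Administration or with PowerShell. The following is a minimal sketch, assuming it is run in the SharePoint 2013 Management Shell on the single server:

# Load the SharePoint snap-in when not running in the Management Shell
Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

# List the service instances on the local server with their current status
Get-SPServiceInstance -Server $env:COMPUTERNAME |
    Sort-Object TypeName |
    Format-Table TypeName, Status -AutoSize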

Minimum Software

Optional Extra Software

  • Microsoft SharePoint Designer 2013
  • Google Chrome
  • Firefox
  • ULSViewer

Service Accounts

SQL Service Account (sp_sql)

Purpose: used to run the SQL Server services:
  • MSSQLSERVER
  • SQLSERVERAGENT

Requirements:
  • Either a Local System account or a domain user account.

SharePoint Setup Account (sp_admin)

Purpose: used to run the following:
  • Setup
  • SharePoint Products Configuration Wizard

Requirements:
  • Domain user account.
  • Member of the Administrators group on each server on which Setup is run.
  • SQL Server login on the computer that runs SQL Server.
  • Member of the following SQL Server roles:
    • securityadmin
    • dbcreator

Server Farm Account or Database Access Account (sp_farm)

Purpose: used to perform the following tasks:
  • Configure and manage the server farm.
  • Act as the application pool identity for the SharePoint Central Administration Web site.
  • Run the Microsoft SharePoint Foundation Workflow Timer Service.

Requirements:
  • Domain user account.
  • Additional permissions are automatically granted to the server farm account on Web servers and application servers that are joined to a server farm.
  • The server farm account is automatically added as a SQL Server login on the computer that runs SQL Server, and is added to the following SQL Server security roles:
    • dbcreator
    • securityadmin
    • db_owner for all SharePoint databases in the server farm
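
If domain accounts are used for all three, they can be created ahead of time with PowerShell. The following is a minimal sketch, assuming the ActiveDirectory RSAT module is available; the OU path is hypothetical and should be adjusted for your domain:

# Requires the ActiveDirectory module on a machine joined to the domain
Import-Module ActiveDirectory

# Prompt once for the password to assign to the new service accounts
$password = Read-Host "Service account password" -AsSecureString

# Account names match the list above; the OU path below is hypothetical
foreach ($name in "sp_sql", "sp_admin", "sp_farm") {
    New-ADUser -Name $name `
               -SamAccountName $name `
               -AccountPassword $password `
               -Enabled $true `
               -PasswordNeverExpires $true `
               -Path "OU=Service Accounts,DC=contoso,DC=local"
}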

 

VM Setup >>

Reference

Install and Configure SharePoint 2013
Install SharePoint 2013 on a single server with SQL Server


Tools and techniques for Agile teams

The following list is a compilation of practices used by teams following an agile development process. It is not an exhaustive list, and I intend to update it periodically, but it should represent the most common practices adopted and exercised in the field.

Requirements

  • Product and Sprint Backlogs
  • Kanban Boards
  • User Stories
  • DoD (Definition of Done)
  • Acceptance Criteria

Estimation

  • Sprint Planning
  • Story Points
  • Planning Poker

Feedback

  • Burnup and burndown charts
  • Daily Stand-up
  • Sprint Review
  • Retrospective Analysis
  • Automated Tests
  • Continuous Integration
  • Code Reviews

Quality & Productivity

  • Source Control
  • Pair Programming
  • Automated Testing
  • Test-Driven Development
  • Continuous Integration
  • Design Patterns
  • Code Refactoring
  • Product Increment

Project Roles

  • Scrum Master
  • Product Owner
  • Cross-Functional team

Template for Retrospective Analysis used by Agile Teams

Intro

The following is one of the principles described in the Agile Manifesto:

Continuous attention to technical excellence and good design enhances agility.

One technique to materialize this principle is to conduct a retrospective analysis after each iteration. This allows the team to identify issues and reflect on how to become more effective, then fine-tune its behavior accordingly and improve the process. The goal is always to become better at what we do and deliver the best possible results, while keeping all team members engaged.

I will share a very simple template that could be used to gather data about a previous iteration of work from all team members. This could serve as a guide for a productive discussion.

Retrospective Analysis Template

The following template can be used to collect some information from team members after an iteration of work:

  • What went well?
    In other words, what took place that we want to recognize and acknowledge as having gone well?
  • What didn’t go so well?
    What took place that didn’t go as we would have liked?
  • What did I learn?
    We want to think about what we’ve learned about how we worked together. It’s less about the content and more about the process.
  • What still puzzles me?
    This question allows us to capture things that happened but left us feeling unclear or puzzled.

Summary

Continuous improvement is a key aspect of any Agile project. Conducting a retrospective analysis allows teams to gather feedback after each iteration and identify which areas need attention. There are many recommended approaches and frameworks for conducting a successful retrospective session, and they all focus on identifying three things:

  • what went well
  • what did not go well
  • what actions we should take to improve the process

This article provides a specific template that can be used to request information from team members after each iteration and then serve as support for an in-person session.

Reference

Agile Retrospective by Margaret Rouse

Agile Manifesto

Agile Retrospective Wiki

Troubleshooting SOAP exceptions from ASP.NET service

Intro

I’ll describe an integration issue I had to resolve so that a separate team consuming one of the web services my team created could consistently handle known custom exceptions.

Issue

Here is a brief description of the scenario:

  • Team A created an ASP.NET web service to enable interaction with the corporate Yammer network.
  • Team B leverages the service to provide social capabilities to some LOB applications.
  • In several scenarios, the service raises custom exceptions, e.g. user not found, user is inactive, user cannot be null, etc.
  • Team B handles those scenarios by catching the custom exceptions raised by the service and implementing fallback logic.

  • Team B reported that under the same conditions (same code base and same arguments) they were able to successfully capture the custom exceptions in the Integration Environment, but they were unable to do so in Test and Staging environments.
  • The service logic implements something like this:

[WebMethod]
public string SomeMethod(string arg1)
{
    // Custom exception raised by the service logic
    throw new Exception("Something bad happened");
}
  • If the service is consumed from the Integration Environment, the result looks like this:

<soap:Fault>
   <faultcode>soap:Server</faultcode>
   <faultstring>System.Web.Services.Protocols.SoapException: Server was unable to process request. ---&gt; System.Exception: Something bad happened at AYS17Sept2002.Service1.CallFault() in c:\inetpub\wwwroot\AYS17Sept2002\Service1.asmx.vb:line 49
   --- End of inner exception stack trace ---</faultstring>
   <detail />
</soap:Fault>
  • If the service is consumed from the TST or STG environments, the result looks like this:

<soap:Fault>
    <faultcode>soap:Server</faultcode>
    <faultstring>Server was unable to process request. --&gt; Something bad happened</faultstring>
    <detail />
</soap:Fault>
  • Team B was unable to catch the expected custom exception in the TST and STG environments, so the code never triggered the fallback logic.

Cause

By looking at the results, and more specifically at the error messages, it is clear that the same custom exception is raised by the service but is not delivered consistently to the client. After some research I found the MSDN article that explains this behavior; it comes down to the way the ASP.NET framework handles SOAP exceptions.

Whenever the code within a web service raises an exception, ASP.NET catches that exception and transforms it into a SOAP Fault. Depending on the /configuration/system.web/customErrors/@mode setting in the web.config, the SOAP exception will contain more or less information:

  • The On setting tells ASP.NET not to add stack trace information to any faults.
  • RemoteOnly causes the extra information to show up for clients on the same server and hides the information from remote users.
  • The Off setting tells ASP.NET to add the extra information for all calls to the web service.

This is the same behavior for ASP.NET web pages.
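
A quick way to compare this setting across environments is to read it straight from the deployed web.config. The snippet below is only a sketch; the path is hypothetical and should point to the service's actual location in each environment:

# Hypothetical path to the deployed service's web.config
$configPath = "C:\inetpub\wwwroot\YammerService\web.config"

# Load the file as XML and print the current customErrors mode (On, Off or RemoteOnly)
[xml]$config = Get-Content $configPath
$config.configuration."system.web".customErrors.mode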

Resolution

So after verifying this setting in all environments, I confirmed that <customErrors> was set to “Off” in the Integration environment and to “On” in all other environments, which makes sense from a deployment perspective.

With this in mind, the next step was to coordinate between the two teams on how to handle the custom exceptions consistently.

We assessed several options, each one with pros and cons:

  1. Modify the web service logic to add additional details to the SOAP exception.
  2. Instead of raising a custom exception from the web service, return a response code that can be interpreted and handled appropriately by the consuming application.
  3. Create a separate web.config file just for the service, set its <customErrors> mode to “Off”, and deploy it to all environments (a rough sketch follows below).
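
As a rough sketch of option 3 (reusing the hypothetical path from the earlier snippet, and assuming the service folder has its own web.config), the override could be scripted as part of the deployment:

# Hypothetical path to the service-specific web.config
$serviceConfig = "C:\inetpub\wwwroot\YammerService\web.config"

# Force detailed SOAP faults by setting the customErrors mode to Off, then save the file
[xml]$config = Get-Content $serviceConfig
$config.configuration."system.web".customErrors.SetAttribute("mode", "Off")
$config.Save($serviceConfig)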

Reference

Using SOAP Faults

How to get the bytes from a file and convert them to Base64String in PowerShell

Introduction

In this post I’ll share a PowerShell script that simply reads the bytes from a given file, converts those bytes to a Base64String, and then saves it to a text file.

A little background about why I needed this

I needed to troubleshoot a web service method and I wanted to use SOAP UI as the testing framework. The service method took a byte[] as a parameter and uploaded the corresponding file to a Yammer network.

I’ve been using SOAP UI for testing web services, but most of the time the requests take simple types, such as integers and strings. I learned that the way to pass a byte[] to an ASP.NET web service method is to use its Base64String representation, so my next task was to get the Base64String representation of a given file so I could grab it as plain text and paste it into the XML test request. Rather than creating a console application, I thought a PowerShell script would get it done faster.

Here is the script

#Get the bytes of a given file (replace <fileNameAndPath> with the actual file path)
$fileBytes = Get-Content <fileNameAndPath> -Encoding Byte

#Convert the byte array to a Base64 string
$fileBytesBase64 = [Convert]::ToBase64String($fileBytes)

#Save the Base64 string to a text file
Add-Content outputFileBytesBase64.txt $fileBytesBase64
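
As an optional sanity check (the file names below are just examples), the Base64 text can be decoded back into bytes and written to a new file, which should match the original:

# Read the Base64 text back in and decode it to a byte array
$base64 = Get-Content outputFileBytesBase64.txt -Raw

# Write the decoded bytes to a new file so it can be compared with the original
[IO.File]::WriteAllBytes("$PWD\roundtrip.bin", [Convert]::FromBase64String($base64))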

Reference

I was able to gather useful tips from these posts:

PowerShell Encoding and Decoding (Base64) – by Sean Metcalf

Efficient Base64 conversion in PowerShell – by mnaoumov

ASPX file opens as empty NotePad from Visual Studio

Visual Studio opens an empty Notepad window when accessing a file within the solution. Hopefully you have a backup at hand.

Issue

I recently came across an issue in Visual Studio that had my brain spinning for a couple of days:

  • I was accessing a custom page through the browser and the result was a blank page.
  • I imagined it was some kind of authentication issue or missing resources, and I used Fiddler to check the traffic, but nothing was found there that could give me a clue.
  • Things got a little weird when I tried to open the custom page in Visual Studio, and an empty Notepad window popped up. I went outside of Visual Studio and tried to open the ASPX file directly in Notepad, but I got an empty file.

Cause

So after doing some online research, I found out that this issue was related to a missing file pointer at the OS level. It could have been caused by a sudden shutdown, which makes sense because I usually close the VM immediately rather than properly shutting down the system (lesson learned).

Resolution

It seems there is no way to recover the file itself, and the only alternative is to recreate it.

Luckily I was using source control, so I just deleted the file locally and got the latest version.

Reference

http://stackoverflow.com/questions/12968836/visual-studio-is-opening-web-page-in-notepad

Cross-Site Publishing vs Traditional Publishing in SharePoint 2013

Introduction

Cross-Site Publishing (also known as Product Catalog) is a SharePoint Server 2013 feature that allows you to use one or more site collections to author content, and one or more site collections to control the design of the site and the display of the content. See: Plan for cross-site publishing in SharePoint Server 2013.

[Figure: cross-site publishing]

This is a great feature that addresses some of the limitations of traditional publishing, where content is only available in one site collection. One of the great benefits is the ability to author content in one place and have multiple site collections act as presentation layers, even in a different web application. However, in a real-world scenario there are usually many requirements involved beyond simply showing read-only data to end users. In most cases there is a need for a certain level of interaction, allowing users to rate content, submit feedback, and so on.
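
For reference, cross-site publishing is activated per site collection. The snippet below is only a sketch: it assumes the SharePoint 2013 Management Shell, and both the feature name and the authoring site URL are assumptions to verify on your own farm:

# Load the SharePoint snap-in when not running in the Management Shell
Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

# Confirm the feature name on this farm first (the name used below is an assumption)
Get-SPFeature | Where-Object { $_.DisplayName -like "*CrossSite*" }

# Activate cross-site publishing on the authoring site collection (URL is hypothetical)
Enable-SPFeature -Identity "CrossSitePublishing" -Url "http://authoring.contoso.com"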

The purpose of the table below is to compare the traditional publishing and cross-site publishing methods, and to illustrate some of the out-of-the-box features that are available for each one. This can be useful during the analysis phase to determine what can be accomplished purely through configuration and which areas will require custom coding.

Traditional Publishing vs Cross-Site Publishing (Product Catalog)

Out-of-the-Box features | Traditional Publishing | Cross-Site Publishing
Rich Edit capability for authoring content | Yes | Yes
Content Approval Workflow | Yes | Yes
Content is available to other site collections and farms | No | Yes
Managed Navigation | Yes | Yes
Article feedback capability | Yes | Yes
Rating capability | Yes | No
Related content with Summary Links | Yes | No
Related content with Lookup fields (multiple) | Yes | No
Related content with third-party controls | Yes | Yes
Reusable Content with Automatic Update | Yes | No
Related content with Item-Catalog URL fields | No | Yes
User-friendly URL for category pages | Yes | Yes
User-friendly URL for article pages | No | Yes
Search-Driven Content | Yes | Yes
Custom Search Refiners | Yes | Yes
Custom Search Results | Yes | Yes
Custom Search Previews | Yes | Yes
Search Analytics Reports | Yes | Yes
Custom message for No Search Results | Yes | Yes
Display Selected Search Refiners on Top | No | No

Note: This list was created based on a particular project scope, and it doesn’t include all available features.

Summary

Cross-Site Publishing was a great addition to the SharePoint 2013 feature set, and it extends the range of possible architectures for content publishing scenarios. The purpose of this post is to provide a brief analysis of which built-in capabilities are available and which customizations will be required, so the solution can be planned accordingly.

Reference

Overview of Cross-Site Publishing in SharePoint Server 2013

Plan for cross-site publishing in SharePoint Server 2013