Wednesday 7 September 2016

Setup of SharePoint 2013 High-Trust On-premise Add-In Developer / Production environment


Set-up the remote IIS site

- This is the site the add-in will connect to for data and/or interactivity
- No SharePoint components are necessary; the web project is deployed as a plain ASP.NET site
- Authentication is managed via certificates (.cer/.pfx)

Step 1 – enable IIS services including https, certificates and management services




Step 2 – Install the certificate for the site

- This can be a self-signed domain certificate (issued from your development farm's Certificate Authority) or one purchased from a certificate vendor. The certificate should include all certificates in the chain and, if issued from the local CA, the CA certificate needs to be in the Local Computer -> Trusted Root Certification Authorities store.

Import the certificate into IIS on the remote web server with these steps:
  1. In IIS Manager, select the ServerName node in the tree view on the left.
  2. Double-click the Server Certificates icon.
  3. Select Import in the Actions pane on the right.
  4. On the Import Certificate dialog, use the browse button to browse to the .pfx file, and then enter the password of the certificate.
  5.  If you are using IIS Manager 8, there is a Select Certificate Store drop down. Choose Personal. (This refers to the "personal" certificate storage of the computer, not the user.)
  6. If you don't already have a cer version, or you do but it includes the private key, enable Allow this certificate to be exported.
  7. Click OK

Next, record the certificate's serial number and Authority Key Identifier:
  1. Open MMC (Start -> mmc) and add the local computer Certificates snap-in
  2. Navigate to Certificates (Local Computer) -> Personal -> Certificates
  3. Double-click the certificate added above and then open the Details tab
  4. Select the Serial Number field so that the entire serial number is visible in the box.
  5. Copy the serial number (Ctrl+C) to a text file – remove all spaces (including leading and trailing spaces)
  6. Copy the Authority Key Identifier value to the text file, remove the spaces and convert it to GUID format (xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx)
  7. Save the text file to a location accessible from the SharePoint Server (i.e. a network share)
  8. Copy the pfx certificate to the same location
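As an alternative to clicking through MMC, the same values can be pulled with PowerShell from the machine's Personal store. This is only a sketch: the subject filter (mysub.mysite.com) and the output share path are placeholders for your own certificate and location.

```powershell
# Find the imported certificate in the Local Computer Personal store
$cert = Get-ChildItem Cert:\LocalMachine\My |
        Where-Object { $_.Subject -like "*mysub.mysite.com*" }

# Serial number with no spaces (the format needed in web.config later)
$serial = ($cert.SerialNumber -replace '\s', '')

# Authority Key Identifier extension (OID 2.5.29.35) rendered as text
$aki = ($cert.Extensions | Where-Object { $_.Oid.Value -eq '2.5.29.35' }).Format($false)

# Save both to a file on a share the SharePoint server can reach (placeholder path)
"$serial`r`n$aki" | Out-File \\share\certs\certinfo.txt
```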

Step 3 - create a cer version of the certificate

- This contains the public key of the remote web server and is used by SharePoint to encrypt requests from the remote web application and validate the access tokens in those requests. It is created on the remote web server and then moved to the SharePoint farm.
1. In IIS manager, select the ServerName node in the tree view on the left.
2. Double-click Server Certificates.
3. In Server Certificates view, double-click the certificate to display the certificate details.
4. On the Details tab, choose Copy to File to launch the Certificate Export Wizard, and then choose Next.
5. Use the default value No, do not export the private key, and then choose Next.
6. Use the default values on the next page. Choose Next.
7. Choose Browse and browse to the folder the serial text file was saved to above.
8. Choose Next.
9. Choose Finish.
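If you prefer scripting the export, a hedged equivalent uses the Export-Certificate cmdlet (available on Windows 8.1 / Server 2012 R2 and later); the thumbprint and output path are placeholders:

```powershell
# Export the public-key (.cer) portion of the certificate; no private key is included
$cert = Get-Item Cert:\LocalMachine\My\<thumbprint>
Export-Certificate -Cert $cert -FilePath C:\certs\hightrust.cer
```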

Step 4 – Create an IIS site to use 443 / SSL and the certificate created

  1. In IIS Manager, right-click the Sites folder and select Add Website
  2. Give the site a meaningful name (no spaces or special characters)
  3. Select a location accessible to IIS processes and the app pool user (I use a new subdirectory of inetpub as it inherits the necessary permissions)
  4. Under Bindings, select HTTPS in the Type drop down list.
  5. Select All Unassigned in the IP address drop down list or specify the IP address if desired.
  6. Enter the port in the Port text box. If you specify a port other than 443, you must use the same port number when you register the SharePoint Add-in on appregnew.aspx later.
  7. In the Host Name, enter the URL name used (e.g. mysub.mysite.com) and check Require SNI
  8. In the SSL certificate drop down list, select the certificate that you used above.
  9. Click OK.
  10. Click Close.
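The steps above can also be scripted with the WebAdministration module. This is a sketch under assumed names: the site name, physical path, host header, and certificate thumbprint are all placeholders for your own values.

```powershell
Import-Module WebAdministration

# Create the site with an https binding on 443; SslFlags 1 turns on SNI
New-Website -Name "MyAddinSite" -PhysicalPath "C:\inetpub\MyAddinSite" `
            -Port 443 -Ssl -HostHeader "mysub.mysite.com" -SslFlags 1

# Bind the certificate from the machine store to the new SNI binding
$cert = Get-Item Cert:\LocalMachine\My\<thumbprint>
New-Item -Path "IIS:\SslBindings\!443!mysub.mysite.com" -Value $cert -SSLFlags 1
```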

You may get a warning, if so select the Default Web Site and click on bindings in the right menu. Make sure the https bindings for IP address ‘All Unassigned’ are set and bound to a star certificate (default for the server). Also make sure this binding does not require SNI.

Step 5 - configure authentication for the web application

When a new web application is installed in IIS, it is initially configured for anonymous access, but almost all high-trust SharePoint Add-ins are designed to require authentication of users, so you need to change it. In IIS Manager, highlight the web application in the Connections pane. It will be either a peer website of the Default Web Site or a child of the Default Web Site.
  1. Double-click the Authentication icon in the center pane to open the Authentication pane.
  2. Highlight Anonymous Authentication and then click Disable in the Actions pane.
  3. Highlight the authentication system that the web application is designed to use and click Enable in the Actions pane. If the web application's code uses the generated code in the TokenHelper and SharePointContext files without modifications to the user authentication parts, the web application is using Windows Authentication, so that is the option you should enable.
  4. If you are using the generated code files without modifications to the user authentication parts, you also need to configure the authentication provider with the following steps:
  5. Highlight Windows Authentication in the Authentication pane.
  6. Click Providers.
  7. In the Providers dialog, ensure that NTLM is listed above Negotiate.
  8. Click OK.
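The same authentication switch can be made from PowerShell. A sketch, assuming the site from step 4 is named MyAddinSite (a placeholder):

```powershell
Import-Module WebAdministration

# Disable anonymous authentication for the site
Set-WebConfigurationProperty -PSPath "IIS:\" -Location "MyAddinSite" `
    -Filter "system.webServer/security/authentication/anonymousAuthentication" `
    -Name enabled -Value $false

# Enable Windows authentication for the site
Set-WebConfigurationProperty -PSPath "IIS:\" -Location "MyAddinSite" `
    -Filter "system.webServer/security/authentication/windowsAuthentication" `
    -Name enabled -Value $true
```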

Step 6 – Enable App Pool profile loading

Not entirely necessary in all situations, this does head off an issue I’ve encountered repeatedly.
  1. Select Application Pools in the left menu, highlight the app pool used by the web application created in step 4, and select Advanced Settings in the right-hand menu.
  2. Scroll down to Load User Profile and set it to True (also verify that the app pool account is the desired account for the application)
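A one-liner equivalent, assuming the app pool is named MyAddinSite (a placeholder):

```powershell
Import-Module WebAdministration

# Equivalent of setting Load User Profile = True in Advanced Settings
Set-ItemProperty "IIS:\AppPools\MyAddinSite" -Name processModel.loadUserProfile -Value $true
```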

NOTE

If the SharePoint site does not use a certificate purchased from a vendor in the Trusted Certificate Store, the add-in will authenticate up to a point and then return 'The remote certificate is invalid according to the validation procedure'. Adding the certificate and root chain to the Trusted Root Certification Authorities store has mixed results, so it is recommended that the SharePoint site use a purchased certificate from a trusted vendor.

Restart the IIS site.

Setup the SharePoint Server to use the Add-in

Configure SharePoint to use the certificate

The procedures in this section can be performed on any SharePoint server on which the SharePoint Management Shell is installed.

  1. Create a folder and be sure that the add-in pool identities for the following IIS add-in pools have Read rights to it:
    - SecurityTokenServiceApplicationPool
    - The add-in pool that serves the IIS web site that hosts the parent SharePoint web application for your test SharePoint website.

    I simply add 'Everyone' to the folder and grant it full access. The folder is deleted after this process is completed anyway, so this is not a big deal in a dev environment.
  2. Move (cut -> paste) the .cer file and the cert serial text file from the remote web server to the folder you just created on the SharePoint server.
  3. The following procedure configures the certificate as a trusted token issuer in SharePoint. It is performed just once (for each high-trust SharePoint Add-in).
    - Open the SharePoint Management Shell as an administrator and run the following script:
Add-PSSnapin Microsoft.SharePoint.PowerShell
cls
$rootAuthName = "rootauthname"
# a unique and meaningful name
$tokenIssuerName = "tokenissuername"
# a unique and meaningful name
$publicCertPath = "C:\path\to\certs\cert.cer"
# the path to the certificate created above
$certAuthorityKeyID = "GUID"
# obtained from the certificate properties Authority Key Identifier (Step 3)
$specificIssuerId = $certAuthorityKeyID.ToLower()

$certificate = New-Object System.Security.Cryptography.X509Certificates.X509Certificate2($publicCertPath)
New-SPTrustedRootAuthority -Name $rootAuthName -Certificate $certificate
$realm = Get-SPAuthenticationRealm
$fullIssuerIdentifier = $specificIssuerId + "@" + $realm
New-SPTrustedSecurityTokenIssuer -Name $tokenIssuerName -Certificate $certificate -RegisteredIssuerName $fullIssuerIdentifier -IsTrustBroker

iisreset

In the event something goes awry, the following will remove the two objects created by this script:
Get-SPTrustedRootAuthority $rootAuthName
Remove-SPTrustedRootAuthority $rootAuthName
Get-SPTrustedSecurityTokenIssuer $tokenIssuerName

Remove-SPTrustedSecurityTokenIssuer $tokenIssuerName 

Delete the cer file from the file system of the SharePoint server.
Restart IIS (the server can take up to 24 hours to recognize these objects otherwise)

Register a new Add-In in SharePoint

Open your web application in a browser.
Navigate to <site root>/_layouts/15/appregnew.aspx
Generate an App ID, generate an App Secret, give it a display name (Title), and enter the app domain (the IIS address used above). Leave the redirect URI blank.

Copy the App Id to a text file and save. This is used wherever Client ID is referenced later!

Create the Add-In Project in Visual Studio

Fire up Visual Studio. (I’m using 2015 community edition in this example)
Create a new project using the SharePoint Add-in template


  • Select the local SharePoint site to use for debugging, select ‘provider-hosted’, click next.
  • Select SharePoint 2013, click next.
  • Pick a web forms or MVC application type (I take the Web forms version)
  • Select ‘Use a certificate …’, browse to the location of the certificate pfx file used in step 2, enter the password for the pfx.
  • In the Issuer ID enter the value generated in the register the add-in step above (the App ID)
  • Click Finish.
  • View the App.manifest in code
  • Paste in the Issuer ID (App ID) in the ClientId value



  • Save and close. Open in design view.
  • Change the Start Page to reflect the location of the remote web application:



  • On the permissions tab, grant the add-in permissions needed (for this example I’m giving the add-in full control to the web)



  • Open the Web.config in the Web Project
  • In the System.web section, add the following key to enable meaningful error messages:



  • In the appSettings section, fill in ClientId and IssuerId with the App Id generated on AppRegNew above. Add a new ClientSigningCertificateSerialNumber key: this is the serial number of the certificate from the text file created in step 3. There should be no spaces or hyphens in the value.

<appSettings>
  <add key="ClientId" value="guid" />
  <add key="ClientSigningCertificateSerialNumber" value="serial" />
  <add key="IssuerId" value="guid" />
</appSettings>

The Office Developer Tools for Visual Studio may have added add-in setting keys for ClientSigningCertificatePath and ClientSigningCertificatePassword. These are not used in a production add-in and should be deleted. HOWEVER, the add-in project will not publish properly without them, so the trick is to publish and deploy the web project and add-in, then remove them from the published web site web.config file.
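One way to automate that post-publish cleanup is a short script run against the deployed web.config. This is a sketch; the config path is a placeholder for wherever the web project was deployed.

```powershell
# Remove the certificate path/password keys from the *published* web.config
$configPath = "C:\inetpub\MyAddinSite\web.config"
$xml = [xml](Get-Content $configPath)

foreach ($key in "ClientSigningCertificatePath", "ClientSigningCertificatePassword") {
    $node = $xml.SelectSingleNode("//appSettings/add[@key='$key']")
    if ($node) { $node.ParentNode.RemoveChild($node) | Out-Null }
}

$xml.Save($configPath)
```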

Modify the TokenHelper file

The TokenHelper.cs (or .vb) file generated by Office Developer Tools for Visual Studio needs to be modified to work with the certificate stored in the Windows Certificate Store and to retrieve it by its serial number.

  • Near the bottom of the #region private fields part of the file are declarations for ClientSigningCertificatePath, ClientSigningCertificatePassword, and ClientCertificate. Remove all three.
  • In their place, add the following line:

     private static readonly string ClientSigningCertificateSerialNumber = WebConfigurationManager.AppSettings.Get("ClientSigningCertificateSerialNumber");

  • Find the line that declares the SigningCredentials field. Replace it with the following line:

     private static readonly X509SigningCredentials SigningCredentials = GetSigningCredentials(GetCertificateFromStore());

  • Go to the #region private methods part of the file and add the following two methods:

private static X509SigningCredentials GetSigningCredentials(X509Certificate2 cert)
{
    return (cert == null) ? null
                          : new X509SigningCredentials(cert,
                                                       SecurityAlgorithms.RsaSha256Signature,
                                                       SecurityAlgorithms.Sha256Digest);
}

private static X509Certificate2 GetCertificateFromStore()
{
    if (string.IsNullOrEmpty(ClientSigningCertificateSerialNumber))
    {
        return null;
    }

    // Get the machine's personal store
    X509Certificate2 storedCert;
    X509Store store = new X509Store(StoreName.My, StoreLocation.LocalMachine);

    try
    {
        // Open for read-only access              
        store.Open(OpenFlags.ReadOnly);

        // Find the cert
        storedCert = store.Certificates.Find(X509FindType.FindBySerialNumber,
                                             ClientSigningCertificateSerialNumber,
                                             true)
                       .OfType<X509Certificate2>().SingleOrDefault();
    }
    finally
    {
        store.Close();
    }

    return storedCert;
}

Package the remote web application


  • In Solution Explorer, right-click the web application project (not the SharePoint Add-in project), and select Publish.
  • On the Profile tab, select New Profile on the drop-down list.
  • When prompted, give the profile an appropriate name.
  • On the Connection tab, select Web Deploy Package in the Publish method drop-down list.
  • For Package location, use any folder. To simplify later procedures, this should be an empty folder. The subfolder of the bin folder of the project is typically used.
  • For the site name, enter the name of the IIS website that will host the web application. Do not include protocol or port or slashes in the name; for example, "PayrollSite." If you want the web application to be a child of the Default Web Site, use Default Web Site/<website name>; for example, "Default Web Site/PayrollSite." (If the IIS website does not already exist, it is created when you execute the Web Deploy package in a later procedure.)
  • Click Next.
  • On the Settings tab select either Release or Debug on the Configuration drop down.
  • Click Next and then Publish. A zip file and various other files that will be used to install the web application in a later procedure are created in the package location

To create a SharePoint Add-in package


  • Right-click the SharePoint Add-in project in your solution, and then choose Publish.
  • In the Current profile drop-down, select the profile that you created in the last procedure.
  • If a small yellow warning symbol appears next to the Edit button, click the Edit button. A form opens asking for the same information that you included in the web.config file. This information is not required since you are using the Web Deploy Package publishing method, but you cannot leave the form blank. Enter any characters in the four text boxes and click Finish.
  • Click the Package the add-in button. (Do not click Deploy your web project. This button simply repeats what you did in the final step of the last procedure.) A Package the add-in form opens.
  • In the Where is your website hosted? text box, enter the URL of the domain of the remote web application. You must include the protocol, HTTPS, and if the port that the web application will listen for HTTPS requests is not 443, then you must include the port as well; for example, https://MyServer:4444. (This is the value that Office Developer Tools for Visual Studio uses to replace the ~remoteAppUrl token in the add-in manifest for the SharePoint Add-in.)
  • In the What is the add-in's Client ID? text box, enter the client ID that was generated on the appregnew.aspx page, and which you also entered in the web.config file.
  • Click Finish. Your add-in package is created.


Wednesday 11 May 2016

Set Up Domain Name which uses a Dynamic IP


The Problem

I have finally broken down and purchased a couple domain names (to cover <mydomain>.com and <mydomain>.ca) - they were quite simply too cheap not to buy. The problem is I have residential high-speed service which uses dynamic IP assignment to my modem and even though I am only using the dns/ip for development and testing of work at this point, I need the name to resolve to a web server on a Hyper-V virtual machine and to update the ip in my nameserver records whenever it changes. Static record pointing to a dynamic IP which resolves internally to a dynamic IIS site on a dynamic Hyper-v server. Wee!

The Approach

The first step is to find a free or cheap way to automate my A records. A bit of Googling turned up a couple of options. I found DynDNS to be pretty expensive for my needs (it cost more than the price of the domains at the time of writing ...) so opted for EntryDNS. The one time donation / fee is very appealing and the few reviews I checked out gave this site high praise. The whole point of this service is to manage A records and to allow dynamic update of IPs using basic url requests. So I signed up.

Next I went to the registrar of my domains and dug through the documentation until I found how to assign third-party name servers to my record and changed them to ns1.entrydns.net and ns2.entrydns.net ... then waited a full day for this change to propagate (I guess there was a reason for the sale ...)

In the mean time I set up my router, modem and server (hyper-v host running Win 10 pro) to port forward requests on port 80 through 443:
Modem - dynamic external IP, internal dynamic (192.168.x.x) IP assignment. Set up port forwarding of all incoming port 80 to 443 requests to the IP of my internal sub-net Router's IP (192.168.0.1)
Router, dynamic external IP (192.168.0.1 preferred), internal static IP ranges 10.0.0.X/24  I set up port forwarding (192.168.0.1/24:80) to my Hyper-v server (10.0.0.4/24:80).

As the server runs Hyper-V on Windows 10 Pro, I decided to use a NAT on that server to manage Hyper-V development environments. A simple solution found by reading https://4sysops.com/archives/native-nat-in-windows-10-hyper-v-using-a-nat-virtual-switch/ and then running the following PowerShell on the Windows 10 server (as Admin):

$serverExternalIP = "10.0.0.4"
#Internal IP of the host Windows 10 server 

$serverInternalIP = "10.0.99.1"
# the 10.0.99.x part must be the same as the $IPRange below and the .x part must be 1
$IPRange = "10.0.99.0/24"
# NAT sub-domain to be used with this connection

$DestinationHyperVServer = "10.0.99.225"
# this is the IP of the Hyper-v Server with the IIS service running (and SharePoint in my case)

# Run Get-NetNat first to see if you already have a defined net service running
New-VMSwitch -Name "NAT" -SwitchType NAT -NATSubnetAddress $IPRange
New-NetIPAddress -IPAddress $serverInternalIP -PrefixLength 24 -InterfaceAlias "vEthernet (NAT)"
# change the IP address of the host Windows 10 server to match the range with the first IP
New-NetNat -Name "NAT" -InternalIPInterfaceAddressPrefix $IPRange

Add-NetNatStaticMapping -NatName "NAT" -Protocol TCP -ExternalIPAddress 0.0.0.0 -InternalIPAddress $DestinationHyperVServer -InternalPort 80 -ExternalPort 80
Add-NetNatExternalAddress -NatName "NAT" -IPAddress $serverExternalIP -PortStart 80 -PortEnd 443
Get-NetNatExternalAddress
Get-NetNatStaticMapping 
Get-NetNat

Next I set up a basic Hyper-V farm consisting of 3 servers (using the 2012 server r2 developer trial) using the Virtual Switch NAT created by the PowerShell as follows:

AD (dns and Active Directory Services for my two purchased domains - all settings in PowerShell to enable quick building) IP set internally to 10.0.99.2, Subnet 255.255.255.0 and Gateway set to the Host system internal NAT IP (10.0.99.1 in this case)
CLS set to use 10.0.99.3 in DNS. 

IP4 settings of DNS servers set to Google (8.8.8.8 and 8.8.4.4) with the DNS suffix of my domain appended in advanced dns settings (not sure if this is necessary but it doesn't hurt so ...)

Then I joined the server to the newly created Domain.

Left the window open to allow record adding when other servers join the domain. 

IIS (IIS/SharePoint development server)
Pretty much the same IP settings except the IP address was set to 10.0.99.225 and the Alternate DNS server ip set to 10.0.99.2 (so the internal DNS service resolves internal names). IIS initially set up to just HTTP and no bindings.

SQL (Sql Server 2012) IP in the unused 10.0.99.x range, same settings as the IIS server other than that.

Testing is pretty simple - browse to localhost and 10.0.99.225 from the IIS machine - both should produce a site. Browse to 10.0.99.225 on SQL and AD - should see the same site. From the hyper-v host, browse to 10.0.99.225, 10.0.0.4 (whatever it's internal ip address is) and whatever your external IP is (from the modem) - all three should open the same site. 

Once the nameservers have refreshed and your whois record shows them as being used, test the domain name on both the server and host level.

For SharePoint I set up AAM for each of the subsites to go to different applications at the same IP and this worked well but will not go into detail here.

Now came the fun part. I created the following PowerShell script to handle updating the dynamic IP recorded on the EntryDNS service. This uses the Doman record Tokens in a string array - I'll likely change this to read an input file of either CSV or XML delimited values.

The global logic is as follows:
Get the current public internet IP (from checkip.dyndns.com), compare to the last recorded IP address registered and if different update the EntryDNS entry and the IP record. Then set up a timed job to run the script on a regular interval.

The code:
#DNS Codes for each domain entered in EntryDNS
    $Codes = ("put the codes here")
#this is a simple text file with a single IP address inside it Change to any location you like for a temp file (C:\temp or something)
    $IpFile = "C:\IP.txt"
    
    # set to true to debug the settings. Logging is very verbose so not a good idea to leave on.
    $logging = $false
    $logpath = "C:\testlogs\"

function UpdateEntryDNS ($Code, $currentIP) {
    try {
        $req = "https://entrydns.net/records/modify/$Code"+"?ip="+"$currentIP"
        # Builds the url to update the IP address
        $r = Invoke-WebRequest $req
        if ($logging -eq $true) {
            $dt = Get-Date -Format g | foreach {$_ -replace ":", "."}
            $fn = "$logpath"+$dt+$Code+".txt"

            # if the path doesn't exist, create the directory
            if ((Test-Path($logpath)) -eq $false) {mkdir $logpath}

            # output file and populate with details
            # interval of checking should be at least 1 min apart or files get overwritten
            $Logoutput =  "$dt "+ $r.StatusCode +" " +$r.Content+" "+$req  | Out-File $fn -Force
        }
        Write-Host "Updated"
        }
        catch {
            # swallow failures silently; enable $logging above to diagnose update errors
        }
}    
function Get-ExternalIP {
# the following parses the returned page
# changing the source will mean parsing the returned content appropriately.
    $Ip = (Invoke-WebRequest "http://checkip.dyndns.com").Content
    $Ip2 = $Ip.ToString() 
    $ip3 = $Ip2.Split(" ") 
    $ip4 = $ip3[5] 
    $ip5 = $ip4.replace("</body>","") 
    $curIP1 = $ip5.replace("</html>","")
    $curIP = ($curIP1.replace("`n","")).Trim()
    return $curIP
}


$currentIP = Get-ExternalIP

# if the file exists, read it - otherwise run once with null IP and create file with current IP
if (Test-Path($IpFile)) {
    $content = ([IO.File]::ReadAllText($IpFile)).Trim()
}
else {
    $content = "0.0.0.0"
}
# New IP
if ([IPAddress]$content -ne [IPAddress]$currentIP) {
    Write-Host "Updating ... "
    Write-Host "Old IP: " $content
    Write-Host "Current IP: " $currentIP
    # (Over)Write new Ip Addres to file 
    $currentIP | Out-File $IpFile -Force

    # update dns for all in list
    ForEach ($Code in $Codes) {UpdateEntryDNS $Code $currentIP}
}
# No New IP
else {
    Write-Host "Equal"
}

Running the above creates the file C:\IP.txt containing the most current IP address. If logging is changed to $true, it also creates a folder at C:\testlogs\ with a timestamp+code named file for each update attempted; inside each is the return code and content message (200 and OK are the desired results here) as well as the combined URL, for debugging purposes.
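If parsing the dyndns page ever breaks, a plain-text lookup service is simpler. A hedged alternative using api.ipify.org, which returns just the bare address with no HTML to parse:

```powershell
function Get-ExternalIP {
    # ipify returns the external IP as plain text, so no string splitting is needed
    (Invoke-RestMethod "https://api.ipify.org").Trim()
}
```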

Now that this has been created and tested, save it to a good location for custom scripts on the Windows 10 Host system (C:\powershellscripts for instance) and set up a timed event to run the script.

Open 'Administrative Tools' and 'Task Scheduler' and click 'Create Task'


Give it a meaningful name

Create a trigger and set up the timing as suits your environment (no more than once every 5 minutes is recommended, and remember to enable 'Stop Task if it runs for ...' to kill any hanging processes eventually). Click OK.


Next Add the Action. Action Tab -> New -> select Start a program -> Find the script above, click OK.

Click ok and ok and close all those nasty windows!
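The same task can be registered from PowerShell with the ScheduledTasks module. A sketch, assuming the script was saved as C:\powershellscripts\Update-EntryDns.ps1 (the name is a placeholder); omitting a repetition duration leaves the repetition open-ended on Windows 10:

```powershell
# Run the updater every 5 minutes, starting now
$action  = New-ScheduledTaskAction -Execute "powershell.exe" `
           -Argument "-NoProfile -ExecutionPolicy Bypass -File C:\powershellscripts\Update-EntryDns.ps1"
$trigger = New-ScheduledTaskTrigger -Once -At (Get-Date) `
           -RepetitionInterval (New-TimeSpan -Minutes 5)
Register-ScheduledTask -TaskName "Update EntryDNS IP" -Action $action -Trigger $trigger
```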

Voila! Minimal downtime when the IP changes.

Note that one still needs to add DNS entries for subsites in SharePoint (in AAM) and in DNS (EntryDNS which adds more update codes to the list) and in IIS (to some degree). Ideally a Site creation script used in this environment will do all of that in one step.

The Result

A relatively simple PowerShell script which can be reused on a timed basis to keep my IP up to date with my server.

Going Forward

Potentially use a csv / text file to manage multiple domains, sub-sites and keys to update the A record of each (in the absence of figuring a way to use a wildcard).
If other free-ish services are available, add routines for each so a simple toggle can be selected in the script for parsing of both the IP resolution and the entryDNS site.

If this helped, let me know! If it is broken, Let me know!

Tuesday 5 April 2016

PowerShell SharePoint Backup and Restore made automatic

PowerShell SharePoint Backup and Restore

The Problem

I need an automatic method of backing up and restoring my SharePoint 2013 farm (on prem) to the same or a different farm (for creating a UAT/DEV environment refresh for instance ...)

The Approach

Use PowerShell to create a fast backup and create a fast restore.

The Result

First create a folder on one of the SharePoint servers (C:\SP_Backups in this example).

Next, the current logged-in user needs the securityadmin fixed server role on the SQL Server instance, the db_owner fixed database role on all databases that are to be updated or accessed, and membership in the Administrators group on the server on which you are running the Windows PowerShell cmdlets ... in other words, a member of SPShellAdministrators. So if this fails with some kind of access or privilege error, have the farm admin add the user running the script using:
Add-SPShellAdmin -UserName $domain\$user
When done the same Admin can remove the user using:
Remove-SPShellAdmin -UserName $domain\$user

Save the following to a ps1 file in the newly created folder above. Edit the path variables (highlighted below) and run it from an admin PS window (using .\<filename>.ps1)

# REQUIRES that the account running this has:
# - securityadmin fixed server role on the SQL Server instance
# - db_owner fixed database role on all databases that are to be updated.
# - Administrators group on the server on which you are running the Windows PowerShell cmdlets.
# can add by An administrator using the Add-SPShellAdmin cmdlet to grant permissions to use SharePoint 2013 cmdlets

# Add-SPShellAdmin -UserName $domain\$user
#
# load up the SharePoint snap-in if not loaded properly
if ((Get-PSSnapin "Microsoft.SharePoint.PowerShell" -ErrorAction SilentlyContinue) -eq $null) {
    Add-PSSnapin "Microsoft.SharePoint.PowerShell"
}
# get some variables
$stamp = Get-Date -Format yyyy-MM-dd
# network share to copy completed backup to
$serverdir = "\\SERVER\Shared\baersland-farm-backup"
# directory on the SharePoint server (SP13 here) to create the backup in
$dir = "\\SP13\SP_Backups\$stamp"
# create a new subfolder based on year-month-day
New-Item $dir -ItemType directory -Force

$ctdb = Get-SPContentDatabase
# create the restore powershell script in the newly created folder
Out-File -FilePath $dir\1-run_me_to_restore.ps1 -Force -InputObject "# Restore Script created:  $stamp"
Out-File -FilePath $dir\1-run_me_to_restore.ps1 -Append -InputObject "# Optional for moving to another farm -FarmCredentials domain\user -NewDatabaseServer newdbserver"
foreach ($db in $ctdb) {
    $name = $db.Name
    $guid = $db.Id
    Write-Host "processing ... $name"
    Backup-SPFarm -Directory $dir -BackupMethod Full -Item $name -Verbose
    Write-Host "backup of $name success!!"
    Out-File -FilePath $dir\1-run_me_to_restore.ps1 -Append -InputObject "Restore-SPFarm -Directory $dir -RestoreMethod Overwrite -Item $name -Verbose -Confirm:`$false"
}
# move from local to network share
Copy-Item -Path $dir -Destination $serverdir -Recurse -Force
Clear-Host
Write-Host "░░░░░░░░░░░░░░░░░░░░░█████████
░░███████░░░░░░░░░░███▒▒▒▒▒▒▒▒███
░░█▒▒▒▒▒█░░░░░░░███▒▒▒▒▒▒▒▒▒▒▒▒▒███
░░░█▒▒▒▒▒▒█░░░░██▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒██
░░░░█▒▒▒▒▒█░░░██▒▒▒▒▒██▒▒▒▒▒▒██▒▒▒▒▒███
░░░░░█▒▒▒█░░░█▒▒▒▒▒▒████▒▒▒▒████▒▒▒▒▒▒██
░░░█████████████▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒██
░░░█▒▒▒▒▒▒▒▒▒▒▒▒█▒▒▒▒▒▒▒▒▒█▒▒▒▒▒▒▒▒▒▒▒██
░██▒▒▒▒▒▒▒▒▒▒▒▒▒█▒▒▒██▒▒▒▒▒▒▒▒▒▒██▒▒▒▒██
██▒▒▒███████████▒▒▒▒▒██▒▒▒▒▒▒▒▒██▒▒▒▒▒██
█▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒█▒▒▒▒▒▒████████▒▒▒▒▒▒▒██
██▒▒▒▒▒▒▒▒▒▒▒▒▒▒█▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒██
░█▒▒▒███████████▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒██
░██▒▒▒▒▒▒▒▒▒▒████▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒█
░░████████████░░░█████████████████" -ForegroundColor Green

The last bit is just a fun way of knowing when the script and backup have worked.

Restoring is simple. Open the date-stamped folder and run the script this script created: <path>/1-run_me_to_restore.ps1. If moving to another environment with different farm credentials and a different database server (recommended), edit the file by removing the comment and adding values for the required info:
# Optional for moving to another farm -FarmCredentials domain\user -NewDatabaseServer newdbserver
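For illustration, an edited restore line might look like the following; the backup path, content database name, domain account, and SQL server name are all placeholders:

```powershell
Restore-SPFarm -Directory \\SERVER\Shared\baersland-farm-backup\2016-04-05 `
    -RestoreMethod Overwrite -Item WSS_Content -Verbose -Confirm:$false `
    -FarmCredentials (Get-Credential CONTOSO\spfarm) -NewDatabaseServer SQLUAT
```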

Going Forward

Parameterize the script so it can be called as part of a toolset.
Set up a script to add this as a scheduled task on one of the servers.
Add error handling (try/catch with a nasty catch ascii image!)

Monday 15 February 2016

Access Site Creation


The Problem

There is a need to allow many users to use the contents of an Access database concurrently and/or maintain an Access database across many users while maintaining data integrity. 

The Approach

Create a SharePoint site / Sub-site which integrates the Access data in tables to allow some users to use the Access Database file to manipulate and view data and others to use a SharePoint interface.

The Result

Create a SharePoint Sub-Site


From the main collection site (not the admin console) click the wheel and select Site Contents
In the Site Contents Page, under Subsites heading click on “new subsite” Link. 
 
In the settings section give the site a Title (this is displayed in the subsite), a description (optional), and a url name (access, or the name of the database the site will house WITHOUT spaces). In the Template select ‘Team Site’. Leave the rest of the settings as the default.

Remove the default ‘Get Started’ bar


Edit Site settings:

Edit the Site Permissions

Click Stop Inheriting Permissions

In the Set Up Groups … page, click create new groups for the three permission levels. The administrator account will be included in the Members and Owners groups – DO NOT REMOVE THIS. Add members to these groups as appropriate (this can also be done later). Once the members are set, click OK to save.

What this step does is create groups to enable access control for the specific database(s) that will be housed in this site. You can add/remove members of these groups at any time by editing the group memberships (Site Settings -> Groups, select the group and edit its membership).
In the URL bar of the browser, select the root of the new sub-site you created (not the trailing slash or the _layouts… bits) and copy it (CTRL+C).

Open the desired Access Database
Click the Database Tools tab and click SharePoint from the menu




In the pop-up, paste the url you just copied and click Next.
Go back to the browser and refresh (you may need to log out and log back in at this point to refresh the session cookies).

Click on the Wheel and select Site Contents. You should see new tables in the list of items that correspond to the tables in the access database.

Close the Access Database and exit Access, saving when prompted.

Back to the browser window – click ‘Documents’. In the Documents List Click ‘New Document’ and browse to the database you just paired with SharePoint. Upload and add relevant information.

This allows users to download the database (from the Documents Library visible on the front page of this site) and when opened locally they can edit information in the database which is then paired with the site.

Alternatively, and more readily accessible, you can give users links to the lists in SharePoint, where they can edit information which is linked to the Access database.

Security is managed at the site level. This prevents people without membership in at least the Members group from editing table data; anyone not in any of the groups cannot access the data at all.

Logging into Access as a SharePoint user (different from the desktop account)
To use a SharePoint login, you can log in to Access as a different user.

- Open the Access Database and click the File Ribbon menu item.

- Click the Switch User link, enter your SharePoint credentials, and open the shared database.

Going Forward

Next steps would include building reports and other BI type of stuff using this data.

Notes / Bugs

In the event the upload / migration fails:

I've noticed a trend here which is conveniently not documented anywhere in Microsoft’s documentation – generally I love finding this kind of gem but it is getting old fast lol.

Access Date/Time fields which use just Time (i.e. 5:00 PM vs 1/1/2000 5:00 PM) cannot be migrated to SharePoint via the export vehicle. These fields need to be changed to type Short Text before migration (they can be changed back to Date/Time in the SharePoint UI afterwards if required – gotta love data types!)

Access Memo fields and apparently Long Text fields allow really long strings. Unfortunately these are not properly transformed to the SharePoint Memo or Blob types but are instead transformed to multi-line rich text types, which have a hard cap of ~8k characters – including the 'rich text' markup, which seems to wrap around every letter. So these fields need to have their contents trimmed (there is potential for data loss here, so I back up to Excel before migrating and compare after).
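A simple Access update query can do the trimming in bulk – the table and field names here are hypothetical, and 7500 leaves some headroom for the rich-text markup overhead:

```sql
UPDATE tblNotes
SET    NoteBody = Left(NoteBody, 7500)
WHERE  Len(NoteBody) > 7500;
```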

[EDIT Further]
After working on another recent project with a lot of user data and a database which has been around for several iterations (It started as an Access 2k database) I've discovered a slightly different method.
1 - create a copy of the database called 'data'

2 - open all tables in the source copy, delete all data, and save as 'linked'. Run the link to SharePoint command as per the above. Some fields may need to be renamed - the errors reported on an attempt are much more meaningful now (rather than the extremely vague 'read only' when the wizard runs into data type issues)
3 - Once linked test the data types by adding a sample record using direct SharePoint forms. Certain fields in certain tables will be duplicated (contacts, for instance, seems to duplicate all of the default fields). The trick is to find out which fields SharePoint requires and uses by default and which the Access database reports and forms are mapped to. Similarly, create a new record using the Access forms. Note any differences in which data is placed in which fields between the input methods.
4 - modify the forms queries to use the fields used by SharePoint.
5 - open the 'data' copy. Export each table to Access (use a new name - I simply prefix the table names with 'x' to indicate the transferred data sources)
6 - arrange the 'x' table layout and the linked table layout so they show the correct fields in the same order in both tables (ensure all fields are shown in the linked table) - replace the destination ID field (whichever is the primary key) with the '_oldID' field in the linked table such that the 'x' table ID field is being pasted to the '_oldID' field in the linked table.
7 - Based on relationships and/or queries in the database, find the table which contains the core record (the table which does not rely on a record in another table)
8 - copy the entire table (in datasheet view) from the 'x' table and paste into the linked table. Any errors will be reported in a new table 'Paste Errors' and more to the point any acceptable records will be copied over.
9 - create an update query in Access to look up the '_oldID' value from the linked table with data, return the ID value in that table, and update the 'x' tables which consume that relationship. Once an 'x' table has the new IDs mapped to the linked table properly, repeat steps 6, 7, 8 and 9 until all tables are uploaded.
10 - review the 'Paste Errors' for issues. Typically this will be a data issue where the record is an orphan (so the record should be removed or retired), a type cast issue (the record field needs to match the data format) or a lookup. I've discovered that fields with a choice or lookup that allow user-typed entries will cause errors if the value is not in the field values list, so modify the values list to include the distinct values (running a simple SELECT DISTINCT on <insert choice column name from x table> will create said value list). Add all possible values to the column definition and paste all values into the table again until there are no errors.
11 - copy the update queries and the column values to a text file as you go through this process.
12 - blow out the site*. Open a new copy of the database and apply the choice / lookup values to the source columns.
13 - repeat all of the above steps.
14 - once there are no errors in copy / paste, test the data for proper display and the forms and reports to ensure they are using the correct fields throughout. Some reports and forms may need to be tweaked to use the proper fields in their queries.
15 - remove the 'x' tables from the Access database. Save and distribute the Access front end.
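To sketch steps 9 and 10 as Access SQL – all table and column names here are hypothetical, assuming a parent 'Customers' table linked to SharePoint and a child 'xOrders' table still holding the old IDs:

```sql
-- Step 9: remap the child table's foreign key to the new SharePoint IDs
-- by joining on the '_oldID' column carried over from the source data.
UPDATE xOrders INNER JOIN Customers
       ON xOrders.CustomerID = Customers._oldID
SET    xOrders.CustomerID = Customers.ID;

-- Step 10: list the distinct values of a choice column so the column
-- definition can be extended to cover them all.
SELECT DISTINCT Category FROM xProducts;
```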

* Note that if a given table exceeds the 5,000-item list view threshold the site will not be removable via the UI. In this case use PowerShell to blow out the site.
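The PowerShell in question is a one-liner from the SharePoint Management Shell – the URL below is a placeholder for your sub-site:

```powershell
# Delete the sub-site that the UI refuses to remove (placeholder URL).
Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue
Remove-SPWeb -Identity "http://sharepoint/sites/collection/access" -Confirm:$false
```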

Given this discovery, moving forward I would start by ensuring all lookup / choice fields have all possible values in use in their definition prior to step 2. Also check that there is a default value ('none' or similar) and that all records have a value (or 'none' if they are blank in the source data). Additionally, any required fields should have a value in the source data. I used a '-' character for any required-but-blank text fields and Now() in any date-type fields.

Hope this helps!