Showing posts with label SharePoint 2013. Show all posts

Sunday, 24 September 2017

Network Share Migration to SharePoint Online

Network Share Migration to SharePoint Online using SharePoint Online Management Shell via Azure Storage

The Problem

A large number and volume of files on a local network share need to be migrated to SharePoint Online. The move should be automated from click to upload completion, and any document properties need to be carried over with the files.

The Approach

Install the SharePoint Online Management Shell (SPMS): https://www.microsoft.com/en-us/download/details.aspx?id=35588

Sign up for and create an Azure Storage account (fees apply): https://manage.windowsazure.com/site.onmicrosoft.com
Note the account name and account key for use in the script (both can be found at Office 365 Admin Center > Import > + > Upload files over the network):
# Azure Storage information
$azureStorageAccountName = ""
$azureStorageAccountKey = ""

Create a local folder (C:\temp or similar) to hold the XML files, and note the network document folder to upload in the source variables of the script.
$srcPath = "\\network share path\"
$pkgPath = "C:\temp\SourcePkg"
$targetPkg = "C:\temp\TargetPkg"

Run SPMS as Administrator and change directory to the location of the script (I use C:\temp to keep it all together).
Run the script using .\scriptname.ps1:

$cred365 = (Get-Credential "adminuser@mysite.onmicrosoft.com")
-This pops up a window to collect the password for the Office 365 admin account and stores the credential object in the variable for later use.

$credSP = (Get-Credential "adminuser@mysite.sharepoint.com")
-This pops up a window to collect the password for the SharePoint account and stores the credential object in the variable for later use. This account should have Site Administrator permissions on the destination site.

The Result

Import-Module Microsoft.PowerShell.Security
# SharePoint Site Administrator account
$credSP = (Get-Credential "admin@site.sharepoint.com")
# Office 365 Administrator account
$cred365 = (Get-Credential "admin@domain.onmicrosoft.com")
# Azure Storage information
$azureStorageAccountName = "storagename"
$azureStorageAccountKey = "accountkey"
#sources
$srcPath = "\\network share path\"
$pkgPath = "C:\temp\SourcePkg"
$targetPkg = "C:\temp\TargetPkg"
#subsite in SharePoint Online
$targetWeb = "https://site.sharepoint.com/subsiterootpath/"
#target library in the above site
$targetLibrary = "library name"
#Create a migration package from the file share
New-SPOMigrationPackage -SourceFilesPath $srcPath -OutputPackagePath $pkgPath -NoAzureAdLookup
#Convert the package to target the destination web and library (this step produces $targetPkg and is what uses $targetLibrary)
ConvertTo-SPOMigrationTargetedPackage -SourceFilesPath $srcPath -SourcePackagePath $pkgPath -OutputPackagePath $targetPkg -TargetWebUrl $targetWeb -TargetDocumentLibraryPath $targetLibrary -Credentials $credSP
#Ready the package for migration by uploading it to Azure temporary storage
$sourceinAZstore = Set-SPOMigrationPackageAzureSource -SourceFilesPath $srcPath -SourcePackagePath $targetPkg -AccountName $azureStorageAccountName -AccountKey $azureStorageAccountKey -AzureQueueName "fs2sp"
#Submit the above package for migration to the destination
Submit-SPOMigrationJob -TargetWebUrl $targetWeb -MigrationPackageAzureLocations $sourceinAZstore -Credentials $credSP
#The fully qualified URL and SAS token of the Azure Storage reporting queue where import operations log events during import.
$myQueueUri = "<uri to azure report queue>"
# In testing this doesn't appear to do much but is required to prevent errors
Get-SPOMigrationJobProgress -AzureQueueUri $myQueueUri

Going Forward

The script could be called as a function with the variables passed as parameters, allowing uploads from multiple locations to multiple site libraries as a batch.
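A minimal sketch of that idea, wrapping the cmdlets used above (the function and parameter names here are illustrative, not part of the original script):

```powershell
# Illustrative only: wraps the migration steps above so multiple shares/libraries can be batched.
function Invoke-ShareMigration {
    param (
        [string]$SrcPath,     # network share to migrate
        [string]$PkgPath,     # local folder for the source package
        [string]$TargetPkg,   # local folder for the targeted package
        [string]$TargetWeb,   # destination subsite URL
        [System.Management.Automation.PSCredential]$CredSP,
        [string]$AzAccountName,
        [string]$AzAccountKey
    )
    New-SPOMigrationPackage -SourceFilesPath $SrcPath -OutputPackagePath $PkgPath -NoAzureAdLookup
    $azSource = Set-SPOMigrationPackageAzureSource -SourceFilesPath $SrcPath -SourcePackagePath $TargetPkg -AccountName $AzAccountName -AccountKey $AzAccountKey -AzureQueueName "fs2sp"
    Submit-SPOMigrationJob -TargetWebUrl $TargetWeb -MigrationPackageAzureLocations $azSource -Credentials $CredSP
}
```

Each source/destination pair then becomes one call to the function, which could be driven from a CSV of locations.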

Error capture and logging could be expanded and implemented.

Wednesday, 13 September 2017

Multi-select Choice or Lookup fields used in Cascading Dropdowns


The Problem

Cascading multi-select fields: the input form has multiple selections for Types and Specialties for a contact. The goal is to have a form in another library look up the Types and Specialties in dropdown lists, allowing a single Type selection and a related, filtered Specialty, which then filters the list of contacts matching both the Type and Specialty selections. Add in a bit of a twist: if the Type matches specific criteria, then the Specialties are pulled from a different multi-select column on the same source list.

Lots of potential issues here!

The Approach

Create a new list with Type and Specialty columns of type ‘single line of text’
On the Contact list (with the two multiple-select columns), add a workflow triggered on change and on new.
Set a variable to the Types field values (as type Lookup, comma delimited), then:
  1. Loop through the variable looking for ','; pull each value into a substring variable and trim the value and comma out of the lookup variable.
  2. Within that loop, use the substring to determine which cascade lookup field to use and set a variable to the values of the second-tier lookup, pulling each value into substring2.
  3. In an inner loop, use substring and substring2 in a REST query against the created list, testing for a match on both fields. If a match exists, move to the next value; if it does not, add an entry to the list.
This creates one entry in the new list for each type/specialty combination. At the end of the on-change workflow, set the value of another hidden text field in the Contacts list to the display name value of the current record.
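The duplicate check in the workflow corresponds to a REST query against the new list. Sketched here in PowerShell for illustration (the site URL and sample values are placeholders; the list and field names are the ones from this example):

```powershell
# Hypothetical site URL and sample values; the workflow supplies these from its substring variables.
$site = "https://site.sharepoint.com/subsite"
$type = "TypeValue"
$spec = "SpecialtyValue"
$url  = "$site/_api/web/lists/getbytitle('cascadelookups')/items?`$filter=Type eq '$type' and Specialty eq '$spec'"
$resp = Invoke-RestMethod -Uri $url -UseDefaultCredentials -Headers @{ Accept = "application/json;odata=verbose" }
if ($resp.d.results.Count -eq 0) {
    # no match: this is where the workflow adds a new Type/Specialty entry to the list
}
```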

On the form to consume the cascading dropdowns, add three fields of type Lookup: one to the new list's Type field, one to its Specialty field (do not allow multiple selections!), and a third lookup to the Contact list's hidden text field for name.

Add a Content Editor web part to the edit form and use it to link to a JavaScript file in Site Assets. Import the SPServices and jQuery libraries in the head of the linked JavaScript file. On the first field (Types), use SPServices.SPFilterDropdown, and directly after it add the following to remove duplicates from the dropdown results:
    var usedTypes = {};
    $("[title='Type'] option").each(function () {
        if (usedTypes[this.text]) {
            $(this).remove();
        } else {
            usedTypes[this.text] = this.text;
        }
    });

Add an SPServices.SPCascadeDropdowns call for the Type -> Specialty cascade and another for the Specialty -> Specialist cascade. The parent and child columns come from the form's list; the relationship columns and field names come from the lookup list (the new list created above). In the Specialist lookup, use the Contacts list / hidden Name field as the lookup source.

The Result

List: Contact
Types (multiple select)
Specialty (multiple select)
Name - hidden text
- workflow that updates cascadelookups on new/change

List: cascadelookups
Type (single text)
Specialty (single text)

List: Forms
Type: Lookup to cascadelookups - cascades to:
Specialty:  Lookup to cascadelookups - cascades to:
Specialist: Lookup to Contact Name hidden text field
- javascript included to work the cascade magic.

Going Forward

The update of the lookup list is addition-only: if users remove a specialty or type from a contact and no remaining contacts match that pair, the Type and Specialty fields will still show those choices. I haven't found a graceful way of removing these entries, but a brute-force method is to delete everything in the lookup list, then go to the Contacts list and bulk edit the records, deleting the hidden Name field values. This triggers the workflow, which re-adds entries for each unique type/specialty pair. I'm sure there is a better way, but I've not needed it yet.

Hopefully this helps someone a bit in using multiple select fields in cascading lookups.

Wednesday, 7 September 2016

Setup of SharePoint 2013 High-Trust On-premise Add-In Developer / Production environment


Set-up the remote IIS site

-This is the site the add-in will connect with for data and / or interactivity
-No SharePoint components are necessary aside from the web project being deployed as an asp.net site.
-Authentication is managed via certificates (cer/pfx)

Step 1 – enable IIS services including https, certificates and management services




Step 2 – Install the certificate for the site

-This can be a self-signed domain certificate (issued from your development farm Certificate Authority) or one from a certificate vendor. The certificate should include all certificates in the chain; if issued from the local CA, the CA certificate needs to be in the Local Computer -> Trusted Root Certification Authorities location.

Import the certificate into IIS on the remote web server with these steps:
  1. In IIS Manager, select the ServerName node in the tree view on the left.
  2. Double-click the Server Certificates icon.
  3. Select Import in the Actions pane on the right.
  4. On the Import Certificate dialog, use the browse button to browse to the .pfx file, and then enter the password of the certificate.
  5.  If you are using IIS Manager 8, there is a Select Certificate Store drop down. Choose Personal. (This refers to the "personal" certificate storage of the computer, not the user.)
  6. If you don't already have a cer version, or you do but it includes the private key, enable Allow this certificate to be exported.
  7. Click OK

Next, collect the certificate's serial number and Authority Key Identifier:
  1. Open MMC (Start -> mmc) and add the local computer Certificates snap-in
  2. Navigate to Certificates (Local Computer) -> Personal -> Certificates
  3. Double-click the certificate added above and then open the Details tab
  4. Select the Serial Number field so the entire serial number is visible in the box.
  5. Copy the serial number (Ctrl+C) to a text file and remove all spaces (including leading and trailing spaces)
  6. Copy the Authority Key Identifier value to the text file, remove the spaces, and convert it to GUID format (xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx)
  7. Save the text file to a location accessible from the SharePoint server (i.e. a network share)
  8. Copy the pfx certificate to the same location
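These values can also be pulled with PowerShell instead of copying them out of the MMC dialog (a sketch; the pfx path and password are placeholders):

```powershell
# Load the certificate (path and password are placeholders)
$cert = New-Object System.Security.Cryptography.X509Certificates.X509Certificate2("C:\certs\mycert.pfx", "pfxPassword")
# The serial number, already without spaces
$cert.SerialNumber
# The Authority Key Identifier extension text (still needs reducing to GUID format by hand)
($cert.Extensions | Where-Object { $_.Oid.FriendlyName -eq "Authority Key Identifier" }).Format($false)
```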

Step 3 - create a cer version of the certificate

- This contains the public key of the remote web server and is used by SharePoint to encrypt requests from the remote web application and validate the access tokens in those requests. It is created on the remote web server and then moved to the SharePoint farm.
1. In IIS manager, select the ServerName node in the tree view on the left.
2. Double-click Server Certificates.
3. In Server Certificates view, double-click the certificate to display the certificate details.
4. On the Details tab, choose Copy to File to launch the Certificate Export Wizard, and then choose Next.
5. Use the default value No, do not export the private key, and then choose Next.
6. Use the default values on the next page. Choose Next.
7. Choose Browse and browse to the folder the serial text file was saved to above.
8. Choose Next.
9. Choose Finish.

Step 4 – Create an IIS site to use 443 / SSL and the certificate created

  1. In IIS Manager, right-click the Sites folder and select Add Website
  2. Give the site a meaningful name (no spaces or special characters)
  3. Select a location accessible to IIS processes and the app pool user (I use a new subdirectory of inetpub as it inherits the necessary permissions)
  4. Under Bindings, select HTTPS in the Type drop down list.
  5. Select All Unassigned in the IP address drop down list or specify the IP address if desired.
  6. Enter the port in the Port text box. If you specify a port other than 443, you must use the same number when you register the SharePoint Add-in on appregnew.aspx.
  7. In the Host Name, put in the URL name used (i.e. mysub.mysite.com) and check Require SNI
  8. In the SSL certificate drop down list, select the certificate that you used above.
  9. Click OK.
  10. Click Close.

You may get a warning; if so, select the Default Web Site and click Bindings in the right menu. Make sure the https binding for IP address 'All Unassigned' is set and bound to a wildcard certificate (the default for the server). Also make sure this binding does not require SNI.

Step 5 - configure authentication for the web application

When a new web application is installed in IIS, it is initially configured for anonymous access, but almost all high-trust SharePoint Add-ins are designed to require authentication of users, so you need to change it. In IIS Manager, highlight the web application in the Connections pane. It will be either a peer website of the Default Web Site or a child of the Default Web Site.
  1. Double-click the Authentication icon in the center pane to open the Authentication pane.
  2. Highlight Anonymous Authentication and then click Disable in the Actions pane.
  3. Highlight the authentication system that the web application is designed to use and click Enable in the Actions pane.
  4. If the web application's code uses the generated code in the TokenHelper and SharePointContext files without modifications to the user authentication parts of the files, then the web application is using Windows Authentication, so that is the option you should enable.
  5. If you are using the generated code files without modifications to the user authentication parts of the files, you also need to configure the authentication provider with the following steps:
  6. Highlight Windows Authentication in the Authentication pane.
  7. Click Providers.
  8. In the Providers dialog, ensure that NTLM is listed above Negotiate.
  9. Click OK.

Step 6 – Enable App Pool profile loading

Not strictly necessary in all situations, this heads off an issue I've encountered repeatedly.
  1. Select Application Pools in the left menu, highlight the app pool used by the web application created in step 4, and select Advanced Settings in the right-hand menu.
  2. Scroll down to Load User Profile and set it to True (also verify the app pool account is the desired account for the application)
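For reference, the same setting can be applied from PowerShell with the WebAdministration module (a sketch; the app pool name is a placeholder):

```powershell
Import-Module WebAdministration
# Set Load User Profile to true on the add-in's app pool ("MyAddInPool" is a placeholder name)
Set-ItemProperty "IIS:\AppPools\MyAddInPool" -Name processModel.loadUserProfile -Value $true
```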

NOTE

If the SharePoint site does not use a purchased certificate from a vendor in the Trusted Certificate Store, the add-in will authenticate to a point and then return the error 'The remote certificate is invalid according to the validation procedure'. Adding the certificate and root chain to the Trusted Root Certificate Store has mixed results, and for this reason it is recommended that the SharePoint site use a purchased certificate from a trusted vendor.

Restart the IIS site.

Setup the SharePoint Server to use the Add-in

Configure SharePoint to use the certificate

The procedures in this section can be performed on any SharePoint server on which the SharePoint Management Shell is installed.

  1. Create a folder and be sure that the add-in pool identities for the following IIS add-in pools have Read rights to it:
    - SecurityTokenServiceApplicationPool
    - The add-in pool that serves the IIS web site that hosts the parent SharePoint web application for your test SharePoint website.

    I simply add 'everyone' to the folder and give it full access. The folder is deleted after this process is completed anyway, so it is not a big deal in a dev environment.
  2. Move (cut -> paste) the .cer file and the cert serial text file from the remote web server to the folder you just created on the SharePoint server.
  3. The following procedure configures the certificate as a trusted token issuer in SharePoint. It is performed just once (for each high-trust SharePoint Add-in).
    - Open the SharePoint Management Shell as an administrator and run the following script:
Add-PSSnapin Microsoft.SharePoint.PowerShell
cls
$rootAuthName = "rootauthname"
# a unique and meaningful name
$tokenIssuerName = "tokenissuername"
# a unique and meaningful name
$publicCertPath = "C:\path\to\certs\cert.cer"
# the path to the certificate created above
$certAuthorityKeyID = "GUID"
# obtained from the certificate properties Authority Key Identifier (Step 3)
$specificIssuerId = $certAuthorityKeyID.ToLower()

$certificate = New-Object System.Security.Cryptography.X509Certificates.X509Certificate2($publicCertPath)
New-SPTrustedRootAuthority -Name $rootAuthName -Certificate $certificate
$realm = Get-SPAuthenticationRealm
$fullIssuerIdentifier = $specificIssuerId + "@" + $realm
New-SPTrustedSecurityTokenIssuer -Name $tokenIssuerName -Certificate $certificate -RegisteredIssuerName $fullIssuerIdentifier -IsTrustBroker

iisreset

In the event something goes awry, the following will remove the two objects created by this script:
Get-SPTrustedRootAuthority $rootAuthName
Remove-SPTrustedRootAuthority $rootAuthName
Get-SPTrustedSecurityTokenIssuer $tokenIssuerName

Remove-SPTrustedSecurityTokenIssuer $tokenIssuerName 

Delete the cer file from the file system of the SharePoint server.
Restart IIS (otherwise the server can take up to 24 hours to recognize these objects).

Register a new Add-In in SharePoint

Open your web application in a browser.
Navigate to <site root>/_layouts/15/appregnew.aspx
Generate an App ID, generate an App Secret, give it a display name (Title), and enter the app domain (the IIS address used above). Leave the redirect URI blank.

Copy the App Id to a text file and save. This is used wherever Client ID is referenced later!

Create the Add-In Project in Visual Studio

Fire up Visual Studio. (I’m using 2015 community edition in this example)
Create a new project using the SharePoint Add-in template


  • Select the local SharePoint site to use for debugging, select ‘provider-hosted’, click next.
  • Select SharePoint 2013, click next.
  • Pick a web forms or MVC application type (I take the Web forms version)
  • Select ‘Use a certificate …’, browse to the location of the certificate pfx file used in step 2, enter the password for the pfx.
  • In the Issuer ID enter the value generated in the register the add-in step above (the App ID)
  • Click Finish.
  • View the App.manifest in code
  • Paste in the Issuer ID (App ID) in the ClientId value



  • Save and close. Open in design view.
  • Change the Start Page to reflect the location of the remote web application:



  • On the permissions tab, grant the add-in permissions needed (for this example I’m giving the add-in full control to the web)



  • Open the Web.config in the Web Project
  • In the system.web section, add the following key to enable meaningful error messages:

<customErrors mode="Off" />

  • In the appSettings section, fill in the ClientId and IssuerId with the App Id generated in AppRegNew above. ClientSigningCertificateSerialNumber (you will need to add this key) is the serial number of the certificate from the text file created in step 3; there should be no spaces or hyphens in the value.

<appSettings>
  <add key="ClientID" value="guid" />
  <add key="ClientSigningCertificateSerialNumber" value="serial" />
  <add key="IssuerId" value="guid" />
</appSettings>

The Office Developer Tools for Visual Studio may have added add-in setting keys for ClientSigningCertificatePath and ClientSigningCertificatePassword. These are not used in a production add-in and should be deleted. HOWEVER, the add-in project will not publish properly without them, so the trick is to publish and deploy the web project and add-in, then remove them from the published web site web.config file.

Modify the TokenHelper file

The TokenHelper.cs (or .vb) file generated by Office Developer Tools for Visual Studio needs to be modified to work with the certificate stored in the Windows Certificate Store and to retrieve it by its serial number.

  • Near the bottom of the #region private fields part of the file are declarations for ClientSigningCertificatePath, ClientSigningCertificatePassword, and ClientCertificate. Remove all three.
  • In their place, add the following line:

     private static readonly string ClientSigningCertificateSerialNumber = WebConfigurationManager.AppSettings.Get("ClientSigningCertificateSerialNumber");

  • Find the line that declares the SigningCredentials field. Replace it with the following line:

     private static readonly X509SigningCredentials SigningCredentials = GetSigningCredentials(GetCertificateFromStore());

  • Go to the #region private methods part of the file and add the following two methods:

private static X509SigningCredentials GetSigningCredentials(X509Certificate2 cert)
{
    return (cert == null) ? null
                          : new X509SigningCredentials(cert,
                                                       SecurityAlgorithms.RsaSha256Signature,
                                                       SecurityAlgorithms.Sha256Digest);
}

private static X509Certificate2 GetCertificateFromStore()
{
    if (string.IsNullOrEmpty(ClientSigningCertificateSerialNumber))
    {
        return null;
    }

    // Get the machine's personal store
    X509Certificate2 storedCert;
    X509Store store = new X509Store(StoreName.My, StoreLocation.LocalMachine);

    try
    {
        // Open for read-only access              
        store.Open(OpenFlags.ReadOnly);

        // Find the cert
        storedCert = store.Certificates.Find(X509FindType.FindBySerialNumber,
                                             ClientSigningCertificateSerialNumber,
                                             true)
                       .OfType<X509Certificate2>().SingleOrDefault();
    }
    finally
    {
        store.Close();
    }

    return storedCert;
}

Package the remote web application


  • In Solution Explorer, right-click the web application project (not the SharePoint Add-in project), and select Publish.
  • On the Profile tab, select New Profile on the drop-down list.
  • When prompted, give the profile an appropriate name.
  • On the Connection tab, select Web Deploy Package in the Publish method drop-down list.
  • For Package location, use any folder. To simplify later procedures, this should be an empty folder. A subfolder of the bin folder of the project is typically used.
  • For the site name, enter the name of the IIS website that will host the web application. Do not include protocol or port or slashes in the name; for example, "PayrollSite." If you want the web application to be a child of the Default Web Site, use Default Web Site/<website name>; for example, "Default Web Site/PayrollSite." (If the IIS website does not already exist, it is created when you execute the Web Deploy package in a later procedure.)
  • Click Next.
  • On the Settings tab select either Release or Debug on the Configuration drop down.
  • Click Next and then Publish. A zip file and various other files that will be used in to install the web application in a later procedure are created in the package location

To create a SharePoint Add-in package


  • Right-click the SharePoint Add-in project in your solution, and then choose Publish.
  • In the Current profile drop-down, select the profile that you created in the last procedure.
  • If a small yellow warning symbol appears next to the Edit button, click the Edit button. A form opens asking for the same information that you included in the web.config file. This information is not required since you are using the Web Deploy Package publishing method, but you cannot leave the form blank. Enter any characters in the four text boxes and click Finish.
  • Click the Package the add-in button. (Do not click Deploy your web project. This button simply repeats what you did in the final step of the last procedure.) A Package the add-in form opens.
  • In the Where is your website hosted? text box, enter the URL of the domain of the remote web application. You must include the protocol, HTTPS, and if the port that the web application will listen for HTTPS requests is not 443, then you must include the port as well; for example, https://MyServer:4444. (This is the value that Office Developer Tools for Visual Studio uses to replace the ~remoteAppUrl token in the add-in manifest for the SharePoint Add-in.)
  • In the What is the add-in's Client ID? text box, enter the client ID that was generated on the appregnew.aspx page, and which you also entered in the web.config file.
  • Click Finish. Your add-in package is created.


Wednesday, 11 May 2016

Set Up Domain Name which uses a Dynamic IP


The Problem

I have finally broken down and purchased a couple of domain names (to cover <mydomain>.com and <mydomain>.ca); they were quite simply too cheap not to buy. The problem is that I have residential high-speed service which uses dynamic IP assignment to my modem, and even though I am only using the DNS/IP for development and testing of work at this point, I need the name to resolve to a web server on a Hyper-V virtual machine and to update the IP in my nameserver records whenever it changes. A static record pointing to a dynamic IP which resolves internally to a dynamic IIS site on a dynamic Hyper-V server. Wee!

The Approach

The first step is to find a free or cheap way to automate my A records. A bit of Googling turned up a couple of options. I found DynDNS to be pretty expensive for my needs (it costs more than the price of the domains at the time of writing...), so I opted for EntryDNS. The one-time donation/fee is very appealing, and the few reviews I checked out gave the site high praise. The whole point of this service is to manage A records and to allow dynamic updates of IPs using basic URL requests. So I signed up.

Next I went to the registrar of my domains and dug through the documentation until I found how to assign third-party name servers to my record and changed them to ns1.entrydns.net and ns2.entrydns.net ... then waited a full day for this change to propagate (I guess there was a reason for the sale ...).

In the meantime I set up my router, modem and server (Hyper-V host running Windows 10 Pro) to port forward requests on ports 80 through 443:
Modem - dynamic external IP, internal dynamic (192.168.x.x) IP assignment. Set up port forwarding of all incoming requests on ports 80 through 443 to the IP of my internal subnet router (192.168.0.1).
Router - dynamic external IP (192.168.0.1 preferred), internal static IP range 10.0.0.x/24. I set up port forwarding (192.168.0.1/24:80) to my Hyper-V server (10.0.0.4/24:80).

As the server has Hyper-V on Windows 10 Pro, I decided to use a NAT on that server to manage Hyper-V development environments. A simple solution, provided by reading https://4sysops.com/archives/native-nat-in-windows-10-hyper-v-using-a-nat-virtual-switch/ and then running the following PowerShell on the Windows 10 server (as Admin):

$serverExternalIP = "10.0.0.4"
# internal IP of the host Windows 10 server

$serverInternalIP = "10.0.99.1"
# the 10.0.99.x part must be the same as the $IPRange below and the .x part must be 1
$IPRange = "10.0.99.0/24"
# NAT subnet to be used with this connection

$DestinationHyperVServer = "10.0.99.225"
# this is the IP of the Hyper-V server with the IIS service running (and SharePoint in my case)

# Run Get-NetNat first to see if you already have a defined NAT service running
New-VMSwitch -Name "NAT" -SwitchType NAT -NATSubnetAddress $IPRange
New-NetIPAddress -IPAddress $serverInternalIP -PrefixLength 24 -InterfaceAlias "vEthernet (NAT)"
# change the IP address of the host Windows 10 server to match the range with the first IP
New-NetNat -Name NAT -InternalIPInterfaceAddressPrefix $IPRange

Add-NetNatStaticMapping -NatName "NAT" -Protocol TCP -ExternalIPAddress 0.0.0.0 -InternalIPAddress $DestinationHyperVServer -InternalPort 80 -ExternalPort 80
Add-NetNatExternalAddress -NatName "NAT" -IPAddress $serverExternalIP -PortStart 80 -PortEnd 443
Get-NetNatExternalAddress
Get-NetNatStaticMapping
Get-NetNat

Next I set up a basic Hyper-V farm consisting of three servers (using the Windows Server 2012 R2 developer trial) on the NAT virtual switch created by the PowerShell above, as follows:

AD (DNS and Active Directory services for my two purchased domains; all settings done in PowerShell to enable quick rebuilding): IP set internally to 10.0.99.2, subnet 255.255.255.0, and gateway set to the host system's internal NAT IP (10.0.99.1 in this case).
CLS set to use 10.0.99.3 in DNS. 

IPv4 DNS servers set to Google (8.8.8.8 and 8.8.4.4) with my domain's DNS suffix appended in advanced DNS settings (not sure if this is necessary, but it doesn't hurt so ...)

Then I joined the server to the newly created Domain.

Left the window open to allow record adding when other servers join the domain. 

IIS (IIS/SharePoint development server)
Pretty much the same IP settings, except the IP address was set to 10.0.99.225 and the alternate DNS server set to 10.0.99.2 (so the internal DNS service resolves internal names). IIS was initially set up with just HTTP and no bindings.

SQL (SQL Server 2012): an IP in the unused 10.0.99.x range; otherwise the same settings as the IIS server.

Testing is pretty simple - browse to localhost and 10.0.99.225 from the IIS machine; both should produce a site. Browse to 10.0.99.225 from SQL and AD; you should see the same site. From the Hyper-V host, browse to 10.0.99.225, 10.0.0.4 (or whatever its internal IP address is) and your external IP (from the modem); all three should open the same site.
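The reachability part of this testing can also be scripted (a sketch using Test-NetConnection, available on Windows 10; the IPs are the ones from this example):

```powershell
# Check that the IIS VM answers on port 80 from the Hyper-V host
Test-NetConnection -ComputerName 10.0.99.225 -Port 80
# And that the host's own forwarded address answers too
Test-NetConnection -ComputerName 10.0.0.4 -Port 80
```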

Once the nameservers have refreshed and your whois record shows them as being used, test the domain name on both the server and host level.

For SharePoint I set up AAM for each of the subsites to go to different applications at the same IP and this worked well but will not go into detail here.

Now came the fun part. I created the following PowerShell script to handle updating the dynamic IP recorded on the EntryDNS service. This uses the domain record tokens in a string array; I'll likely change this to read an input file of CSV or XML delimited values.
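Reading the codes from a CSV instead of a hard-coded array could look like this (a sketch; the file path and column name are assumptions):

```powershell
# codes.csv is assumed to have a header line "Code" followed by one EntryDNS token per row
$Codes = Import-Csv "C:\powershellscripts\codes.csv" | Select-Object -ExpandProperty Code
```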

The global logic is as follows:
Get the current public internet IP (from checkip.dyndns.com), compare to the last recorded IP address registered and if different update the EntryDNS entry and the IP record. Then set up a timed job to run the script on a regular interval.

The code:
#DNS Codes for each domain entered in EntryDNS
    $Codes = ("put the codes here")
#a simple text file holding a single IP address; change to any location you like for a temp file (C:\temp or something)
    $IpFile = "C:\IP.txt"
    
    # set to true to debug the settings. Logging is very verbose so not a good idea to leave on.
    $logging = $false
    $logpath = "C:\testlogs\"

function UpdateEntryDNS ($Code, $currentIP) {
    try {
        $req = "https://entrydns.net/records/modify/$Code"+"?ip="+"$currentIP"
        # Builds the url to update the IP address
        $r = Invoke-WebRequest $req
        if ($logging -eq $true) {
            $dt = Get-Date -Format g | foreach {$_ -replace ":", "."}
            $fn = "$logpath"+$dt+$Code+".txt"

            # if the path doesn't exist, create the directory
            if ((Test-Path($logpath)) -eq $false) {mkdir $logpath}

            # output file and populate with details
            # interval of checking should be at least 1 min apart or files get overwritten
            $Logoutput =  "$dt "+ $r.StatusCode +" " +$r.Content+" "+$req  | Out-File $fn -Force
        }
        Write-Host "Updated"
    }
    catch {
        # ignore failures here; the next scheduled run will retry the update
    }
}
function Get-ExternalIP {
# the following parses the returned page
# changing the source will mean parsing the returned content appropriately.
    $Ip =(Invoke-WebRequest "checkip.dyndns.com").Content
    $Ip2 = $Ip.ToString() 
    $ip3 = $Ip2.Split(" ") 
    $ip4 = $ip3[5] 
    $ip5 = $ip4.replace("</body>","") 
    $curIP1 = $ip5.replace("</html>","")
    $curIP = ($curIP1.replace("`n","")).Trim()
    return $curIP
}
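A terser way to pull the address out of the returned page is a regex match (a sketch, assuming the same checkip.dyndns.com source; the function name is illustrative):

```powershell
function Get-ExternalIPRegex {
    # the page body looks like "Current IP Address: x.x.x.x"
    $content = (Invoke-WebRequest "checkip.dyndns.com" -UseBasicParsing).Content
    if ($content -match "(\d{1,3}\.){3}\d{1,3}") { return $Matches[0] }
}
```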


$currentIP = Get-ExternalIP

# if the file exists, read it; otherwise use 0.0.0.0 so the first run updates and creates the file
if (Test-Path($IpFile)) {
    $content = ([IO.File]::ReadAllText($IpFile)).Trim()
}
else {
    $content = "0.0.0.0"
}
# New IP
if ([IPAddress]$content -ne [IPAddress]$currentIP) {
    Write-Host "Updating ... "
    Write-Host "Old IP: " $content
    Write-Host "Current IP: " $currentIP
    # (Over)write the new IP address to the file
    $currentIP | Out-File $IpFile -Force

    # update dns for all in list
    ForEach ($Code in $Codes) {UpdateEntryDNS $Code $currentIP}
}
# No New IP
else {
    Write-Host "Equal"
}

Running the above creates the file C:\IP.txt, which contains the most recent IP address. If logging is set to $true, a folder at C:\testlogs\ is also created, with a timestamp+code named file for each update attempted - inside each is the return code and content message (200 and OK are the desired results here) as well as the combined URL, for debugging purposes.

Now that this has been created and tested, save it to a good location for custom scripts on the Windows 10 Host system (C:\powershellscripts for instance) and set up a timed event to run the script.

Open 'Administrative Tools' and 'Task Scheduler' and click 'Create Task'


Give it a meaningful name

Create a trigger and set up the timing as suits your environment (no more than once every 5 minutes is recommended). Remember to enable 'Stop task if it runs longer than ...' to eventually kill any hanging processes. Click OK.


Next, add the Action: Action tab -> New -> select 'Start a program' -> set the program to powershell.exe and pass the full path to the script above as the argument (e.g. -File C:\powershellscripts\<scriptname>.ps1), then click OK.

Click ok and ok and close all those nasty windows!
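If you would rather script the scheduled task than click through the GUI, the ScheduledTasks cmdlets (Windows 8 / Server 2012 and later) can do the same thing. A minimal sketch - the task name and script path below are assumptions, so adjust them to your environment:

```powershell
# Sketch: register the updater as a scheduled task repeating every 5 minutes.
# Task name and script path are example values - change to suit your setup.
$action  = New-ScheduledTaskAction -Execute "powershell.exe" `
           -Argument "-NoProfile -ExecutionPolicy Bypass -File C:\powershellscripts\Update-EntryDNS.ps1"
$trigger = New-ScheduledTaskTrigger -Once -At (Get-Date) `
           -RepetitionInterval (New-TimeSpan -Minutes 5)
Register-ScheduledTask -TaskName "EntryDNS IP Update" -Action $action -Trigger $trigger
```

Run this once from an elevated PowerShell window; the task then repeats on the interval without further interaction.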

Voila! Minimal downtime when the IP changes.

Note that one still needs to add DNS entries for subsites in SharePoint (in Alternate Access Mappings), in DNS (EntryDNS, which adds more update codes to the list) and, to some degree, in IIS. Ideally a site creation script used in this environment would do all of that in one step.
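For the SharePoint side of that, a hedged sketch of adding an Alternate Access Mapping from the SharePoint Management Shell might look like the following - the web application URL, host name and zone are placeholders, not values from this farm:

```powershell
# Sketch: map an extra public host name onto an existing web application.
# URLs and zone are example values - substitute your own.
New-SPAlternateURL -WebApplication "http://sp13" `
                   -Url "http://subsite.example.com" `
                   -Zone Internet
```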

The Result

A relatively simple PowerShell script which can be reused on a timed basis to keep my DNS entries up to date with my server's IP.

Going Forward

Potentially use a CSV / text file to manage multiple domains, sub-sites and keys, updating the A record of each (in the absence of a way to use a wildcard).
If other free-ish services are available, add routines for each so a simple toggle can be selected in the script for parsing of both the IP resolution and the entryDNS site.
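The CSV idea above might take a shape like this - the file name and column names (Domain, Code) are assumptions for illustration:

```powershell
# Sketch: drive the update loop from a CSV instead of a hard-coded list.
# domains.csv is assumed to have the columns: Domain,Code
$entries = Import-Csv "C:\powershellscripts\domains.csv"
foreach ($entry in $entries) {
    Write-Host "Updating $($entry.Domain)"
    UpdateEntryDNS $entry.Code $currentIP
}
```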

If this helped, let me know! If it is broken, Let me know!

Tuesday, 5 April 2016

PowerShell SharePoint Backup and Restore made automatic

PowerShell SharePoint Backup and Restore

The Problem

I need an automatic method of backing up and restoring my SharePoint 2013 farm (on prem) to the same or a different farm (for creating a UAT/DEV environment refresh for instance ...)

The Approach

Use PowerShell to create a fast backup and create a fast restore.

The Result

First create a folder on one of the SharePoint servers (C:\SP_Backups in this example).

Next, the current logged-in user needs the securityadmin fixed server role on the SQL Server instance, the db_owner fixed database role on all databases that are to be updated / accessed, and membership in the Administrators group on the server on which you are running the Windows PowerShell cmdlets - in other words, a member of SPShellAdmins. So if this fails with some kind of access or privilege error, have the farm admin add the user running the script using:
Add-SPShellAdmin -UserName $domain\$user
When done, the same admin can remove the user using:
Remove-SPShellAdmin -UserName $domain\$user

Save the following to a ps1 file in the newly created folder above. Edit the path variables (highlighted below) and run it from an admin PS window (using .\<filename>.ps1)

# REQUIRES that the account running this has:
# - securityadmin fixed server role on the SQL Server instance
# - db_owner fixed database role on all databases that are to be updated.
# - Administrators group on the server on which you are running the Windows PowerShell cmdlets.
# can add by An administrator using the Add-SPShellAdmin cmdlet to grant permissions to use SharePoint 2013 cmdlets

# Add-SPShellAdmin -UserName $domain\$user
#
# load up the SharePoint snap-in if not already loaded
if ((Get-PSSnapin "Microsoft.SharePoint.PowerShell" -ErrorAction SilentlyContinue) -eq $null) {
    Add-PSSnapin "Microsoft.SharePoint.PowerShell"
}
# get some variables
$stamp = Get-Date -Format yyyy-MM-dd
# network share to copy completed backup to
$serverdir = "\\SERVER\Shared\baersland-farm-backup"
# local machine dir to create backup to
$dir = "\\SP13\SP_Backups\$stamp"
# create a new subfolder based on year-month-day
New-Item $dir -ItemType directory -Force

$ctdb = Get-SPContentDatabase
# create the restore powershell script in the newly created folder
Out-File -FilePath $dir\1-run_me_to_restore.ps1 -Force -InputObject "# Restore Script created:  $stamp"
Out-File -FilePath $dir\1-run_me_to_restore.ps1 -Append -InputObject "# Optional for moving to another farm -FarmCredentials domain\user -NewDatabaseServer newdbserver"
foreach ($db in $ctdb) {
    $name = $db.Name
    Write-Host "processing ... $name"
    Backup-SPFarm -Directory $dir -BackupMethod Full -Item $name -Verbose
    Write-Host "backup of $name success!!"
    Out-File -FilePath $dir\1-run_me_to_restore.ps1 -Append -InputObject "Restore-SPFarm -Directory $dir -RestoreMethod Overwrite -Item $name -Verbose -Confirm:`$false"
}
# move from local to network share
Copy-Item -Path $dir -Destination $serverdir -Recurse -Force
Clear-Host
Write-Host "░░░░░░░░░░░░░░░░░░░░░█████████
░░███████░░░░░░░░░░███▒▒▒▒▒▒▒▒███
░░█▒▒▒▒▒█░░░░░░░███▒▒▒▒▒▒▒▒▒▒▒▒▒███
░░░█▒▒▒▒▒▒█░░░░██▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒██
░░░░█▒▒▒▒▒█░░░██▒▒▒▒▒██▒▒▒▒▒▒██▒▒▒▒▒███
░░░░░█▒▒▒█░░░█▒▒▒▒▒▒████▒▒▒▒████▒▒▒▒▒▒██
░░░█████████████▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒██
░░░█▒▒▒▒▒▒▒▒▒▒▒▒█▒▒▒▒▒▒▒▒▒█▒▒▒▒▒▒▒▒▒▒▒██
░██▒▒▒▒▒▒▒▒▒▒▒▒▒█▒▒▒██▒▒▒▒▒▒▒▒▒▒██▒▒▒▒██
██▒▒▒███████████▒▒▒▒▒██▒▒▒▒▒▒▒▒██▒▒▒▒▒██
█▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒█▒▒▒▒▒▒████████▒▒▒▒▒▒▒██
██▒▒▒▒▒▒▒▒▒▒▒▒▒▒█▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒██
░█▒▒▒███████████▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒██
░██▒▒▒▒▒▒▒▒▒▒████▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒█
░░████████████░░░█████████████████" -ForegroundColor Green

The last bit is just a fun way of knowing when the script and backup have worked.

Restoring is simple: open the date-stamped folder and run the script this script created (<path>/1-run_me_to_restore.ps1). If moving to another environment with different farm credentials and a different database server (recommended), edit the file by removing the comment and adding values for the required info:
# Optional for moving to another farm -FarmCredentials domain\user -NewDatabaseServer newdbserver
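For illustration, an edited restore line might end up looking like this - the content database name, farm account and database server below are hypothetical placeholders:

```powershell
# Hypothetical example of a restore line after editing for a different farm
Restore-SPFarm -Directory \\SERVER\Shared\baersland-farm-backup\2016-04-05 `
    -RestoreMethod Overwrite -Item WSS_Content `
    -FarmCredentials (Get-Credential "DOMAIN\spfarm") `
    -NewDatabaseServer SQLDEV01 -Verbose -Confirm:$false
```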

Going Forward

Parameterize the script so it can be called as part of a toolset.
Set up a script to add this as a scheduled task on one of the servers.
Add error handling (try/catch with a nasty catch ascii image!)