Tag Archives: Powershell

Setting Up a vSphere Service Account for Pivotal BOSH Director using PowerCLI

BOSH Director requires a fairly powerful vCenter service account to do all of the things it does.

The list of permissions required is here, and it’s extensive.

You can always take the shortcut and make your account an Administrator of the vSphere environment, but that violates the whole “least privilege” principle and I don’t like that in production environments.

I wrote a PowerCLI function that automatically creates this vCenter role and adds the specified user/group to it.

It greatly reduces the time to set this up.  Hope this helps someone out.

function Add-BoshVCenterAccount()
{
  <#
    .SYNOPSIS
    Grants the correct vSphere permissions to the specified service user/group for BOSH Director to function.

    .DESCRIPTION
    This function creates a new vSphere role called PKS Administrators if it does not exist already. It then assigns the specified local or domain user/group to the role at the root vCenter server object.

    .PARAMETER Group
    Specifies the Group to assign the role to.

    .PARAMETER User
    Specifies the User to assign the role to.

    .PARAMETER Domain
    If specified, then the User or Group specified is assumed to be a domain object. Specify the AD Domain the user/group is a member of.

    .OUTPUTS
    The resultant permission.

    .EXAMPLE
    Connect-VIServer -Server myvcenter.domain.com
    Add-BoshVCenterAccount -Domain mydomain -User user1
  #>
  [CmdletBinding(SupportsShouldProcess=$true)]
  param
  (
    [Parameter(ParameterSetName="user")]
    [string] $User,

    [Parameter(ParameterSetName="group")]
    [string] $Group,

    [string] $Domain
  )

  $version = $Null
  if ( (Get-Variable | Where-Object { $_.Name -ieq "DefaultViServer" }) -and $DefaultViServer )
  {
    $version = $DefaultViServer.Version
  }
  else
  {
    throw ("Use Connect-VIServer first!")
  }

  # Permissions for 6.5+:
  $privileges = @(
    "Manage custom attributes",
    "Allocate space",
    "Browse datastore",
    "Low level file operations",
    "Remove file",
    "Update virtual machine files",
    "Delete folder",
    "Create folder",
    "Move folder",
    "Rename folder",
    "Set custom attribute",
    "Modify cluster",
    "Assign network",
    "Assign virtual machine to resource pool",
    "Migrate powered off virtual machine",
    "Migrate powered on virtual machine",
    "Add existing disk",
    "Add new disk",
    "Add or remove device",
    "Change CPU count",
    "Change resource",
    "Configure managedBy",
    "Disk change tracking",
    "Disk lease",
    "Display connection settings",
    "Extend virtual disk",
    "Modify device settings",
    "Raw device",
    "Reload from path",
    "Remove disk",
    "Reset guest information",
    "Set annotation",
    "Swapfile placement",
    "Unlock virtual machine",
    "Guest Operation Program Execution",
    "Guest Operation Modifications",
    "Guest Operation Queries",
    "Answer question",
    "Configure CD media",
    "Console interaction",
    "Defragment all disks",
    "Device connection",
    "Guest operating system management by VIX API",
    "Power Off",
    "Power On",
    "VMware Tools install",
    "Create from existing",
    "Create new",
    "Allow disk access",
    "Allow read-only disk access",
    "Allow virtual machine download",
    "Allow virtual machine files upload",
    "Clone template",
    "Clone virtual machine",
    "Deploy template",
    "Mark as template",
    "Mark as virtual machine",
    "Modify customization specification",
    "Promote disks",
    "Read customization specifications",
    "Create snapshot",
    "Remove Snapshot",
    "Rename Snapshot",
    "Revert to snapshot",
    "vApp application configuration"
  )

  if ( $version -ilike "6.0*" )
  {
    # Version 6.0 uses the older Inventory Service tag privilege names:
    $privileges = $privileges | Where-Object { $_ -inotmatch '^(Create|Edit|Delete)Tag$' }
    $privileges += "Create Inventory Service Tag"
    $privileges += "Edit Inventory Service Tag"
    $privileges += "Delete Inventory Service Tag"
  }

  $role = Get-VIRole | Where-Object { $_.Name -ieq "PKS Administrators" }
  if ( !$role )
  {
    $role = New-VIRole -Name "PKS Administrators" -Privilege $privileges
  }

  $principalParam = @{}
  $idFieldName = "Name"
  if ( $Domain )
  {
    $principalParam.Add("Domain", $Domain)
    $idFieldName = "Id"
  }

  if ( $PSCmdlet.ParameterSetName -ieq "user" )
  {
    $principalParam.Add($idFieldName, $User)
    $principalParam.Add("User", $true)
  }
  else
  {
    $principalParam.Add($idFieldName, $Group)
    $principalParam.Add("Group", $true)
  }

  $principal = Get-VIAccount @principalParam

  if ( $PSCmdlet.ShouldProcess($DefaultViServer.Name, "Add permission to root vCenter for account $($principal.Name) and role PKS Administrators") )
  {
    New-VIPermission -Entity "Datacenters" -Principal $principal -Role $role
  }
}


Using the Puppet CA API From Windows

Puppet Enterprise exposes a number of RESTful APIs that can be used to help automate the solution and integrate it with other things. One need I’ve run into is revoking and removing certificates from Puppet nodes in an automated fashion. My previous approach involved using SSH to connect to the Puppet Master server and run the puppet cert clean command, but I’m not a huge fan of that. With some effort, I found out how to talk to the API using Postman and PowerShell in a Windows environment. Postman was good for initial testing of the API, while I use PowerShell to fully automate solutions. I’ve outlined step-by-step how to set this up below:


The base URI for the puppet CA API is:

https://*puppet master server FQDN*:8140/puppet-ca/v1

The default port is 8140, which is configurable.


Authorization and authentication were the most difficult parts for me to figure out. Unlike the other API endpoints in Puppet, you don’t use the normal token method. The CA API uses certificate authentication and authorization is granted based on the Subject Name of the certificate your client presents to the Puppet server. By default, the ONLY machine allowed to talk to the endpoint is your Puppet Master server itself, so without modification you can’t do much with the API.

You can change the authorization rules to allow other machines to connect. You can see the configuration for this in /etc/puppetlabs/puppetserver/conf.d/auth.conf:

{
    "allow-unauthenticated": true,
    "match-request": {
        "method": "get",
        "path": "/puppet-ca/v1/certificate/",
        "query-params": {},
        "type": "path"
    },
    "name": "puppetlabs certificate",
    "sort-order": 500
},
{
    "allow": [
        "puppetmaster.domain.com"
    ],
    "match-request": {
        "method": [
            "get",
            "put",
            "delete"
        ],
        "path": "/puppet-ca/v1/certificate_status",
        "query-params": {},
        "type": "path"
    },
    "name": "puppetlabs certificate status",
    "sort-order": 500
},
{
    "allow-unauthenticated": true,
    "match-request": {
        "method": "get",
        "path": "/puppet-ca/v1/certificate_revocation_list/ca",
        "query-params": {},
        "type": "path"
    },
    "name": "puppetlabs crl",
    "sort-order": 500
},
{
    "allow-unauthenticated": true,
    "match-request": {
        "method": [
            "get",
            "put"
        ],
        "path": "/puppet-ca/v1/certificate_request",
        "query-params": {},
        "type": "path"
    },
    "name": "puppetlabs csr",
    "sort-order": 500
}

You’ll see an array of rules defined in this file, each one granting access to particular API endpoints. In this case, I’m most concerned with the certificate endpoints shown above. (For details on the layout of this file, see Puppet’s Docs here)

The endpoint rules that specify “allow-unauthenticated” are freely accessible without authentication, so most of this article doesn’t apply to them. Just make a call from Postman or curl as normal.

However, the certificate_status endpoint has an “allow” property, which lists all of the nodes that are allowed to access the endpoint. By default, only the name of your Puppet Master server appears here.

Normally, you could probably add entries to this list, restart your Puppet Master services, and go. The issue is this file is actually managed by Puppet, and your changes would be overwritten the next time the Puppet agent runs.

This whitelist is governed by the puppet_enterprise::profile::certificate_authority::client_whitelist setting, which can be set a couple of ways. The first way is to log into the Puppet Master GUI and do the following:

  1. Go to Inventory and select your Puppet Master server
  2. Select the “Groups” tab and click the PE Certificate Authority Group
  3. Click the “Classes” tab
  4. Set the client_whitelist parameter under puppet_enterprise::profile::certificate_authority

Normally, this would work, but when the Puppet agent runs you might get the following error:

Error: Could not retrieve catalog from remote server: Error 400 on SERVER: Duplicate declaration: Class[Puppet_enterprise::Profile::Master] is already declared; cannot redeclare on node

The workaround I found in a Q/A article suggested adding the setting to common.yaml and having Hiera set it instead. This worked well for me. My common.yaml file looks like this:

# Allows the listed machines to communicate with the puppet-ca API:
puppet_enterprise::profile::certificate_authority::client_whitelist:
  - server1.mydomain.com
  - server2.mydomain.com

Once this was pushed to the Puppet Master server, I did a Puppet agent run using puppet agent -t from the server and it applied the settings. Checking auth.conf again, I now see this:

"allow": [
    "puppetmaster.domain.com",
    "server1.mydomain.com",
    "server2.mydomain.com"
],
"match-request": {
    "method": [
        "get",
        "put",
        "delete"
    ],
    "path": "/puppet-ca/v1/certificate_status",
    "query-params": {},
    "type": "path"
}

Now that my servers are authorized to access the API, I can make calls using a client certificate to authenticate to the API.


The next section shows you how to set up Postman and PowerShell to authenticate to the API. If you set up your authorization correctly as shown above, you should be able to hit the APIs.

Using Postman

To use client certificate authentication to the Puppet API, you can set up Postman using the following method.

Import the cert into Postman:

  1. Click Settings in Postman
  2. Go to Certificates
  3. Click the “Add Certificate” link
  4. Add the cert using the following settings
    • Host – Specify the FQDN of the host you want to present the cert to. Don’t specify any of the URI path, just the FQDN and port.
    • CRT File – Use the PEM file in the certs directory
    • KEY File – Use the PEM file in the private_keys directory
    • NO passphrase


Once that is done, you can issue a GET command to a URI like this and get a response:

https://puppetmaster.domain.com:8140/puppet-ca/v1/certificate_statuses/key

The “key” portion of the URI is required, but the word “key” is arbitrary. I think you can pretty much type anything you want there.

The response is a JSON array of certificate status objects, one entry for each certificate the Puppet CA knows about.
If you get a “Forbidden” error, you either have the URI slightly wrong or you don’t have the authorization correct. The array of names in the “allow” section of the API rule MUST match the Subject Name of the certificate.

Using PowerShell

To get this to work with PowerShell, you have to export your Puppet certs as a PFX and reference them in an Invoke-RestMethod call.

To create a PFX from the certs, do the following:

  1. Install OpenSSL
      • If you have Git for Windows installed, you already have this. Just change to c:\program files\Git\usr\bin
  2. Run the following
C:\Program Files\Git\usr\bin\openssl.exe pkcs12 -export -out "c:\temp\server1.domain.com.pfx" -inkey "C:\ProgramData\PuppetLabs\puppet\etc\ssl\private_keys\server1.domain.com.pem" -in "C:\ProgramData\PuppetLabs\puppet\etc\ssl\certs\server1.domain.com.pem"

Don’t specify an export password.

Once that is done, call the following cmdlet:

Invoke-RestMethod -Uri "https://puppetmaster.domain.com:8140/puppet-ca/v1/certificate_statuses/key" -Certificate (Get-PfxCertificate -FilePath C:\temp\server1.domain.com.pfx) -Headers @{"Content-Type" = "application/json" }

Voila! That’s it.
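Since my original goal was revoking and removing node certificates without SSH, here’s a sketch of how that can be done against the same API.  The function names below are my own inventions, not part of any module; the PUT with a desired_state of “revoked” followed by a DELETE mirrors what Puppet’s CA API documents for the certificate_status endpoint, and text/pson is the content type that endpoint has historically expected:

```powershell
# Sketch: revoke and then remove a node's certificate via the Puppet CA API.
# Helper and function names here are illustrative only.
function Get-PuppetCertStatusUri
{
  param([string] $Master, [string] $CertName, [int] $Port = 8140)
  # Build the certificate_status URI for a single node certificate:
  "https://{0}:{1}/puppet-ca/v1/certificate_status/{2}" -f $Master, $Port, $CertName
}

function Remove-PuppetNodeCert
{
  param([string] $Master, [string] $CertName, [string] $PfxPath)
  $cert = Get-PfxCertificate -FilePath $PfxPath
  $uri  = Get-PuppetCertStatusUri -Master $Master -CertName $CertName

  # Revoke the cert first...
  Invoke-RestMethod -Uri $uri -Method Put -Certificate $cert -ContentType "text/pson" -Body '{"desired_state":"revoked"}'

  # ...then delete it from the CA, which is roughly what "puppet cert clean" does:
  Invoke-RestMethod -Uri $uri -Method Delete -Certificate $cert
}
```

With that in place, something like Remove-PuppetNodeCert -Master puppetmaster.domain.com -CertName node1.domain.com -PfxPath C:\temp\server1.domain.com.pfx replaces the old SSH-and-puppet-cert-clean approach with two REST calls.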


Powershell Module for Logging to a File

Despite our best efforts as coders, automated processes sometimes fail.  One of the principal ways to troubleshoot such processes is to log data to a file so you can follow what happened after the fact.  I have a TON of scripts that have to do this, so it made sense to cobble together some functions that make it easier.

To this end, I’ve written a script module called bsti.logging.  It features the following functions:


Once you import the module, you call New-LogFile to set up a new file to write to.  You can specify options to append the weekday or a more specific timestamp (e.g. MyLog_mmddyyyyhhmmss.log) to the log file name.  For timestamped log files, you can also set up retention so old log files get automatically deleted after a period of time or after so many accumulate.


The module handles log file naming in three ways, and you need to be clear on them to get good use out of it:

1) Standard – The log file path you pass in is used unchanged by the function.  The purging parameters are ignored, and you must use -Append or the file will be overwritten if it already exists.

2) Circular – The log file will have _weekday appended to the file name before the extension.  If you pass in C:\temp\log\MyLogFile.log on a Monday, for example, you get:
MyLogFile_Monday.log
When you call New-LogFile with circular naming and the *same* log file path again that same day, it will automatically be appended to.  When the next Monday rolls by, it is *automatically* overwritten.

This scheme is good if you call the script that writes to the log file frequently, don’t want to manage a large number of log files, and don’t need more than 7 days of log file history.
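As an illustration of the circular scheme, here’s roughly how such a weekday-stamped name can be computed (this is my own sketch, not the module’s actual internals):

```powershell
function New-CircularLogName
{
  param([string] $Path, [datetime] $Date = (Get-Date))
  # Split off the extension so the weekday lands just before it:
  $ext  = [IO.Path]::GetExtension($Path)
  $stem = $Path.Substring(0, $Path.Length - $ext.Length)
  # Invariant culture keeps the weekday name in English regardless of locale:
  "{0}_{1}{2}" -f $stem, $Date.ToString("dddd", [Globalization.CultureInfo]::InvariantCulture), $ext
}

New-CircularLogName -Path "C:\temp\log\MyLogFile.log" -Date ([datetime]"2015-03-30")
# → C:\temp\log\MyLogFile_Monday.log
```

Appending versus overwriting is then just a matter of checking whether the existing file was last written today.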

3) DateStamped – This appends a detailed datestamp to the log file name before the extension.  In the example above, you get:
MyLogFile_03292015200500.log (Assuming 3/29/2015 8:05 PM)
This means every time you call New-LogFile (assuming you wait at least 1 second!), you get a unique log file.  Append is essentially ignored.
The PurgeAfter and KeepNumberOfFiles parameters, if specified, cause New-LogFile to call Remove-LogFiles automatically and keep your log files trimmed as you specify.  If you specify both PurgeAfter AND KeepNumberOfFiles, both thresholds are observed (meaning a file must be older than PurgeAfter AND you still have to have KeepNumberOfFiles remaining).

This scheme is good if you need a specified history of log files and want individual log files for each run of your process.  The automatic cleanup is a bonus.
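The DateStamped scheme can be sketched the same way (again, the function name and internals here are illustrative only, not the module’s code):

```powershell
function New-DateStampedLogName
{
  param([string] $Path, [datetime] $Date = (Get-Date))
  # Split off the extension so the datestamp lands just before it:
  $ext  = [IO.Path]::GetExtension($Path)
  $stem = $Path.Substring(0, $Path.Length - $ext.Length)
  # MMddyyyyHHmmss gives a second-resolution stamp, so each call yields a unique name:
  "{0}_{1:MMddyyyyHHmmss}{2}" -f $stem, $Date, $ext
}

New-DateStampedLogName -Path "C:\temp\log\MyLogFile.log" -Date ([datetime]"2015-03-29 20:05:00")
# → C:\temp\log\MyLogFile_03292015200500.log
```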

Once you’ve set up your new log file, you call the following functions to write to it.  These functions also echo to the console, so you can replace any calls to Write-Host with these functions and get messages to your console and to a log file:



Makes things pretty simple.

As with all of my modules and module functions, I heavily document them using Powershell comment-based help.  Just try:

Get-Help New-LogFile -Full

This module depends on my bsti.conversion module, so if you use it you need both modules.  I posted about that module here.

Here is a link to the new bsti.logging module.
Here is a link to the bsti.conversion module.

TimeSpan Conversion Function and Module

I have a ton of Powershell code I’ve written over the last 6 years or so that I’m in the process of cleaning up and looking to share.  I plan on re-presenting them as a set of scripts and script modules I’ll be featuring in a series of blog posts coming in the near future.

I’ll start simply.  I have a new script module called bsti.conversion that has a single function:  ConvertTo-TimeSpan

I always thought it was neat that you could type in 1kb and Powershell would convert that value to 1024.  You can also use mb (megabytes), tb (terabytes), and pb (petabytes).  I don’t see that eb (exabytes) works.  In any case, I always wished I could do the same with time units like this:

12m (minutes) or 24h (hours) or 7d (days)

The ConvertTo-TimeSpan function I’ve included in this module does just that.

What I use this for is I have functions and scripts I like to write that require the user to pass in an easy time unit value.

This functionality can also be achieved with a native [Timespan] cast, like so:

[Timespan]("1.00:00:00")  # 1 day
[Timespan]("1.6:00:00")   # 1 day, 6 hours
[Timespan]("1:22:00")     # 1 hour, 22 minutes

The conversion function is a little less precise, but a bit more human-readable, which is important to me since most of my users are not .NET programmers and don’t understand the format of a timespan object right offhand.
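To make the idea concrete, here’s a minimal sketch of how such a conversion can work.  ConvertTo-TimeSpanSketch is my illustration here, not the module’s actual source; the real ConvertTo-TimeSpan does more validation:

```powershell
function ConvertTo-TimeSpanSketch
{
  param([string] $Value)
  # Match a number followed by a single unit letter: s, m, h, or d.
  if ( $Value -match '^(\d+(?:\.\d+)?)([smhd])$' )
  {
    $n = [double] $Matches[1]
    switch ( $Matches[2] )
    {
      "s" { [TimeSpan]::FromSeconds($n) }
      "m" { [TimeSpan]::FromMinutes($n) }
      "h" { [TimeSpan]::FromHours($n) }
      "d" { [TimeSpan]::FromDays($n) }
    }
  }
  else
  {
    # Fall back to the native cast for "d.hh:mm:ss"-style strings:
    [TimeSpan] $Value
  }
}

ConvertTo-TimeSpanSketch "90m"        # → 01:30:00
ConvertTo-TimeSpanSketch "1.6:00:00"  # → 1 day, 6 hours
```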

The function in this module supports both formats:


The module files can be downloaded here.
Once downloaded, extract the folder to the C:\windows\system32\windowspowershell\v1.0\modules directory.  The final structure should look like this:

Then just use the Import-Module bsti.conversion command as shown above.

Not bad for a start, hope you enjoy.

UPDATE:  I’m adding my stuff to GitHub.  Bear with me while I come up-to-speed on how to use it.  Find this module project here:


Automatically Transcripting all Powershell Sessions

If you love Windows Powershell as much as I do, you probably find yourself using it to complete day-to-day management tasks in addition to scripting and automation.

More and more hardware vendors, like NetApp and VMware, provide very robust Powershell toolsets.  Because I’m such a command line guy, and these Powershell libraries are so powerful, I perform nearly all of my management tasks from Powershell.  I find tasks like provisioning and destroying storage, creating clones, managing snapshots, and getting data from virtual machines much easier via the command line in many cases.

Because this is a day-to-day thing to me, it’s useful to have all of the code I type in and the output I get back automatically logged to a file for future perusal.  I liken it to the administrators who manage a lot via SSH, and set up Putty to log all sessions to a file.  This allows me to look at past actions, remember how I did stuff, or see what I did wrong if something got messed up.  It can also be helpful for auditing if you need to track who did what using Powershell.

Below is a procedure I use on all management stations I use Powershell from.  This procedure automatically logs all activity from the command line to a file.

By default, it creates a new transcript file in the C:\users\myusername\LogFiles\Powershell\computername directory.

You can override this if you want logs to go to a central location by calling the Set-TranscriptFilePath function.

For instance, to transcript everything to a central file share:

Set-TranscriptFilePath -path "\\myserver\LogFiles\Powershell\server1"

It creates one new transcript file per session you launch, so you won’t have multiple sessions writing over the same file.

To set this up, create a new file called profile.ps1 in one of the following directories:

Apply to just the current user:

C:\users\myusername\documents\WindowsPowershell\profile.ps1

Or apply to all users on the computer:

C:\Windows\System32\WindowsPowerShell\v1.0\profile.ps1
Copy the following script text to the profile script you created:

<#
  Windows Powershell console profile script. This script is generic enough to be run from any machine.
  It sets up console logging to a network share for servers and locally for workstations.
  NOTE: This will not transcript in Powershell ISE! Transcripting in ISE is supported in the current
  (early) version of Windows Management Framework 5.0, however.
#>

$script:TranscriptFileKey = "HKLM:\SOFTWARE\PowershellManagement\Powershell"

function Get-TranscriptFilePath()
{
  <#
    .SYNOPSIS
    This function returns the location where Powershell session transcript log files go.
  #>

  $transcriptFilePath = ""

  # User can override log path in registry:
  if ( Test-Path -Path $script:TranscriptFileKey )
  {
    $regKey = Get-Item -Path $script:TranscriptFileKey
    if ( $regKey.Property -icontains "RootTranscriptPath" )
    {
      $transcriptFilePath = (Get-ItemProperty -Path $script:TranscriptFileKey -Name "RootTranscriptPath").RootTranscriptPath
    }
  }

  if ( !$transcriptFilePath )
  {
    # Station is a workstation, use local path:
    $transcriptFilePath = Join-Path $Env:USERPROFILE -ChildPath "LogFiles\Powershell\$($Env:ComputerName)"
  }

  # Create the log file path if it does not exist:
  if ( !(Test-Path -Path $transcriptFilePath) )
  {
    New-Item -ItemType directory -Path $transcriptFilePath -Force | Out-Null
  }

  $transcriptFilePath
}

function New-TranscriptFilePath()
{
  # Builds a unique transcript file name for this session (one file per process ID):
  Join-Path -Path (Get-TranscriptFilePath) -ChildPath ("Powershell {0:MM dd yyyy hh mm ss} {1:00000}.log" -f (Get-Date),$PID)
}

function Test-Administrator()
{
  $user = [Security.Principal.WindowsIdentity]::GetCurrent()
  (New-Object Security.Principal.WindowsPrincipal $user).IsInRole([Security.Principal.WindowsBuiltinRole]::Administrator)
}

function Set-TranscriptFilePath()
{
  <#
    .SYNOPSIS
    This function sets the transcript file path from the default. This affects all future Powershell sessions.

    .PARAMETER Path
    Specifies the new path where future transcript files get saved to. This will remain the path until it is changed.
    Set this to "" to reset to the default path: C:\users\username\LogFiles\Powershell\computername
  #>
  param
  (
    [string] $Path
  )

  if ( !(Test-Administrator) )
  {
    throw ("You must launch this console as an administrator to execute this function!")
  }

  if ( !(Test-Path -Path $script:TranscriptFileKey) )
  {
    # Create the registry key:
    New-Item -Path $script:TranscriptFileKey -Force -ErrorAction Stop | Out-Null
  }

  Set-ItemProperty -Path $script:TranscriptFileKey -Name RootTranscriptPath -Value $Path -ErrorAction Stop | Out-Null

  if ( !$Path )
  {
    $Path = Join-Path $Env:USERPROFILE -ChildPath "LogFiles\Powershell\$($Env:ComputerName)"
  }

  Write-Host ("Future transcript files will be saved to the following directory: $Path") -ForegroundColor Green
}

# Start transcripting this session:
Start-Transcript -Path (New-TranscriptFilePath) | Out-Null

Once complete, every new Powershell console session you launch will be automatically transcripted!

Note: As of Powershell 4.0, you can’t transcript from the Powershell session that gets launched from Powershell ISE.  This is due to a difference in the console.  However, I have noticed it works OK in the preview of Windows Management Framework 5.0, so I suspect support for this is coming soon.

I’ve uploaded the script here if you’d rather not cut-and-paste.