Category Archives: Automation

Gitlab – Webhooks stop working – Internal Error 500

After my latest update to Gitlab (to version 10.7.3 ee), I found my webhooks on several of my repositories stopped working.  When I’d manually test them from the Gitlab Interface, I got a generic 500 error:

Gitlab 500

I figured this had something to do with the recent update, so I dug into the logs.  I found the issue in /var/log/gitlab/gitlab-rails/production.log:

Gitlab::HTTP::BlockedUrlError (URL 'https://myserver.domain.com:8170/code-manager/v1/webhook?type=github&token=blahblahblahblah' is blocked: Requests to the local network
are not allowed):
lib/gitlab/proxy_http_connection_adapter.rb:17:in `rescue in connection'
lib/gitlab/proxy_http_connection_adapter.rb:14:in `connection'
app/services/web_hook_service.rb:73:in `make_request'
app/services/web_hook_service.rb:26:in `execute'
app/models/hooks/web_hook.rb:10:in `execute'
app/services/test_hooks/base_service.rb:22:in `block in execute'
app/services/test_hooks/base_service.rb:19:in `catch'
app/services/test_hooks/base_service.rb:19:in `execute'
app/controllers/projects/hooks_controller.rb:41:in `test'
lib/gitlab/i18n.rb:50:in `with_locale'
lib/gitlab/i18n.rb:56:in `with_user_locale'
app/controllers/application_controller.rb:334:in `set_locale'
lib/gitlab/middleware/multipart.rb:95:in `call'
lib/gitlab/request_profiler/middleware.rb:14:in `call'
ee/lib/gitlab/jira/middleware.rb:15:in `call'
lib/gitlab/middleware/go.rb:17:in `call'
lib/gitlab/etag_caching/middleware.rb:11:in `call'
lib/gitlab/middleware/read_only/controller.rb:28:in `call'
lib/gitlab/middleware/read_only.rb:16:in `call'
lib/gitlab/request_context.rb:18:in `call'
lib/gitlab/metrics/requests_rack_middleware.rb:27:in `call'
lib/gitlab/middleware/release_env.rb:10:in `call'

The whole “requests to the local network are not allowed” thing was new to me, so I found this:

https://docs.gitlab.com/ee/user/project/integrations/webhooks.html

The comments at the bottom showed me the way.  There is a new setting you have to enable:

  1. Log into Gitlab
  2. Go to the admin area
  3. Go to Settings
  4. Go to Outbound Requests
  5. Click the “Allow requests to the local network from hooks and services” button.
  6. Save the changes

Allow Outbound


Voilà!  It works now.
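If you manage several instances, clicking through the UI gets old; the same setting can also be flipped through GitLab's application settings API. A sketch (the hostname and token are placeholders, and the attribute name is my reading of the settings API for this release, so verify it against your version's API docs):

```shell
# Toggle "Allow requests to the local network from hooks and services"
# via the admin application settings API. The token must belong to an admin.
curl --request PUT \
     --header "PRIVATE-TOKEN: <admin-token>" \
     "https://gitlab.example.com/api/v4/application/settings?allow_local_requests_from_hooks_and_services=true"
```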

Setting Up a vSphere Service Account for Pivotal BOSH Director using PowerCLI

BOSH Director requires a fairly powerful vCenter service account to do all of the things it does.

The list of permissions required is here, and it’s extensive.

You can always take the shortcut and make your account an Administrator of the vSphere environment, but that violates the whole “least privilege” principle and I don’t like that in production environments.

I wrote a PowerCLI function that will automatically create this vCenter role and add the specified user/group to it.

It greatly reduces the time to set this up.  Hope this helps someone out.

function Add-BoshVCenterAccount()
{
<#
.SYNOPSIS
Grants the correct vSphere permissions to the specified service user/group for BOSH director to function.
.DESCRIPTION
This function creates a new vSphere role called PKS Administrators if it does not exist already. It then assigns the specified local or domain user/group to the role at the root vCenter server object.
.PARAMETER Group
Specifies the Group to assign the role to.
.PARAMETER User
Specifies the User to assign the role to.
.PARAMETER Domain
If specified, then the User or Group specified is assumed to be a domain object. Specify the AD Domain the user/group is a member of.
.OUTPUTS
[VMware.VimAutomation.ViCore.Impl.V1.PermissionManagement.PermissionImpl]
The resultant permission.
.LINK
https://docs.pivotal.io/pivotalcf/2-0/customizing/vsphere-service-account.html
.EXAMPLE
Connect-ViServer -Server myvcenter.domain.com
Add-BoshVCenterAccount -Domain mydomain -User user1
#>
[CmdletBinding(SupportsShouldProcess,DefaultParameterSetName="user")]
param
(
[Parameter(Mandatory,ParameterSetName="user")]
[string] $User,
[Parameter(Mandatory,ParameterSetName="group")]
[string] $Group,
[string] $Domain
)
$version = $Null
if ( (Get-Variable -Name "DefaultViServer" -Scope Global -ErrorAction SilentlyContinue) -and $Global:DefaultViServer )
{
$version = $Global:DefaultViServer.Version
}
else
{
throw ("Use Connect-VIServer first!")
}
# Permissions for 6.5+:
$privileges = @( `
"Manage custom attributes",
"Allocate space",
"Browse datastore",
"Low level file operations",
"Remove file",
"Update virtual machine files",
"Delete folder",
"Create folder",
"Move folder",
"Rename folder",
"Set custom attribute",
"Modify cluster",
"CreateTag",
"EditTag",
"DeleteTag",
"Assign network",
"Assign virtual machine to resource pool",
"Migrate powered off virtual machine",
"Migrate powered on virtual machine",
"Add existing disk",
"Add new disk",
"Add or remove device",
"Advanced",
"Change CPU count",
"Change resource",
"Configure managedBy",
"Disk change tracking",
"Disk lease",
"Display connection settings",
"Extend virtual disk",
"Memory",
"Modify device settings",
"Raw device",
"Reload from path",
"Remove disk",
"Rename",
"Reset guest information",
"Set annotation",
"Settings",
"Swapfile placement",
"Unlock virtual machine",
"Guest Operation Program Execution",
"Guest Operation Modifications",
"Guest Operation Queries",
"Answer question",
"Configure CD media",
"Console interaction",
"Defragment all disks",
"Device connection",
"Guest operating system management by VIX API",
"Power Off",
"Power On",
"Reset",
"Suspend",
"VMware Tools install",
"Create from existing",
"Create new",
"Move",
"Register",
"Remove",
"Unregister",
"Allow disk access",
"Allow read-only disk access",
"Allow virtual machine download",
"Allow virtual machine files upload",
"Clone template",
"Clone virtual machine",
"Customize",
"Deploy template",
"Mark as template",
"Mark as virtual machine",
"Modify customization specification",
"Promote disks",
"Read customization specifications",
"Create snapshot",
"Remove Snapshot",
"Rename Snapshot",
"Revert to snapshot",
"Import",
"vApp application configuration"
)
if ( $version -ilike "6.0*" )
{
# Version 6.0 permissions:
$privileges = $privileges | Where-Object { $_ -inotmatch '^(Create|Edit|Delete)Tag$' }
$privileges += "Create Inventory Service Tag"
$privileges += "Edit Inventory Service Tag"
$privileges += "Delete Inventory Service Tag"
}
$role = Get-VIRole | Where-Object { $_.Name -ieq "PKS Administrators" }
if ( !$role )
{
# New-VIRole expects privilege objects rather than names, so resolve them first:
$role = New-VIRole -Name "PKS Administrators" -Privilege (Get-VIPrivilege -Name $privileges)
}
$principalParam = @{}
$idFieldName = "Name"
if ( $Domain )
{
$principalParam.Add("Domain", $Domain)
$idFieldName = "Id"
}
if ( $PSCmdlet.ParameterSetName -ieq "user" )
{
$principalParam.Add($idFieldName, $User)
$principalParam.Add("User", $true)
}
else
{
$principalParam.Add($idFieldName, $Group)
$principalParam.Add("Group", $true)
}
$principal = Get-VIAccount @principalParam
if ( $PSCmdlet.ShouldProcess($DefaultViServer.Name, "Add permission to root vCenter for account $($principal.Name) and role PKS Administrators") )
{
New-VIPermission -Entity "Datacenters" -Principal $principal -Role $role
}
}


Pivotal BOSH Director Setup Error – Could not find VM for stemcell ‘sc-b0131c8f-ef44-456b-8e7c-df3951236d29’

I was trying to install Pivotal Kubernetes Services on vSphere.  I set up the initial Operations Manager .ova appliance without issue.  Since I was deploying on vSphere, I needed to configure the BOSH Director installation through the vSphere tile next.  I ran through the configuration and tried to deploy once… and it failed.  I tried again, and was dead-stopped at the above error over and over again.  I believe this came up because I deleted the bosh/0 VM and tried to have the installer run again.

When in this state, it continually fails with the following error:

Could not find VM for stemcell ‘sc-b0131c8f-ef44-456b-8e7c-df3951236d29’

I had no idea what that meant, so I found this on the tech support site:
https://discuss.pivotal.io/hc/en-us/articles/115000488247-OpsManager-Install-Updates-error-Could-not-find-VM-for-stemcell-xxxxx-

Same error, but I didn’t even have BOSH Director set up yet, so it didn’t apply.

The full log readout is below:

{"type": "step_started", "id": "bosh_product.deploying"}
===== 2018-05-10 20:29:13 UTC Running "/usr/local/bin/bosh --no-color --non-interactive --tty create-env /var/tempest/workspaces/default/deployments/bosh.yml"
Deployment manifest: '/var/tempest/workspaces/default/deployments/bosh.yml'
Deployment state: '/var/tempest/workspaces/default/deployments/bosh-state.json'

Started validating
Validating release 'bosh'... Finished (00:00:00)
Validating release 'bosh-vsphere-cpi'... Finished (00:00:00)
Validating release 'uaa'... Finished (00:00:01)
Validating release 'credhub'... Finished (00:00:00)
Validating release 'bosh-system-metrics-server'... Finished (00:00:01)
Validating release 'os-conf'... Finished (00:00:00)
Validating release 'backup-and-restore-sdk'... Finished (00:00:04)
Validating cpi release... Finished (00:00:00)
Validating deployment manifest... Finished (00:00:00)
Validating stemcell... Finished (00:00:03)
Finished validating (00:00:12)

Started installing CPI
Compiling package 'ruby-2.4-r3/8471dec5da9ecc321686b8990a5ad2cc84529254'... Finished (00:00:00)
Compiling package 'iso9660wrap/82cd03afdce1985db8c9d7dba5e5200bcc6b5aa8'... Finished (00:00:00)
Compiling package 'vsphere_cpi/3049e51ead9d72268c1f6dfb5b471cbc7e2d6816'... Finished (00:00:00)
Installing packages... Finished (00:00:00)
Rendering job templates... Finished (00:00:01)
Installing job 'vsphere_cpi'... Finished (00:00:00)
Finished installing CPI (00:00:02)

Starting registry... Finished (00:00:00)
Uploading stemcell 'bosh-vsphere-esxi-ubuntu-trusty-go_agent/3541.12'... Skipped [Stemcell already uploaded] (00:00:00)

Started deploying
Creating VM for instance 'bosh/0' from stemcell 'sc-b0131c8f-ef44-456b-8e7c-df3951236d29'... Failed (00:00:02)
Failed deploying (00:00:02)

Stopping registry... Finished (00:00:00)
Cleaning up rendered CPI jobs... Finished (00:00:00)

Deploying:
Creating instance 'bosh/0':
Creating VM:
Creating vm with stemcell cid 'sc-b0131c8f-ef44-456b-8e7c-df3951236d29':
CPI 'create_vm' method responded with error: CmdError{"type":"Unknown","message":"Could not find VM for stemcell 'sc-b0131c8f-ef44-456b-8e7c-df3951236d29'","ok_to_retry":false}

Exit code 1
===== 2018-05-10 20:29:31 UTC Finished "/usr/local/bin/bosh --no-color --non-interactive --tty create-env /var/tempest/workspaces/default/deployments/bosh.yml"; Duration: 18s; Exit Status: 1
{"type": "step_finished", "id": "bosh_product.deploying"}
Exited with 1.

I did end up resolving this by deleting the bosh-state.json file. It apparently held some erroneous setup info about the stemcells that was causing the setup process to try to use a stemcell it had not yet downloaded.

I was able to SSH into the PKS Operations Manager VM and run this to fix it:

sudo rm /var/tempest/workspaces/default/deployments/bosh-state.json

Then, I was able to re-run the deployment with success.

Installing CloudFoundry User Account and Authentication Client (cf-uaac) on Windows

I’m doing some playing around with Pivotal CloudFoundry and Kubernetes and ran into an issue where, during setup, I needed to use their cf-uaac tool (written in Ruby) to complete the setup and manage authentication to the system.

There are a lot of instructions out there on how to do this on Linux and pretty much all of them assume you have an Internet connection. I found not only can you install this on Windows, but you can do so on a machine that does not have Internet access.

Below, I detail how to install cf-uaac on Ubuntu Linux and Windows both with and without an Internet connection.

Prerequisites for Either Installation Method

Whether or not you have Internet access on your target machine, you need to follow these steps to set up your machine to leverage the Ruby gems.

For Linux
# Build-essential is a prerequisite for a lot of Ruby gems.
apt install -y build-essential ruby ruby-dev
For Windows
  • Download Ruby (with the devkit): https://rubyinstaller.org/downloads
  • Install MSYS2
    • The devkit installer will do this for you if your machine has Internet access.
    • Otherwise, the installer will run with errors and you have to manually install it afterwards from here.
  • Make sure c:\Rubyxx-x64\bin is in your PATH environment variable (where xx is the current Ruby version)

Installing cf-uaac From the Internet

This is pretty easy and detailed in a lot of other places on the Internet. For brevity, I included quick instructions here:

For Either Windows or Linux
gem install cf-uaac

Installing cf-uaac Without a Direct Internet Connection

This method assumes you have a workstation that has Internet access from which you can download the gems. Then, you can copy them to the target machine that you need to run uaac from.

CF-UAAC has a list of required gems (as of this writing):

rack-1.6.9.gem
highline-1.6.21.gem
cookiejar-0.3.3.gem
addressable-2.5.2.gem
launchy-2.4.3.gem
eventmachine-1.2.5.gem
em-http-request-1.1.5.gem
httpclient-2.8.3.gem
cf-uaac-4.1.0.gem
json_pure-1.8.6.gem
public_suffix-3.0.2.gem
em-socksify-0.3.2.gem
multi_json-1.12.2.gem
cf-uaa-lib-3.13.0.gem
http_parser.rb-0.6.0.gem

Note that cf-uaac doesn’t require (moreover, doesn’t allow) the latest versions of all of these gems. You need to make sure you observe the version requirements as listed. For instance, the runtime dependencies for cf-uaac are currently:

uaac-requirements

You need em-http-request version >= 1.1.2 and < 1.2. For more info on pessimistic versioning and constraints in Ruby, see this article.
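If you want to sanity-check a version against one of these constraints before downloading, RubyGems' own classes can evaluate the ranges. A quick sketch using the em-http-request constraint above:

```ruby
# cf-uaac pins em-http-request to ">= 1.1.2, < 1.2":
req = Gem::Requirement.new('>= 1.1.2', '< 1.2')

req.satisfied_by?(Gem::Version.new('1.1.5'))  # => true  (in range)
req.satisfied_by?(Gem::Version.new('1.2.0'))  # => false (too new)

# The pessimistic operator "~> 1.1.2" expresses the same range:
Gem::Requirement.new('~> 1.1.2').satisfied_by?(Gem::Version.new('1.1.9'))  # => true
```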

Download each gem by visiting its page on Rubygems.org and clicking the “Download” link on the page.
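As an alternative to the browser, `gem fetch` downloads a pinned .gem file into the current directory without installing it, which makes the collection step scriptable on the Internet-connected workstation (versions taken from the list above):

```shell
# Pull pinned .gem files for later offline installation:
mkdir -p ./uaac-gems && cd ./uaac-gems
gem fetch cf-uaac -v 4.1.0
gem fetch cf-uaa-lib -v 3.13.0
gem fetch em-http-request -v 1.1.5
gem fetch eventmachine -v 1.2.5
# ...and so on for the rest of the gems and versions listed above.
```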

Once you have each gem (and each gem’s dependencies) downloaded, you can move the .gem files you downloaded to somewhere reachable by your target machine.

Installing On Linux
# Install a single gem:
gem install --local /tmp/mygem.gem

# Or install a series of gems from a directory:
for file in /tmp/PKS/*.gem; do gem install --local "$file"; done
Installing On Windows
# Install a single gem:
gem install --local c:\temp\mygem.gem

# Install a series of gems from a directory:
Get-Item -Path "c:\temp\PKS\*.gem" | Sort-Object -Property Name | Foreach-Object { gem install --local "$($_.FullName)" }

Once these steps are complete, the uaac binary should be added to the Ruby/bin (Windows) or /usr/local/bin (Linux) path and can be executed by typing uaac from your console (PowerShell or Bash).

Most issues I had getting this working were because the prerequisites weren’t present. Make sure build-essential, ruby and ruby-dev are installed on Linux machines and that Ruby with the devkit and MSYS2 is installed on Windows machines.

With all of this done, I was able to manage my PKS UAA component from the CLI on my Windows and Linux machines.

Using the Puppet CA API From Windows

Puppet Enterprise exposes a number of RESTful APIs that can be used to help automate the solution and integrate it with other things. One need I’ve run into is the need to revoke and remove certificates from Puppet nodes in an automated fashion. My previous approach involved using SSH to connect to the Puppet Master server and run the puppet cert clean command, but I’m not a huge fan of that. With some effort, I found out how to talk to the API using Postman and PowerShell in a Windows environment. Postman was good for initial testing of the API, while I use PowerShell to fully automate solutions. I’ve outlined a step-by-step on how to set this up below:

Basics

The base URI for the puppet CA API is:

https://*puppet master server FQDN*:8140/puppet-ca/v1

The default port is 8140, which is configurable.

Authorization

Authorization and authentication were the most difficult parts for me to figure out. Unlike the other API endpoints in Puppet, you don’t use the normal token method. The CA API uses certificate authentication and authorization is granted based on the Subject Name of the certificate your client presents to the Puppet server. By default, the ONLY machine allowed to talk to the endpoint is your Puppet Master server itself, so without modification you can’t do much with the API.

You can change the authorization rules to allow other machines to connect. You can see the configuration for this in /etc/puppetlabs/puppetserver/conf.d/auth.conf:

{
  "allow-unauthenticated": true,
  "match-request": {
    "method": "get",
    "path": "/puppet-ca/v1/certificate/",
    "query-params": {},
    "type": "path"
  },
  "name": "puppetlabs certificate",
  "sort-order": 500
},
{
  "allow": [
    "puppetmaster.domain.com"
  ],
  "match-request": {
    "method": [
      "get",
      "put",
      "delete"
    ],
    "path": "/puppet-ca/v1/certificate_status",
    "query-params": {},
    "type": "path"
  },
  "name": "puppetlabs certificate status",
  "sort-order": 500
},
{
  "allow-unauthenticated": true,
  "match-request": {
    "method": "get",
    "path": "/puppet-ca/v1/certificate_revocation_list/ca",
    "query-params": {},
    "type": "path"
  },
  "name": "puppetlabs crl",
  "sort-order": 500
},
{
  "allow-unauthenticated": true,
  "match-request": {
    "method": [
      "get",
      "put"
    ],
    "path": "/puppet-ca/v1/certificate_request",
    "query-params": {},
    "type": "path"
  },
  "name": "puppetlabs csr",
  "sort-order": 500
}

You’ll see an array of rules defined in this file, each one granting access to particular API endpoints. In this case, I’m most concerned with the certificate endpoints shown above. (For details on the layout of this file, see Puppet’s Docs here)

The endpoint rules that specify “allow-unauthenticated” are freely-accessible without authentication, so most of this article doesn’t apply to them. Just make a call from Postman or Curl like normal.

However, the certificate_status endpoint has an “allow” property, which lists all of the nodes that are allowed to access the endpoint. By default, only the name of your Puppet Master server appears here.

Normally, you could probably add entries to this list, restart your Puppet Master services, and go. The issue is this file is actually managed by Puppet, and your changes would be overwritten the next time the Puppet agent runs.

This setting is actually governed by the puppet_enterprise::profile::certificate_authority::client_whitelist setting. This can be set a couple of ways. The first way is to log into the Puppet Master GUI and do the following:

  1. Go to Inventory and select your Puppet Master server
  2. Select the “Groups” tab and click the PE Certificate Authority Group
  3. Click the “Classes” tab
  4. Set the client_whitelist parameter under puppet_enterprise::profile::certificate_authority

certificate_authority

Normally, this would work, but when the Puppet agent runs you might get the following error:

Error: Could not retrieve catalog from remote server: Error 400 on SERVER: Duplicate declaration: Class[Puppet_enterprise::Profile::Master] is already declared; cannot redeclare on node

The workaround I found in a Q&A article suggested just adding the setting to your common.yaml and having Hiera set it instead. This worked well for me. My common.yaml file looks like this:

# Allows the listed machines to communicate with the puppet-ca API:
puppet_enterprise::profile::certificate_authority::client_whitelist:
  - server1.mydomain.com
  - server2.mydomain.com

Once this was pushed to the Puppet Master server, I did a Puppet agent run using puppet agent -t from the server and it applied the settings. Checking auth.conf again, I now see this:

{
  "allow": [
    "puppetmaster.domain.com",
    "server1.domain.com",
    "server2.domain.com"
  ],
  "match-request": {
    "method": [
      "get",
      "put",
      "delete"
    ],
    "path": "/puppet-ca/v1/certificate_status",
    "query-params": {},
    "type": "path"
  },

Now that my servers are authorized to access the API, I can make calls using a client certificate to authenticate to the API.

Authentication

The next section shows you how to setup Postman and PowerShell to authenticate to the API. If you setup your authorization correctly as shown above, you should be able to hit the APIs.

Using Postman

To use client cert authentication to the Puppet API, you can set up Postman using the following method.

Import the cert into Postman:

  1. Click Settings in Postman
  2. Go to Certificates
  3. Click the “Add Certificate” link
  4. Add the cert using the following settings
    • Host – Specify the FQDN of the host you want to present the cert to. Don’t specify any of the URI path, just the FQDN and port.
    • CRT File – Use the PEM file in the certs directory
    • KEY File – Use the PEM file in the private_keys directory
    • No passphrase

Postman_client_cert

Once that is done, you can issue a GET command to a URI like this and get a response:

https://puppetmasterserver.domain.com:8140/puppet-ca/v1/certificate_statuses/key

The “key” portion of the URI is required, but the word “key” is arbitrary. I think you can pretty much type anything you want there.

This yields a response much like the following:

cert_statuses

If you get a “Forbidden” error, you either have the URI slightly wrong or you don’t have the authorization correct. The array of names in the “allow” section of the API rule MUST match the Subject Name of the certificate.
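Outside of Postman, the same call can be made with curl using the node's own Puppet certificate, which is handy for quick checks from a Linux box. A sketch with example hostnames; the SSL directory shown is the default agent location on *nix (on Windows it lives under C:\ProgramData\PuppetLabs\puppet\etc\ssl):

```shell
# Query certificate statuses using the client's Puppet cert and key.
# "key" at the end of the URI is the arbitrary segment described above.
CLIENT="server1.domain.com"
SSLDIR="/etc/puppetlabs/puppet/ssl"
curl --cert   "$SSLDIR/certs/$CLIENT.pem" \
     --key    "$SSLDIR/private_keys/$CLIENT.pem" \
     --cacert "$SSLDIR/certs/ca.pem" \
     "https://puppetmaster.domain.com:8140/puppet-ca/v1/certificate_statuses/key"
```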

Using PowerShell

To get this to work with PowerShell, you have to export your Puppet certs as a PFX and reference them in an Invoke-RestMethod call.

To create a PFX from the certs, do the following:

  1. Install Openssl
      • If you have Git for Windows installed, you already have this. Just change to c:\program files\Git\usr\bin
  2. Run the following
C:\Program Files\Git\usr\bin\openssl.exe pkcs12 -export -out "c:\temp\server1.domain.com.pfx" -inkey "C:\ProgramData\PuppetLabs\puppet\etc\ssl\private_keys\server1.domain.com.pem" -in "C:\ProgramData\PuppetLabs\puppet\etc\ssl\certs\server1.domain.com.pem"

Don’t specify an export password.

Once that is done, call the following Cmdlet:

Invoke-RestMethod -Uri "https://puppetmaster.domain.com:8140/puppet-ca/v1/certificate_statuses/key" -Certificate (Get-PfxCertificate -FilePath C:\temp\server1.domain.com.pfx) -Headers @{"Content-Type" = "application/json" }

Voilà! That’s it.

Puppet File Sync Not Working – LOCK_FAILURE

I had a recent issue where Puppet was not properly syncing code from the code-staging directory to the code directory.  I verified it was pulling the new code from my Git repository to code-staging without issue.  However, file-sync was not pushing the new code to the code directory.

Here is what I was seeing in /var/log/puppetlabs/puppetserver/puppetserver.log:

2017-06-26 11:08:49,026 ERROR [clojure-agent-send-off-pool-3] [p.e.file-sync-errors] Error syncing repo :puppet-code: File sync successfully fetched from the server repo, but update-ref result was LOCK_FAILURE on 8c346001ee2f834a4be05d3d9788d2d712b212c5. Name: puppet-code. Directory: /opt/puppetlabs/server/data/puppetserver/filesync/client/puppet-code.git.
2017-06-26 11:08:54,051 ERROR [clojure-agent-send-off-pool-3] [p.e.file-sync-errors] Error syncing repo :puppet-code: File sync successfully fetched from the server repo, but update-ref result was LOCK_FAILURE on 8c346001ee2f834a4be05d3d9788d2d712b212c5. Name: puppet-code. Directory: /opt/puppetlabs/server/data/puppetserver/filesync/client/puppet-code.git.
2017-06-26 11:08:59,077 ERROR [clojure-agent-send-off-pool-3] [p.e.file-sync-errors] Error syncing repo :puppet-code: File sync successfully fetched from the server repo, but update-ref result was LOCK_FAILURE on 8c346001ee2f834a4be05d3d9788d2d712b212c5. Name: puppet-code. Directory: /opt/puppetlabs/server/data/puppetserver/filesync/client/puppet-code.git.
2017-06-26 11:09:04,103 ERROR [clojure-agent-send-off-pool-3] [p.e.file-sync-errors] Error syncing repo :puppet-code: File sync successfully fetched from the server repo, but update-ref result was LOCK_FAILURE on 8c346001ee2f834a4be05d3d9788d2d712b212c5. Name: puppet-code. Directory: /opt/puppetlabs/server/data/puppetserver/filesync/client/puppet-code.git.
2017-06-26 11:09:09,129 ERROR [clojure-agent-send-off-pool-3] [p.e.file-sync-errors] Error syncing repo :puppet-code: File sync successfully fetched from the server repo, but update-ref result was LOCK_FAILURE on 8c346001ee2f834a4be05d3d9788d2d712b212c5. Name: puppet-code. Directory: /opt/puppetlabs/server/data/puppetserver/filesync/client/puppet-code.git.
2017-06-26 11:09:14,155 ERROR [clojure-agent-send-off-pool-3] [p.e.file-sync-errors] Error syncing repo :puppet-code: File sync successfully fetched from the server repo, but update-ref result was LOCK_FAILURE on 8c346001ee2f834a4be05d3d9788d2d712b212c5. Name: puppet-code. Directory: /opt/puppetlabs/server/data/puppetserver/filesync/client/puppet-code.git.

I had no idea what this meant, and I wasn’t sure how to resolve it so I took a snapshot of my Puppet Master VM and tried a few things.

The first thing I tried was going to the directory indicated and taking a look:

ll /opt/puppetlabs/server/data/puppetserver/filesync/client/puppet-code/production.git/
total 44
drwxr-xr-x 7 pe-puppet pe-puppet 4096 Jun 26 11:03 ./
drwxr-xr-x 3 pe-puppet pe-puppet 4096 Apr 25 2016 ../
drwxr-xr-x 2 pe-puppet pe-puppet 4096 Apr 25 2016 branches/
-rw-r----- 1 pe-puppet pe-puppet 307 Jun 26 11:03 config
-rw-r----- 1 pe-puppet pe-puppet 148 Jun 26 10:28 FETCH_HEAD
-rw-r--r-- 1 pe-puppet pe-puppet 23 Apr 25 2016 HEAD
drwxr-xr-x 2 pe-puppet pe-puppet 4096 Apr 25 2016 hooks/
drwxr-xr-x 3 pe-puppet pe-puppet 4096 Apr 25 2016 logs/
drwxr-xr-x 4 pe-puppet pe-puppet 4096 Jun 26 10:28 objects/
drwxr-xr-x 4 pe-puppet pe-puppet 4096 Apr 25 2016 refs/
-rw-r----- 1 pe-puppet pe-puppet 41 Jun 26 10:28 synced-commit

/opt/puppetlabs/server/data/puppetserver/filesync/client/puppet-code.git/production.git had the same contents but for one file:

synced-commit.lock

I wasn’t sure this file belonged there, so I removed it.  Once I did that, the file-sync service stopped throwing errors and successfully synced my files!
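If you hit the same LOCK_FAILURE, stale lock files can be located (and removed) with find. The sketch below runs against a throwaway directory by default so it's harmless to try; point FSYNC_DIR at /opt/puppetlabs/server/data/puppetserver/filesync on a real master, and only delete once you're sure no sync is in flight:

```shell
# Use a scratch directory by default so the demo is safe to run anywhere:
FSYNC_DIR="${FSYNC_DIR:-$(mktemp -d)}"
mkdir -p "$FSYNC_DIR/client/puppet-code.git/production.git"
touch "$FSYNC_DIR/client/puppet-code.git/production.git/synced-commit.lock"

# Dry run: list any stale lock files first...
find "$FSYNC_DIR" -name '*.lock' -type f

# ...then remove them:
find "$FSYNC_DIR" -name '*.lock' -type f -delete
```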

Hope this helps!

Automating MAK Proxy Activation with PowerShell

I ran into a need recently where I had to activate Windows on new machines in an automated fashion.  The issue was that the environment did not use KMS, but instead activated new machines using a MAK key.  The machines being activated did not have Internet access, so they had to be activated via proxy.

There is a great article on how to do this using the Volume Activation Management Tool (VAMT) here.  Basically, enable Internet access (or at least access to the MS Activation servers) to a machine with the VAMT installed and you can use the GUI to activate it.  If you need to automate it, you can see instructions on the PowerShell commands for VAMT here.

This all works very well, but it wasn’t complete for my needs.  I needed to have a server other than the VAMT server initiate the activation.  To do this, I wrapped the VAMT commands I needed in a PowerShell function detailed further below.  With this function, you can have any server issue the commands to the VAMT server to add and activate multiple servers on your network in an automated fashion.

I found one big caveat though.  You need to enable Kerberos Delegation for BOTH the VAMT server and the server running this function.  This is done by issuing the command below in PowerShell:

Set-AdComputer -Identity computerName -TrustedForDelegation $true 

The reason for this is the server running this function must pass the credentials of the user running it to the VAMT cmdlets so they can run.  In turn, the Find-VamtManagedMachine cmdlet must also pass those credentials to Active Directory to look the machine up.  If you forget to do this, you will get errors.

Here is the function:

function Invoke-WindowsActivation()
{
<#
.SYNOPSIS
This function reaches out remotely to the specified VAMT server and activates the given machines by proxy. To run this, you must meet the following requirements:
* The ActiveDirectory module from Microsoft must be installed on the machine this function runs from. Install with:
Add-WindowsFeature RSAT-AD-PowerShell
* It's assumed the machines you are dealing with are on an Active Directory domain.
* You have a server with the VAMT 3.0 installed.
.PARAMETER ComputerName
Specifies one or more computers to activate.
.PARAMETER Domain
Specifies the AD domain the VAMT server and the machines you are activating are on. Default is the current user DNS Domain ($ENV:USERDNSDOMAIN).
.PARAMETER VamtServer
Specifies the machine the VAMT toolset is installed on. This machine needs the Windows Assessment and Deployment Kit (VAMT Tool) installed. See:
https://www.microsoft.com/en-us/download/details.aspx?id=30652
https://technet.microsoft.com/en-us/library/hh825184.aspx
.EXAMPLE
Invoke-WindowsActivation -ComputerName myserver1,myserver2 -VamtServer vamt01
ActionsAllowed : 105
ApplicationName :
ApplicationId : xxxxx
CMID :
ConfirmationId :
ExportGuid : xxxxx
FullyQualifiedDomainName : myserver1.mydomain.com
GenuineStatus : Genuine
GraceExpirationDate : 4/17/2017 9:56:23 PM
InstallationId : xxxxx
KmsHost :
KmsPort :
LastActionStatus : Successfully updated the product information.
LastErrorCode : 0
LastUpdated : 4/17/2017 9:56:23 PM
LicenseFamily : ServerDatacenter
LicenseStatus : Licensed
LicenseStatusLastUpdated : 4/17/2017 9:56:23 PM
LicenseStatusReason : 0
PartialProductKey : xxxx
ProductDescription : Windows(R) Operating System, VOLUME_MAK channel
ProductKeyId : xxx
ProductName : Windows(R), ServerDatacenter edition
ProductKeyType : Mak
ProductVersion : 6.3.9600.17809
Sku : xxxxx
ProductKeyTypeName :
LicenseStatusText :
GenuineStatusText :
ResourceLanguage :
SoftwareProtectionService : SPP
VLActivationType : NeverVolumeActivated
VLActivationTypeEnabled : Default
AdActivationObjectName :
AdActivationObjectDN :
AdActivationCsvlkPid :
AdActivationCsvlkSkuId : 00000000-0000-0000-0000-000000000000
#>
[CmdletBinding(SupportsShouldProcess=$true)]
param
(
[Parameter(Mandatory=$true,ValueFromPipeline=$true)] $ComputerName,
[string] $Domain = $ENV:UserDnsDomain,
[Parameter(Mandatory=$true)] [string] $VamtServer
)
begin
{
function Test-Kerberos()
{
[CmdletBinding()]
param
(
[Parameter(Mandatory=$true)] $ComputerName
)
Import-Module ActiveDirectory
$c = Get-AdComputer -Identity $ComputerName -Properties TrustedForDelegation
return ( $c.TrustedForDelegation )
}
if ( !(Test-Kerberos -ComputerName $VamtServer) )
{
throw ("The VAMT Server ($VamtServer) does not have Kerberos delegation enabled! Use: Set-AdComputer -Identity $VamtServer -TrustedForDelegation `$true")
}
if ( !(Test-Kerberos -ComputerName $Env:COMPUTERNAME) )
{
throw ("This client ($Env:COMPUTERNAME) does not have Kerberos delegation enabled! Use: Set-AdComputer -Identity $Env:COMPUTERNAME -TrustedForDelegation `$true")
}
# You must use a 32-bit PowerShell session! VAMT.psd1 does not support 64-bit.
$session = New-PSSession -ComputerName $VamtServer -ConfigurationName Microsoft.PowerShell32
$sb = `
{
$psdPath = ""
if ( Test-Path -Path "HKLM:\SOFTWARE\Wow6432Node\Microsoft\VAMT3" )
{
$psdPath = Get-ItemProperty -Path "HKLM:\SOFTWARE\Wow6432Node\Microsoft\VAMT3" -Name "SchemaFilePath" | Select-Object -ExpandProperty SchemaFilePath
}
else
{
throw ("VAMT3 is not installed on the local machine: $($ENV:COMPUTERNAME)!")
}
Write-Verbose ("VAMT Module location: $psdPath")
Import-Module -Name (Join-Path -Path $psdPath -ChildPath "vamt.psd1")
}
$psdPath = Invoke-Command -Session $session -ScriptBlock $sb
}
process
{
try
{
foreach ( $comp in $ComputerName )
{
$sb = `
{
param
(
[Parameter(Mandatory=$true)] $ComputerName,
[string] $Domain = $ENV:UserDnsDomain
)
$product = Find-VamtManagedMachine -QueryType ActiveDirectory -QueryValue $Domain -MachineFilter $ComputerName
if ( !$product )
{
throw ("Unable to find a computer in the VAMT Database named $ComputerName! Verify Kerberos delegation is enabled for both $($ENV:ComputerName) and $ComputerName! Set-AdComputer -Identity $ComputerName -TrustedForDelegation `$true ")
}
Write-Host ("Product Entry:")
Write-Host ($product | Format-List | Out-String)
if ( $product.GenuineStatus -ine "Genuine" )
{
# Get the confirmation ID:
$confirmation = $product | Get-VamtConfirmationId
if ( $confirmation.ConfirmationId )
{
$out = Install-VamtConfirmationId -Products $confirmation
$output = Find-VamtManagedMachine -QueryType ActiveDirectory -QueryValue $Domain -MachineFilter $ComputerName
Write-Host ("Activated server: ")
Write-Host ($output | Format-List | Out-String)
$output
if ( $output.GenuineStatus -ine "Genuine" )
{
throw ("An error occurred activating Windows OS on $ComputerName. `r`nError message: $($output.LastActionStatus).")
}
}
else
{
throw ("Unable to get a confirmation ID for machine $ComputerName!")
}
}
else
{
Write-Warning ("$ComputerName has already been activated!")
$product
}
}
if ( $PSCmdlet.ShouldProcess($comp, "Activate Windows machine") )
{
Invoke-Command -Session $session -ScriptBlock $sb -ArgumentList $comp,$Domain
}
}
}
catch
{
if ( $session )
{
$session | Remove-PSSession
}
throw $_
}
}
end
{
if ( $session )
{
$session | Remove-PSSession
}
}
}

Hopefully, this is of use to others.

Encrypting Credentials In PowerShell Scripts

I have a long-standing dislike of hard-coding credentials in scripts.  In a production environment, it’s never a good idea to leave sensitive account passwords hard-coded in plain text in scripts.  To that end, I’ve developed an easy method in PowerShell to protect sensitive information.

The functions I present below allow you to store usernames and passwords, where the passwords are encrypted, in a form that can later be decrypted inside a script.  By default, only the user account that encrypted the credentials can decrypt them, and only from that same machine.  It all uses native .NET functionality, so you don’t need any third-party modules to get it working.

Where I find this most useful is for services or scheduled tasks that run as system accounts and execute PowerShell scripts.  You can log into the machine as that service account and encrypt a set of credentials; then, when the scheduled task runs as that account, it can read them.

Using the export function I show below, you can either export your credentials to an xml file on the file system, or a registry value in the Windows registry.

Here is an example:

First, save the credential to a variable and export it to an xml file:

$cred = Get-Credential username
$cred | Export-PSCredential -Path c:\temp\creds.xml

This outputs the path to the xml file you created with the encrypted credentials:

Export-PSCredential

Alternately, you can export to a registry key instead:

$cred = Get-Credential username
$cred | Export-PSCredential -RegistryPath HKCU:\software\test -Name mycreds

In the registry, you can see your exported credentials:

Export-Registry

The full code is also available as a GitHub gist: https://gist.github.com/BrandonStiff/02cada362bfca007d298b549506f225f

The key thing to understand is that the encryption key used to encrypt these credentials is tied to both the user account that encrypted them AND the machine they were encrypted on.  Unless you specify a keyphrase, you cannot decrypt these credentials as another user or from another machine.  The idea is that if you have a script that reads these encrypted credentials, you have to log in as the user the script runs as, on the machine the script runs from, and encrypt them there.  However, as described above, if you provide a keyphrase, you can decrypt them from anywhere as any user; you just have to protect the keyphrase somehow.

Importing the credentials again is pretty simple:

$cred = Import-PSCredential -Path C:\temp\creds.xml
# OR
$cred = Import-PSCredential -RegistryPath HKCU:\Software\test -Name mycreds

Import-PSCredential

Specifying a keyphrase involves specifying the -KeyPhrase parameter on either the import or export function.

Below is the code.  Simply paste these three functions into your PowerShell session or into your script and away you go.

function Get-EncryptionKey()
{
<#
.SYNOPSIS
Retrieves a 128/192/256-bit encryption key using the given keyphrase.
.PARAMETER KeyPhrase
Specifies the phrase used to derive the encryption key.
.PARAMETER Length
Specifies the number of bits to make the length. Use either 128, 192, or 256 bits. Default is 128.
.OUTPUTS
[byte[]]
Returns a 16/24/32-byte array (128/192/256 bits) derived from the keyphrase.
#>
[CmdletBinding()]
param
(
[Parameter(Mandatory=$true,Position=0,ValueFromPipeline=$true)]
[string] $KeyPhrase,
[ValidateSet(128,192,256)] [int] $Length = 128
)
process
{
$enc = [System.Text.Encoding]::UTF8;
$bytes = $Length / 8;  # bits to bytes: 16, 24, or 32 (the lengths ConvertFrom-SecureString -Key accepts)
$KeyPhrase = $KeyPhrase.PadRight($bytes, "0").Substring(0, $bytes);
$enc.GetBytes($KeyPhrase);
}
}
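
To make the pad-and-truncate behavior concrete, here is a quick Python sketch of the same idea (an illustration only, not the PowerShell function itself; a real key-derivation function such as PBKDF2 would be a stronger choice in practice):

```python
def derive_key(keyphrase: str, length_bits: int = 128) -> bytes:
    """Derive a fixed-length key by padding the keyphrase with "0"
    characters and truncating -- the same idea Get-EncryptionKey uses."""
    if length_bits not in (128, 192, 256):
        raise ValueError("length_bits must be 128, 192, or 256")
    n_bytes = length_bits // 8  # bits to bytes: 16, 24, or 32
    padded = keyphrase.ljust(n_bytes, "0")[:n_bytes]
    return padded.encode("utf-8")

key = derive_key("ThisisMyEncryptionPassword123")
print(len(key))  # 16 (a 128-bit key)
```

Because the derivation is deterministic, any user on any machine who knows the keyphrase arrives at the same key bytes, which is exactly why keyphrase-encrypted credentials are portable.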

function Export-PSCredential
{
<#
.SYNOPSIS
Exports a credential object into an XML file or registry value with an encrypted password. An important note is that the encrypted password can ONLY be read by the user who created the exported file
unless a passphrase is provided.
.PARAMETER Credential
Specifies the Credential to export to a file. Use Get-Credential to supply this.
.PARAMETER Path
Specifies the file to export to. Default is (CurrentDir)\encrypted.xml.
.PARAMETER RegistryPath
Specifies the path to the registry to export the credentials to. Use HKLM: and HKCU: for HKEY_LOCAL_MACHINE and HKEY_CURRENT_USER respectively. Example: HKCU:\Software\Acme Inc\MyCredentials
.PARAMETER Name
Specifies the name of the registry value to store the credentials under. Only specify with RegistryPath.
.PARAMETER KeyPhrase
Specifies the key phrase to use to encrypt the password. If not specified, then a key derived from the user's account is used. This makes the password only decryptable by the user who encrypted it.
If a key is specified, then anybody with the key can decrypt it.
.EXAMPLE
PS> (Get-Credential bsti) | Export-PSCredential
# Encrypts the credential for username bsti and exports to the current directory as encrypted.xml
.EXAMPLE
PS> (Get-Credential bsti) | Export-PSCredential -Path C:\temp\mycreds.xml
# Encrypts the credential for username bsti and exports it to C:\temp\mycreds.xml
.EXAMPLE
PS> (Get-Credential bsti) | Export-PSCredential -RegistryPath "HKCU:\Software\Acme Inc\MyCreds" -Name "switch1"
# Encrypts the credential for username bsti and exports to the registry at the given path, under the value switch1.
.EXAMPLE
PS> (Get-Credential bsti) | Export-PSCredential -Path C:\temp\mycreds.xml -KeyPhrase "ThisisMyEncryptionPassword123"
# Encrypts the credential for username bsti and exports it to the filesystem. Anyone with the keyphrase can decrypt it.
.OUTPUTS
Returns the [System.IO.FileInfo] object representing the file that was created, or the path to the registry key the credentials were exported to.
#>
[CmdletBinding(SupportsShouldProcess=$true,DefaultParameterSetName="filesystem")]
param
(
[Parameter(Mandatory=$true,ValueFromPipeline=$true)]
[Management.Automation.PSCredential] $Credential,
[Parameter(ParameterSetName="filesystem")]
[ValidateScript({ Test-Path -Path (Split-Path -Path $_) -PathType Container } )]
[string] $Path = $(Join-Path -Path (Get-Location) -ChildPath "encrypted.xml"),
[Parameter(Mandatory=$true,ParameterSetName="registry")]
[string] $RegistryPath,
[Parameter(Mandatory=$true,ParameterSetName="registry")]
[string] $Name,
[string] $KeyPhrase
)
process
{
foreach ( $cred in $Credential )
{
# Create temporary object to be serialized to disk
$export = "" | Select-Object Username, EncryptedPassword
# Give object a type name which can be identified later
$export.PSObject.TypeNames.Insert(0,"ExportedPSCredential")
$export.Username = $cred.Username
# Encrypt SecureString password using Data Protection API
# Only the current user account can decrypt this cipher unless a key is specified:
$params = @{}
if ( $KeyPhrase )
{
$params.Add("Key", (Get-EncryptionKey -KeyPhrase $KeyPhrase))
}
$export.EncryptedPassword = $cred.Password | ConvertFrom-SecureString @params
if ( $PSCmdlet.ParameterSetName -ieq "registry" )
{
# Export to registry
# Make sure the registry key exists:
if ( !(Test-Path -Path $RegistryPath) )
{
New-Item -Path $RegistryPath -Force | Out-Null
}
# Set/Update the credential in the registry store:
Set-ItemProperty -Path $RegistryPath -Name $Name -Value ("{0}:{1}" -f $export.UserName, $export.EncryptedPassword) -Force
}
else
{
# Export using the Export-Clixml cmdlet
$export | Export-Clixml $Path
# Return FileInfo object referring to saved credentials
Get-Item -Path $Path
}
}
}
}

function Import-PSCredential
{
<#
.SYNOPSIS
Imports a credential exported by Export-PSCredential and returns a Credential.
.PARAMETER Path
Specifies one or more files to convert from XML files to credentials.
.PARAMETER RegistryPath
Specifies the path in the registry to look for the encrypted credentials.
.PARAMETER Name
Specifies the registry key the credentials are stored under.
.PARAMETER KeyPhrase
Specifies the key phrase to use to decrypt the password. If not specified, then a key derived from the user's account is used. This makes the password only decryptable by the user who encrypted it.
If a key is specified, then anybody with the key can decrypt it.
.EXAMPLE
Import-PSCredential -Path C:\temp\mycreds.xml
# Retrieves encrypted credentials from the given file.
.EXAMPLE
Get-ChildItem C:\temp\credstore | Import-PSCredential
# Retrieves encrypted credentials from files in the given directory.
.EXAMPLE
Import-PSCredential -RegistryPath "HKCU:\Software\Acme Inc\MyCreds" -Name switch1
# Retrieves encrypted credentials from the registry path "HKCU:\Software\Acme Inc\MyCreds", value switch1.
.EXAMPLE
Import-PSCredential -Path C:\temp\mycreds.xml -KeyPhrase "test12345"
# Retrieves encrypted credentials from the filesystem and decrypts them using the given key.
.OUTPUTS
[System.Management.Automation.PSCredential]
Outputs a credential object representing the cached credentials. Use GetNetworkCredential().Password to retrieve the plain text password.
#>
[CmdletBinding(DefaultParameterSetName="filesystem")]
param
(
[Parameter(Mandatory=$true,ValueFromPipeline=$true,ParameterSetName="filesystem")]
[ValidateScript({ Test-Path -Path $_ -PathType Leaf } )] [String[]] $Path,
[Parameter(Mandatory=$true,ValueFromPipeline=$true,ParameterSetName="registry")]
[string] $RegistryPath,
[Parameter(Mandatory=$true,ParameterSetName="registry")]
[string] $Name,
[string] $KeyPhrase
)
begin
{
$paths = @()
}
process
{
if ( $PSCmdlet.ParameterSetName -ieq "registry" )
{
$paths += $RegistryPath
}
else
{
$paths += $Path
}
foreach ( $p in $paths )
{
$import = $null
if ( $PSCmdlet.ParameterSetName -ieq "registry" )
{
# Imported from registry:
$import = "" | Select-Object "UserName","EncryptedPassword"
# Make sure the registry key exists:
if ( Test-Path -Path $p )
{
$regValue = Get-ItemProperty -Path $p | Where-Object { $_.$Name }
if ( $regValue )
{
$credsAsString = (Get-ItemProperty -Path $p).$Name
if ( ($credsAsString -split ":").Count -lt 2 )
{
throw ("Credential was stored in an invalid format!")
}
$import.UserName = ($credsAsString -split ":")[0]
$import.EncryptedPassword = ($credsAsString -split ":")[1]
}
}
}
else
{
$fileFullPath = $p
if ( $p -is [System.IO.FileInfo] )
{
$fileFullPath = $p.FullName
}
# Import credential file
$import = Import-Clixml $fileFullPath
}
if ( $import -and $import.UserName -and $import.EncryptedPassword )
{
$userName = $import.Username
# Decrypt the password and store as a SecureString object for safekeeping
try
{
$params = @{};
if ( $KeyPhrase )
{
$params.Add("Key",(Get-EncryptionKey -KeyPhrase $KeyPhrase));
}
$securePass = $import.EncryptedPassword | ConvertTo-SecureString -ErrorAction Stop @params;
}
catch [System.FormatException]
{
throw ("An invalid encryption key was supplied! If this credential was encrypted with a KeyPhrase, you must use the correct keyphrase to decrypt it!");
}
catch [System.Security.Cryptography.CryptographicException]
{
throw ("Invalid encryption key! If no key is specified, then only the user that exported the credential in file $fileFullPath can retrieve it! Current user $($env:UserDomain)\$($env:UserName) may not have access!");
}
catch
{
throw $_;
}
# Build the new credential object
New-Object -TypeName System.Management.Automation.PSCredential -ArgumentList $userName, $securePass;
}
}
}
}
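
The registry branch of these two functions stores the pair as a single "username:encryptedpassword" string. A minimal Python sketch of that pack/unpack convention (hypothetical helper names, for illustration only):

```python
def pack(username: str, encrypted: str) -> str:
    """Join the username and the encrypted password into one registry value."""
    return f"{username}:{encrypted}"

def unpack(stored: str) -> tuple:
    """Split on the first colon; usernames like DOMAIN\\user are safe
    because they contain no colon, and the ciphertext is hex."""
    parts = stored.split(":", 1)
    if len(parts) < 2:
        raise ValueError("Credential was stored in an invalid format!")
    return parts[0], parts[1]

user, cipher = unpack(pack("CONTOSO\\svc_task", "01000000d08c9ddf"))
print(user)  # CONTOSO\svc_task
```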

Note the Get-EncryptionKey function is required for both the import and export functions!

vRealize Orchestrator HTTP-REST – Cannot execute the request; Read timed out

I recently stumbled upon an issue with the HTTP-REST plugin in VRO that took some experimentation to understand.  For some reason, I kept getting a “Read timed out” error whenever my workflows made a REST call that took more than 60 seconds to return a response.  There is an operationTimeout property you can set to govern this, but I found it is ignored under certain circumstances.  It’s very confusing, since you can examine the operationTimeout property and it *appears* correct.  I had to do quite a bit of testing to get to the bottom of the behavior.

In my implementation, I was using VRO 7.2 and transient HTTP-REST host objects to make my REST calls.  I favored that approach over using the HTTP-REST configuration workflows to pre-register every host and every combination of operations I might someday need, which seemed inflexible.

Here is my basic testing workflow:

Test Workflow

Here is the code in the script:

//  Username, password, and useTransientHost are input parameters.

var uri = "https://myrestapihost.domain.com/api/DoSomething/id44";
var method = "GET";
var body = "";  // For POST/PUT body content.  This has to be a JSON string. E.g.  body = "{ 'p1' : 1, 'p2' : 2 }";
var httpRestHost = null;

if ( useTransientHost )
{
  System.log("Using Transient host.");
  //  Create a dynamic REST host:

  var restHost = RESTHostManager.createHost("dynamicRequest");
  restHost.operationTimeout = 900;  //  This gets ignored!!!
  httpRestHost = RESTHostManager.createTransientHostFrom(restHost);
  httpRestHost.operationTimeout = 900;  //  Set it here too, just to be really really sure.
}
else
{
  System.log("Using NON-Transient host.");
  httpRestHost = RESTHostManager.getHost("71998784-d590-426d-8945-75ec0b1ad7b4");		//  Use the ID For your HTTP-REST host here
  httpRestHost.operationTimeout = 900;  //  This gets ignored!!!
}

System.log("OperationTimeout is set to: " + httpRestHost.operationTimeout.toString());

//  Create the authentication:
var authParams = ['Shared Session', userName, password];
var authenticationObject = RESTAuthenticationManager.createAuthentication('Basic', authParams);
httpRestHost.authentication = authenticationObject;

//  Remove the endpoint from the URI:
var urlEndpointSplit = uri.split("/");
var urlEndpoint = urlEndpointSplit[urlEndpointSplit.length - 1];
uri = uri.split(urlEndpoint)[0];

httpRestHost.url = uri;

//  REST client only accepts method in all UPPER CASE:
method = method.toUpperCase();

var request = httpRestHost.createRequest(method, urlEndpoint, body);
request.contentType = "application/json";

System.debug("REST request to URI: " + method + " " + request.fullUrl);

var response = request.execute();   //  This should have a 900-second timeout
System.debug("Response status Code: " + response.statusCode);

if ( response.contentAsString )
{
  System.debug("Response: " + response.contentAsString);
}
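
The URI-splitting block above simply peels the last path segment off as the operation endpoint and keeps everything before it (including the trailing slash) as the host URL. The same idea in Python, for illustration (note that rsplit avoids the edge case where the endpoint string also appears earlier in the URI, which the split-on-endpoint approach would trip over):

```python
def split_base_and_endpoint(uri: str) -> tuple:
    """Separate a full REST URI into the host/base URL and the final
    path segment (the operation endpoint)."""
    base, endpoint = uri.rsplit("/", 1)
    return base + "/", endpoint  # keep the trailing slash on the base

base, ep = split_base_and_endpoint("https://myrestapihost.domain.com/api/DoSomething/id44")
print(base)  # https://myrestapihost.domain.com/api/DoSomething/
print(ep)    # id44
```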

I added three input parameters:

  • userName
  • password
  • useTransientHost

When I call it using a transient host, it always times out in 60s, no matter what I set the operationTimeout setting to. Here is the output from the run:

[2017-03-28 11:51:55.740] [I] Using Transient host.
[2017-03-28 11:51:55.747] [I] OperationTimeout is set to: 900
[2017-03-28 11:51:55.751] [D] REST request to URI: GET https://myrestapihost.domain.com/api/DoSomething/id44
[2017-03-28 11:52:55.902] [E] Error in (Workflow:Example REST API Call / HTTP Rest Call (item1)#43) Cannot execute the request: ; Read timed out

You can see the 60s timeout despite the fact the operationTimeout property was set to 900.

I ran it again and referenced a non-transient HTTP host:

[2017-03-28 12:00:24.618] [I] Using NON-Transient host.
[2017-03-28 12:00:24.629] [I] OperationTimeout is set to: 900
[2017-03-28 12:00:24.634] [D] REST request to URI: GET https://myrestapihost.domain.com/api/DoSomething/id44
[2017-03-28 12:02:24.740] [E] Error in (Workflow:Example REST API Call / HTTP Rest Call (item1)#46) Cannot execute the request: ; Read timed out

In this case, it timed out in 120 seconds, not 60 (or 900).  I found the 120 came from what I entered for operationTimeout when I created the host using the HTTP-REST/Configuration/Add a REST host workflow:

Test Host Settings

So, in the end, the following appears to be true:

  • OperationTimeout defaults to 60 seconds.
  • Though it appears you can, you CANNOT override it by setting the operationTimeout property in code (if that's the case, the property really should be read-only).
  • It instead uses the operationTimeout set on the HTTP-REST host object when you create (or update) it using the configuration workflows.
  • Transient hosts always use the 60-second timeout; there is no way to override this.

However:

You CAN override just about everything else, including URI and authentication.

This means I can get around this by adding a dummy HTTP-REST host as per normal using the Add a REST host workflow:

TestHost1 TestHost2 TestHost3

The URL, authentication, and other settings do not matter; they can be overridden in your code, as I did above in my example.  The ONLY setting that matters is the operationTimeout (and perhaps the connectionTimeout, which may well have the same issue, though I never tested it).

Then reference the host ID as I did in the code above and override the URL, authentication, and whatever else you need to.

I’ve engaged VMware tech support to log this as a bug.  I really think the operationTimeout property should either be truly settable or be marked read-only.  We’ll see where that goes…

UPDATE 4/6/2017 – I just got final word back from VMware tech support engineering.  The behavior I noted above is normal behavior, and the workaround I proposed is the accepted workaround.  Nothing to see here…

I did ask for a feature request to either have the operationTimeout property be programmatically changeable or to be set as read-only to reduce confusion.

Puppet Enterprise – Adding Windows Scheduled Tasks

So, continuing on the path I’ve been on, I’ve had to create quite a few custom “resources” in my Puppet profiles to deploy or configure items I could not find right out-of-the-box.  In this case, I have a server that requires a standard set of Windows scheduled tasks.

For this purpose, I created a new pseudo-resource called “windows_scheduled_task”.  As with the other items I’ve published, I call this a pseudo-resource because it’s not really a Puppet resource.  It’s a custom class that is used just like a resource.  The approach I took here leverages PowerShell and assumes the presence of the ScheduledTasks module, which is only available in PowerShell v4 and higher.

The class requires the use of a module class (.pp file) and an accompanying template file (.epp).  The .pp file goes in the manifests folder in your module, and the template in your templates folder.  The assumed folder structure is like so:

/manifests/windows_server
  /scheduled_task.pp
/templates/windows_server
  /scheduled_task_add.epp

If you change the paths, that’s OK, but you have to make sure the class namespace in the .pp file matches your new folder structure. The default is

class windows_server::scheduled_task()

which assumes the folder /manifests/windows_server

You also have to make sure the epp() function call in the .pp file references the correct path to the template (if you change it). Right now, it’s set to look at /templates/windows_server/scheduled_task_add.epp.

Here is the .pp file class:

class windows_server::scheduled_task()
{

  define windows_scheduled_task
  (
    String $description = "No description.",
    String $path = "",
    String $executionTimeLimit = "01.00:00:00",
    String $userName = "NT AUTHORITY\\SYSTEM",
    String $password = "",
    Boolean $deployEnabled = true,
    Array[Hash] $actions,
    Array[Hash] $triggers = []
  )
  {
    #  name (string)                - Specifies the name of the task
    #  description (string)         - Specifies a description of the task
    #  path (string)                - Specifies the folder to place the task in.  Default is "\" (the root folder).  NOTE:  This must begin with a slash but not end with one!  Example:  \Restore
    #  executionTimeLimit (string)  - Specifies the length of time the task can run before being automatically stopped.  Specify as a TimeSpan.
    #  deployEnabled (bool)         - Determines whether the task should be deployed in an enabled state or not.  This state is not enforced going forward.
    #  actions (Hash[]) -
    #    workingDirectory (string)      - Specifies the working directory for the action.  Default is C:\windows\system32
    #    command (string)               - Specifies the command to execute.
    #    arguments (string[])           - Specifies the arguments to pass to the command.
    #    isPowerShell (bool)            - If specified, then the command and arguments are automatically constructed.  You only need to pass the PowerShell script you want to run for the command.

    #  triggers (Hash[]) -
    #    atDateTime (String)          - Specifies the date and time to start running the task.
    #    repetitionInterval (string)  - Specifies how often to re-run the task after the atDateTime occurs.  Specify as a Timespan.
    #    repetitionDuration (string)  - Specifies how long to repeat the task executions for.  Specify as a Timespan.  Default is [Timespan]::MaxValue (forever)

    #  If your command is a PowerShell script, you have to escape double-quotes with backslashes.
    #  Example:
    #  windows_server::scheduled_task::windows_scheduled_task { 'Test Scheduled Task':
    #   userName          =>  $taskCredentials['userName'],
    #   password          =>  $taskCredentials['password'],
    #   path              => '\MyTasks',
    #   actions           => [{
    #    isPowerShell        => true,
    #    command             => "c:\\scripts\\Run-MyPowerShellScript.ps1 -Param1 value1 -Param2 \"value 2\" -Param3 ${puppetVariableHere}  "
    #   }],
    #   triggers              => [{
    #    atDateTime          => "9/1/2016 12:30 AM",
    #    repetitionInterval  => "00:30:00"
    #   }],
    #}

    exec { "scheduled_task_${title}" :
      command       => epp("windows_server/scheduled_task_add.epp", {
                        name                => $name,
                        description         => $description,
                        path                => $path,
                        executionTimeLimit  => $executionTimeLimit,
                        userName            => $userName,
                        password            => $password,
                        deployEnabled       => $deployEnabled,
                        actions             => $actions,
                        triggers            => $triggers
                      }),
      onlyif        => "if ( ScheduledTasks\\Get-ScheduledTask | Where-Object { \$_.TaskName -ieq \"${name}\" -and \$_.TaskPath -ieq \"${path}\\\" } ) { \$host.SetShouldExit(99); exit 99 }",
      returns       => [0],
      provider      => powershell,
      logoutput     => true,
    }
  }
}

The template file is here:

<%- | String $name,
      String $description = "No description",
      String $path = "\\",
      String $executionTimeLimit = "01.00:00:00",
      String $userName = "NT AUTHORITY\\SYSTEM",
      String $password = "",
      Boolean $deployEnabled = true,
      Array[Hash] $actions,
      Array[Hash] $triggers = []
|
  #  name (string) - Specifies the name of the task
  #  description (string) - Specifies a description of the task
  #  path (string) - Specifies the folder to place the task in.  Default is "\" (the root folder)
  #  executionTimeLimit (string) - Specifies the length of time the task can run before being automatically stopped.  Specify as a TimeSpan.
  #  userName (string) - Specifies the user to execute the task as.  Default is local system.
  #  password (string) - Specifies the password for the given user.
  #  actions (Hash[]) -
  #    workingDirectory (string) - Specifies the working directory for the action.  Default is C:\windows\system32
  #    command (string) - Specifies the command to execute.
  #    arguments (string[]) - Specifies the arguments to pass to the command.
  #    isPowerShell (bool) - If specified, then the command and arguments are automatically constructed.  You only need to pass the PowerShell script you want to run for the command.

  #  triggers (Hash[]) -
  #    atDateTime (String) - Specifies the date and time to start running the task.
  #    repetitionInterval (string) - For daily repetition - Specifies how often to re-run the task after the atDateTime occurs.  Specify as a Timespan.
  #    repetitionDuration (string) - For daily repetition - Specifies how long to repeat the task executions for.  Specify as a Timespan.  Default is [Timespan]::MaxValue (forever)
  #    daysOfTheWeek (Array[string]) - For weekly repetition - Specifies the days of the week to run the task.  Specify an array of Monday | Tuesday | Wednesday | Thursday | Friday | Saturday | Sunday
  #    weeksInterval (Integer) - For weekly repetition - Specifies whether to run the schedule every week (value 1) or every n weeks (value n).  Default is every 1 week.
%>
$acts = @();
<% $actions.each | Hash $act | { -%>
$arg = @();
<%  if ( $act['isPowerShell'] )
{
  $cmd = "powershell.exe"
-%>
$arg += "-noprofile"
$arg += "-command `"<%= regsubst($act['command'],'\"', '\\\`"', 'GI') -%>`""
<% }
else
{
  $cmd = $act['command']
  if ( $act['arguments'] and is_array($act['arguments']) )
  {
    $act['arguments'].each | String $ar |
    { -%>
$arg += "<%= $ar -%>";
<%
    }
  }
  else
  { -%>
$arg += "<%= $act['arguments'] -%>"
<%}
}
if ( $act['workingDirectory'] )
{
  $wd = "-WorkingDirectory \"${act['workingDirectory']}\" "
}
else
{
  $wd = ""
} -%>
$params = @{}
if ( $arg )
{
  $params.Add("Argument", ($arg -join " "))
}

$acts += New-ScheduledTaskAction <%= $wd -%>-Execute "<%= $cmd -%>" @params
<% } -%>

$params = @{};
$trigs = @();
<% $triggers.each | Hash $trig |
{
  if ( $trig['weeksInterval'] or $trig['daysOfTheWeek'] )
  {
    #  Weekly Trigger:
    if ( $trig['weeksInterval'] )
    {
      $weeksInterval = $trig['weeksInterval']
    }
    else
    {
      $weeksInterval = 1
    }
-%>
$trigs += New-ScheduledTaskTrigger -Weekly -At "<%= $trig['atDateTime'] -%>" -WeeksInterval <%= $weeksInterval %> -DaysOfWeek <%= $trig['daysOfTheWeek'].join(",") %>;
<%
  }
  else
  {
    if ( $trig['repetitionDuration'] )
    {
      $repDuration = "<%= $trig['repetitionDuration'] -%>"
    }
    else
    {
      $repDuration = "([TimeSpan]::MaxValue)"
    }
#  Daily Trigger:
-%>
$trigs += New-ScheduledTaskTrigger -Once -At "<%= $trig['atDateTime'] -%>" -RepetitionInterval "<%= $trig['repetitionInterval'] -%>" -RepetitionDuration <%= $repDuration -%>;
<%
  }
}
-%>
if ( $trigs )
{
  $params.Add("Trigger", $trigs);
}

<% if ( $path == "" )
{
  $taskPath = "\\"
}
else
{
  $taskPath = $path
}
-%>
$sett = New-ScheduledTaskSettingsSet -ExecutionTimeLimit "<%= $executionTimeLimit -%>" -RunOnlyIfIdle:$false -DontStopOnIdleEnd;
$task = Register-ScheduledTask -TaskName "<%= $name -%>" -TaskPath "<%= $taskPath -%>" -Action $acts -Force -User "<%= $userName -%>" -Settings $sett<% if ( $password != "" ) { %> -Password "<%= $password -%>"<% } %> -RunLevel Highest @params;
<% if ( $deployEnabled == false ) { -%>
$task = $task | Disable-ScheduledTask;
<% } -%>
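
For the isPowerShell branch, the template builds a powershell.exe action whose -command argument is the supplied script line with embedded double quotes escaped by backticks. Here is a rough Python sketch of that composition (illustration only; the real escaping is done by the template's regsubst call):

```python
def powershell_action(command: str) -> tuple:
    """Compose the executable and argument string for a scheduled-task
    action that runs a PowerShell one-liner, escaping embedded double
    quotes with backticks as PowerShell expects."""
    escaped = command.replace('"', '`"')
    args = f'-noprofile -command "{escaped}"'
    return "powershell.exe", args

exe, args = powershell_action('c:\\scripts\\run.ps1 -Name "value 1"')
print(args)  # -noprofile -command "c:\scripts\run.ps1 -Name `"value 1`""
```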

You can get both in my PuppetResources GitHub repo here.

Here is an example of a sample Scheduled Task:

mymodule::scheduled_task::windows_scheduled_task { 'Sample Scheduled Task':
    userName          =>  'MyTaskUserName',
    password          =>  'MyTaskPassword',
    deployEnabled     =>  true,
    description       => 'This task does some stuff.',
    actions           => [{
      command             => "c:\\scripts\\test-powershellscript.ps1",
      isPowerShell        => true
    }],
    triggers              => [{
      atDateTime          => "9/1/2016 11:00 PM",
      weeksInterval       => 1,
      daysOfTheWeek       => ["Monday","Tuesday","Wednesday","Thursday","Friday"]
    }],
  }

Enjoy!