GitLab – Webhooks Stop Working – Internal Error 500

After my latest update to GitLab (to version 10.7.3 EE), I found that webhooks on several of my repositories had stopped working. When I manually tested them from the GitLab interface, I got a generic 500 error:

Gitlab 500

I figured this had something to do with the recent update, so I dug into the logs.  I found the issue in /var/log/gitlab/gitlab-rails/production.log:

Gitlab::HTTP::BlockedUrlError (URL 'https://myserver.domain.com:8170/code-manager/v1/webhook?type=github&token=blahblahblahblah' is blocked: Requests to the local network
are not allowed):
lib/gitlab/proxy_http_connection_adapter.rb:17:in `rescue in connection’
lib/gitlab/proxy_http_connection_adapter.rb:14:in `connection’
app/services/web_hook_service.rb:73:in `make_request’
app/services/web_hook_service.rb:26:in `execute’
app/models/hooks/web_hook.rb:10:in `execute’
app/services/test_hooks/base_service.rb:22:in `block in execute’
app/services/test_hooks/base_service.rb:19:in `catch’
app/services/test_hooks/base_service.rb:19:in `execute’
app/controllers/projects/hooks_controller.rb:41:in `test’
lib/gitlab/i18n.rb:50:in `with_locale’
lib/gitlab/i18n.rb:56:in `with_user_locale’
app/controllers/application_controller.rb:334:in `set_locale’
lib/gitlab/middleware/multipart.rb:95:in `call’
lib/gitlab/request_profiler/middleware.rb:14:in `call’
ee/lib/gitlab/jira/middleware.rb:15:in `call’
lib/gitlab/middleware/go.rb:17:in `call’
lib/gitlab/etag_caching/middleware.rb:11:in `call’
lib/gitlab/middleware/read_only/controller.rb:28:in `call’
lib/gitlab/middleware/read_only.rb:16:in `call’
lib/gitlab/request_context.rb:18:in `call’
lib/gitlab/metrics/requests_rack_middleware.rb:27:in `call’
lib/gitlab/middleware/release_env.rb:10:in `call’

The whole “requests to the local network are not allowed” thing was new to me, so I found this:

https://docs.gitlab.com/ee/user/project/integrations/webhooks.html

The comments at the bottom showed me the way. There is a new setting you have to enable:

  1. Log into GitLab
  2. Go to the Admin Area
  3. Go to Settings
  4. Go to Outbound Requests
  5. Enable the “Allow requests to the local network from hooks and services” setting.
  6. Save the changes

Allow Outbound

 

Voilà! It works now.
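If you’d rather script the change than click through the admin area, the same setting can be flipped through GitLab’s Application Settings API. Here’s a minimal PowerShell sketch, assuming an admin personal access token; allow_local_requests_from_hooks_and_services is the attribute name in the 10.x API, and newer releases rename it, so check the Application Settings API docs for your version:

# Toggle the setting via the API instead of the UI (sketch).
$gitlabUrl = "https://gitlab.mydomain.com"        # your GitLab base URL
$token     = "REPLACE_WITH_ADMIN_TOKEN"           # admin personal access token

Invoke-RestMethod -Method Put `
    -Uri "$gitlabUrl/api/v4/application/settings?allow_local_requests_from_hooks_and_services=true" `
    -Headers @{ "PRIVATE-TOKEN" = $token }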

Setting Up a vSphere Service Account for Pivotal BOSH Director using PowerCLI

BOSH Director requires a fairly powerful vCenter service account to do all of the things it does.

The list of permissions required is here, and it’s extensive.

You can always take the shortcut and make your account an Administrator of the vSphere environment, but that violates the whole “least privilege” principle and I don’t like that in production environments.

I wrote a working PowerCLI function that will automatically create this vCenter role and add the specified user or group to it.

It greatly reduces the time to set this up.  Hope this helps someone out.
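In rough outline, the function builds a role from a list of privilege IDs and then grants it to the service account at the vCenter root. The sketch below shows that idea only; the privilege IDs are a small illustrative subset, not the full list BOSH Director needs (use the documented list), and the server, role, and account names are placeholders.

# Create a vCenter role from a privilege list and assign it (PowerCLI sketch).
Import-Module VMware.PowerCLI
Connect-VIServer -Server vcenter.mydomain.com

$roleName  = "BOSH_Director"
$principal = "MYDOMAIN\svc-bosh"

# Illustrative subset only -- substitute the full BOSH Director privilege list.
$privilegeIds = @(
    "Datastore.AllocateSpace",
    "Datastore.Browse",
    "VirtualMachine.Inventory.Create",
    "VirtualMachine.Inventory.Delete"
)
$privileges = Get-VIPrivilege -Id $privilegeIds

# Create the role if it doesn't already exist.
$role = Get-VIRole -Name $roleName -ErrorAction SilentlyContinue
if (-not $role) { $role = New-VIRole -Name $roleName -Privilege $privileges }

# Grant the role at the root folder so it propagates down the inventory.
$rootFolder = Get-Folder -NoRecursion
New-VIPermission -Entity $rootFolder -Principal $principal -Role $role -Propagate:$true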

 

Pivotal BOSH Director Setup Error – Could not find VM for stemcell ‘sc-b0131c8f-ef44-456b-8e7c-df3951236d29’

I was trying to install Pivotal Kubernetes Services (PKS) on vSphere. I set up the initial Operations Manager .ova appliance without issue. Since I was deploying on vSphere, I needed to configure the BOSH Director installation through the vSphere tile next. I ran through the configuration and tried to deploy once… and it failed. I tried again, and was dead-stopped at the above error over and over. I believe this came up because I deleted the bosh/0 VM and tried to have the installer run again.

When in this state, it continually fails with the following error:

Could not find VM for stemcell ‘sc-b0131c8f-ef44-456b-8e7c-df3951236d29’

I had no idea what that meant, so I found this on the tech support site:
https://discuss.pivotal.io/hc/en-us/articles/115000488247-OpsManager-Install-Updates-error-Could-not-find-VM-for-stemcell-xxxxx-

Same error, but I didn’t even have BOSH Director set up yet, so it didn’t apply.

The full log readout is below:

{"type": "step_started", "id": "bosh_product.deploying"}
===== 2018-05-10 20:29:13 UTC Running "/usr/local/bin/bosh --no-color --non-interactive --tty create-env /var/tempest/workspaces/default/deployments/bosh.yml"
Deployment manifest: '/var/tempest/workspaces/default/deployments/bosh.yml'
Deployment state: '/var/tempest/workspaces/default/deployments/bosh-state.json'

Started validating
Validating release ‘bosh’… Finished (00:00:00)
Validating release ‘bosh-vsphere-cpi’… Finished (00:00:00)
Validating release ‘uaa’… Finished (00:00:01)
Validating release ‘credhub’… Finished (00:00:00)
Validating release ‘bosh-system-metrics-server’… Finished (00:00:01)
Validating release ‘os-conf’… Finished (00:00:00)
Validating release ‘backup-and-restore-sdk’… Finished (00:00:04)
Validating cpi release… Finished (00:00:00)
Validating deployment manifest… Finished (00:00:00)
Validating stemcell… Finished (00:00:03)
Finished validating (00:00:12)

Started installing CPI
Compiling package ‘ruby-2.4-r3/8471dec5da9ecc321686b8990a5ad2cc84529254’… Finished (00:00:00)
Compiling package ‘iso9660wrap/82cd03afdce1985db8c9d7dba5e5200bcc6b5aa8’… Finished (00:00:00)
Compiling package ‘vsphere_cpi/3049e51ead9d72268c1f6dfb5b471cbc7e2d6816’… Finished (00:00:00)
Installing packages… Finished (00:00:00)
Rendering job templates… Finished (00:00:01)
Installing job ‘vsphere_cpi’… Finished (00:00:00)
Finished installing CPI (00:00:02)

Starting registry… Finished (00:00:00)
Uploading stemcell ‘bosh-vsphere-esxi-ubuntu-trusty-go_agent/3541.12’… Skipped [Stemcell already uploaded] (00:00:00)

Started deploying
Creating VM for instance ‘bosh/0’ from stemcell ‘sc-b0131c8f-ef44-456b-8e7c-df3951236d29’… Failed (00:00:02)
Failed deploying (00:00:02)

Stopping registry… Finished (00:00:00)
Cleaning up rendered CPI jobs… Finished (00:00:00)

Deploying:
Creating instance ‘bosh/0’:
Creating VM:
Creating vm with stemcell cid ‘sc-b0131c8f-ef44-456b-8e7c-df3951236d29’:
CPI 'create_vm' method responded with error: CmdError{"type":"Unknown","message":"Could not find VM for stemcell 'sc-b0131c8f-ef44-456b-8e7c-df3951236d29'","ok_to_retry":false}

Exit code 1
===== 2018-05-10 20:29:31 UTC Finished "/usr/local/bin/bosh --no-color --non-interactive --tty create-env /var/tempest/workspaces/default/deployments/bosh.yml"; Duration: 18s; Exit Status: 1
{"type": "step_finished", "id": "bosh_product.deploying"}
Exited with 1.

I did end up resolving this by deleting the bosh-state.json file. It apparently held some erroneous setup info about the stemcells that was causing the setup process to try to use a stemcell it had not yet downloaded.

I was able to SSH into the PKS Operations Manager VM and run this to fix it:

sudo rm /var/tempest/workspaces/default/deployments/bosh-state.json

Then, I was able to re-run the deployment with success.

Installing the Cloud Foundry User Account and Authentication Client (cf-uaac) on Windows

I’ve been doing some playing around with Pivotal Cloud Foundry and Kubernetes and ran into an issue where, during the setup, I needed to use their cf-uaac tool (written in Ruby) to complete the setup and manage authentication to the system.

There are a lot of instructions out there on how to do this on Linux and pretty much all of them assume you have an Internet connection. I found not only can you install this on Windows, but you can do so on a machine that does not have Internet access.

Below, I detail how to install cf-uaac on Ubuntu Linux and Windows both with and without an Internet connection.

Prerequisites for Either Installation Method

Whether or not you have Internet access on your target machine, you need to follow these steps to set up your machine to use the Ruby gems.

For Linux
# Build-essential is a prerequisite for a lot of Ruby gems.
apt install -y build-essential ruby ruby-dev
For Windows
  • Download Ruby (with the DevKit): https://rubyinstaller.org/downloads
  • Install MSYS2
    • The DevKit installer will do this for you if your machine has Internet access.
    • Otherwise, the installer will run with errors and you have to manually install it afterwards from here.
  • Make sure c:\Rubyxx-x64\bin is in your PATH environment variable (where xx is the current Ruby version)
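If you want to script that last PATH check, here is a quick PowerShell sketch. The Ruby folder name is an example only (adjust for your installed version), and you need an elevated session to modify the machine-level PATH:

# Ensure the Ruby bin directory is on the machine PATH (example path shown).
$rubyBin = "C:\Ruby25-x64\bin"
$machinePath = [Environment]::GetEnvironmentVariable('Path', 'Machine')
if (($machinePath -split ';') -notcontains $rubyBin) {
    [Environment]::SetEnvironmentVariable('Path', "$machinePath;$rubyBin", 'Machine')
    $env:Path += ";$rubyBin"   # update the current session too
}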

Installing cf-uaac From the Internet

This is pretty easy and detailed in a lot of other places on the Internet. For brevity, I included quick instructions here:

For Either Windows or Linux
gem install cf-uaac

Installing cf-uaac Without a Direct Internet Connection

This method assumes you have a workstation that has Internet access from which you can download the gems. Then, you can copy them to the target machine that you need to run uaac from.

CF-UAAC has a list of required gems (as of this writing):

rack-1.6.9.gem
highline-1.6.21.gem
cookiejar-0.3.3.gem
addressable-2.5.2.gem
launchy-2.4.3.gem
eventmachine-1.2.5.gem
em-http-request-1.1.5.gem
httpclient-2.8.3.gem
cf-uaac-4.1.0.gem
json_pure-1.8.6.gem
public_suffix-3.0.2.gem
em-socksify-0.3.2.gem
multi_json-1.12.2.gem
cf-uaa-lib-3.13.0.gem
http_parser.rb-0.6.0.gem

Note that cf-uaac doesn’t require (and in some cases doesn’t allow) the latest versions of all of these gems. You need to make sure you observe the version requirements as listed. For instance, the runtime dependencies for cf-uaac are currently:

uaac-requirements

You need em-http-request version >= 1.1.2 and < 1.2. For more info on pessimistic versioning and constraints in Ruby, see this article.

Download each gem by visiting its page on Rubygems.org and clicking the “Download” link on the page.

Once you have each gem (and each gem’s dependencies) downloaded, you can move the .gem files you downloaded to somewhere reachable by your target machine.
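If you’d rather script the downloads on the Internet-connected workstation than click through the site, gem fetch pulls a specific version of a gem into the current directory. A sketch is below; it assumes that workstation also has Ruby installed, shows only a few of the gems, and uses C:\temp\PKS as an example target folder. Include every gem and version from the list above:

# Download pinned gem versions with 'gem fetch' (run on the online machine).
$gems = @{
    "cf-uaac"         = "4.1.0"
    "cf-uaa-lib"      = "3.13.0"
    "rack"            = "1.6.9"
    "highline"        = "1.6.21"
    "em-http-request" = "1.1.5"
}

New-Item -ItemType Directory -Path C:\temp\PKS -Force | Out-Null
Set-Location C:\temp\PKS

foreach ($name in $gems.Keys) {
    # 'gem fetch' downloads the .gem file without installing it.
    gem fetch $name -v $gems[$name]
}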

Installing On Linux
# Install a single gem:
gem install --local /tmp/mygem.gem

# Or install a series of gems from a directory:
for file in /tmp/PKS/*.gem; do gem install --local "$file"; done
Installing On Windows
# Install a single gem:
gem install --local c:\temp\mygem.gem

# Install a series of gems from a directory:
Get-Item -Path "c:\temp\PKS\*.gem" | Sort-Object -Property Name | Foreach-Object { gem install --local "$($_.FullName)" }

Once these steps are complete, the uaac binary should be added to the Ruby\bin (Windows) or /usr/local/bin (Linux) path and can be executed by typing uaac from your console (PowerShell or Bash).

Most issues I had getting this working were because the prerequisites weren’t present. Make sure build-essential, ruby and ruby-dev are installed on Linux machines and that Ruby with the devkit and MSYS2 is installed on Windows machines.

With all of this done, I was able to manage my PKS UAA component from the CLI on my Windows and Linux machines.

Using the Puppet CA API From Windows

Puppet Enterprise exposes a number of RESTful APIs that can be used to help automate the solution and integrate it with other things. One need I’ve run into is revoking and removing certificates for Puppet nodes in an automated fashion. My previous approach involved using SSH to connect to the Puppet Master server and run the puppet cert clean command, but I’m not a huge fan of that. With some effort, I found out how to talk to the API using Postman and PowerShell in a Windows environment. Postman was good for initial testing of the API, while I use PowerShell to fully automate solutions. I’ve outlined a step-by-step guide on how to set this up below:

Basics

The base URI for the puppet CA API is:

https://*puppet master server FQDN*:8140/puppet-ca/v1

The default port is 8140, which is configurable.

Authorization

Authorization and authentication were the most difficult parts for me to figure out. Unlike the other API endpoints in Puppet, you don’t use the normal token method. The CA API uses certificate authentication, and authorization is granted based on the Subject Name of the certificate your client presents to the Puppet server. By default, the ONLY machine allowed to talk to the endpoint is your Puppet Master server itself, so without modification you can’t do much with the API.

You can change the authorization rules to allow other machines to connect. You can see the configuration for this in /etc/puppetlabs/puppetserver/conf.d/auth.conf:

{
    "allow-unauthenticated": true,
    "match-request": {
        "method": "get",
        "path": "/puppet-ca/v1/certificate/",
        "query-params": {},
        "type": "path"
    },
    "name": "puppetlabs certificate",
    "sort-order": 500
},
{
    "allow": [
        "puppetmaster.domain.com"
    ],
    "match-request": {
        "method": [
            "get",
            "put",
            "delete"
        ],
        "path": "/puppet-ca/v1/certificate_status",
        "query-params": {},
        "type": "path"
    },
    "name": "puppetlabs certificate status",
    "sort-order": 500
},
{
    "allow-unauthenticated": true,
    "match-request": {
        "method": "get",
        "path": "/puppet-ca/v1/certificate_revocation_list/ca",
        "query-params": {},
        "type": "path"
    },
    "name": "puppetlabs crl",
    "sort-order": 500
},
{
    "allow-unauthenticated": true,
    "match-request": {
        "method": [
            "get",
            "put"
        ],
        "path": "/puppet-ca/v1/certificate_request",
        "query-params": {},
        "type": "path"
    },
    "name": "puppetlabs csr",
    "sort-order": 500
}

You’ll see an array of rules defined in this file, each one granting access to particular API endpoints. In this case, I’m most concerned with the certificate endpoints shown above. (For details on the layout of this file, see Puppet’s Docs here)

The endpoint rules that specify "allow-unauthenticated" are freely accessible without authentication, so most of this article doesn’t apply to them. Just make a call from Postman or curl like normal.

However, the certificate_status endpoint has an "allow" property, which lists all of the nodes that are allowed to access the endpoint. By default, only your Puppet Master server’s name appears here.

Normally, you could probably add entries to this list, restart your Puppet Master services, and go. The issue is this file is actually managed by Puppet, and your changes would be overwritten the next time the Puppet agent runs.

This list is actually governed by the puppet_enterprise::profile::certificate_authority::client_whitelist setting, which can be set a couple of ways. The first way is to log into the Puppet Master GUI and do the following:

  1. Go to Inventory and select your Puppet Master server
  2. Select the “Groups” tab and click the PE Certificate Authority Group
  3. Click the “Classes” tab
  4. Set the client_whitelist parameter under puppet_enterprise::profile::certificate_authority

certificate_authority

Normally, this would work, but when the Puppet agent runs you might get the following error:

Error: Could not retrieve catalog from remote server: Error 400 on SERVER: Duplicate declaration: Class[Puppet_enterprise::Profile::Master] is already declared; cannot redeclare on node

The workaround I found in a Q&A article suggested adding the setting to common.yaml and letting Hiera set it instead. This worked well for me. My common.yaml file looks like this:

# Allows the listed machines to communicate with the puppet-ca API:
puppet_enterprise::profile::certificate_authority::client_whitelist:
  - server1.mydomain.com
  - server2.mydomain.com

Once this was pushed to the Puppet Master server, I did a Puppet agent run using puppet agent -t from the server and it applied the settings. Checking auth.conf again, I now see this:

{
    "allow": [
        "puppetmaster.domain.com",
        "server1.domain.com",
        "server2.domain.com"
    ],
    "match-request": {
        "method": [
            "get",
            "put",
            "delete"
        ],
        "path": "/puppet-ca/v1/certificate_status",
        "query-params": {},
        "type": "path"
    },

Now that my servers are authorized to access the API, I can make calls using a client certificate to authenticate to the API.

Authentication

The next sections show you how to set up Postman and PowerShell to authenticate to the API. If you set up your authorization correctly as shown above, you should be able to hit the APIs.

Using Postman

To use client certificate authentication to the Puppet API, you can set up Postman using the following method:

Import the cert into Postman:

  1. Click Settings in Postman
  2. Go to Certificates
  3. Click the “Add Certificate” link
  4. Add the cert using the following settings
    • Host – Specify the FQDN of the host you want to present the cert to. Don’t specify any of the URI path, just the FQDN and port.
    • CRT File – Use the PEM file in the certs directory
    • KEY File – Use the PEM file in the private_keys directory
    • NO passphrase

Postman_client_cert

Once that is done, you can issue a GET command to a URI like this and get a response:

https://puppetmasterserver.domain.com:8140/puppet-ca/v1/certificate_statuses/key

The “key” portion of the URI is required, but the word “key” is arbitrary. I think you can pretty much type anything you want there.

This yields a response much like the following:

cert_statuses

If you get a “Forbidden” error, you either have the URI slightly wrong or you don’t have the authorization correct. The array of names in the “allow” section of the API rule MUST match the Subject Name of the certificate.

Using PowerShell

To get this to work with PowerShell, you have to export your Puppet certificate and private key as a PFX and reference it in an Invoke-RestMethod call.

To create a PFX from the certs, do the following:

  1. Install Openssl
      • If you have Git for Windows installed, you already have this. Just change to c:\program files\Git\usr\bin
  2. Run the following
& "C:\Program Files\Git\usr\bin\openssl.exe" pkcs12 -export -out "c:\temp\server1.domain.com.pfx" -inkey "C:\ProgramData\PuppetLabs\puppet\etc\ssl\private_keys\server1.domain.com.pem" -in "C:\ProgramData\PuppetLabs\puppet\etc\ssl\certs\server1.domain.com.pem"

Don’t specify an export password.

Once that is done, call the following Cmdlet:

Invoke-RestMethod -Uri "https://puppetmaster.domain.com:8140/puppet-ca/v1/certificate_statuses/key" -Certificate (Get-PfxCertificate -FilePath C:\temp\server1.domain.com.pfx) -Headers @{"Content-Type" = "application/json" }

Voilà! That’s it.
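Since my original goal was revoking and removing node certificates, here is what that looks like against the certificate_status endpoint: a PUT that marks the cert revoked, then a DELETE that removes it. This is a sketch using the same PFX as above; the node name is a placeholder, and older Puppet versions may expect text/pson rather than application/json as the content type.

# Revoke and then delete a node's certificate via the CA API (sketch).
$cert = Get-PfxCertificate -FilePath C:\temp\server1.domain.com.pfx
$uri  = "https://puppetmaster.domain.com:8140/puppet-ca/v1/certificate_status/node1.domain.com"

# 1. Revoke the certificate (adds it to the CRL).
Invoke-RestMethod -Method Put -Uri $uri -Certificate $cert `
    -ContentType "application/json" -Body '{"desired_state":"revoked"}'

# 2. Delete the certificate and any pending request from the CA.
Invoke-RestMethod -Method Delete -Uri $uri -Certificate $cert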

Puppet File Sync Not Working – LOCK_FAILURE

I had a recent issue where Puppet was not properly syncing code from the code-staging directory to the code directory.  I verified it was pulling the new code from my Git repository to code-staging without issue.  However, file-sync was not pushing the new code to the code directory.

Here is what I was seeing in /var/log/puppetlabs/puppetserver/puppetserver.log:

2017-06-26 11:08:49,026 ERROR [clojure-agent-send-off-pool-3] [p.e.file-sync-errors] Error syncing repo :puppet-code: File sync successfully fetched from the server repo, but update-ref result was LOCK_FAILURE on 8c346001ee2f834a4be05d3d9788d2d712b212c5. Name: puppet-code. Directory: /opt/puppetlabs/server/data/puppetserver/filesync/client/puppet-code.git.
2017-06-26 11:08:54,051 ERROR [clojure-agent-send-off-pool-3] [p.e.file-sync-errors] Error syncing repo :puppet-code: File sync successfully fetched from the server repo, but update-ref result was LOCK_FAILURE on 8c346001ee2f834a4be05d3d9788d2d712b212c5. Name: puppet-code. Directory: /opt/puppetlabs/server/data/puppetserver/filesync/client/puppet-code.git.
2017-06-26 11:08:59,077 ERROR [clojure-agent-send-off-pool-3] [p.e.file-sync-errors] Error syncing repo :puppet-code: File sync successfully fetched from the server repo, but update-ref result was LOCK_FAILURE on 8c346001ee2f834a4be05d3d9788d2d712b212c5. Name: puppet-code. Directory: /opt/puppetlabs/server/data/puppetserver/filesync/client/puppet-code.git.
2017-06-26 11:09:04,103 ERROR [clojure-agent-send-off-pool-3] [p.e.file-sync-errors] Error syncing repo :puppet-code: File sync successfully fetched from the server repo, but update-ref result was LOCK_FAILURE on 8c346001ee2f834a4be05d3d9788d2d712b212c5. Name: puppet-code. Directory: /opt/puppetlabs/server/data/puppetserver/filesync/client/puppet-code.git.
2017-06-26 11:09:09,129 ERROR [clojure-agent-send-off-pool-3] [p.e.file-sync-errors] Error syncing repo :puppet-code: File sync successfully fetched from the server repo, but update-ref result was LOCK_FAILURE on 8c346001ee2f834a4be05d3d9788d2d712b212c5. Name: puppet-code. Directory: /opt/puppetlabs/server/data/puppetserver/filesync/client/puppet-code.git.
2017-06-26 11:09:14,155 ERROR [clojure-agent-send-off-pool-3] [p.e.file-sync-errors] Error syncing repo :puppet-code: File sync successfully fetched from the server repo, but update-ref result was LOCK_FAILURE on 8c346001ee2f834a4be05d3d9788d2d712b212c5. Name: puppet-code. Directory: /opt/puppetlabs/server/data/puppetserver/filesync/client/puppet-code.git.

I had no idea what this meant, and I wasn’t sure how to resolve it so I took a snapshot of my Puppet Master VM and tried a few things.

The first thing I tried was going to the directory indicated and taking a look:

ll /opt/puppetlabs/server/data/puppetserver/filesync/client/puppet-code/production.git/
total 44
drwxr-xr-x 7 pe-puppet pe-puppet 4096 Jun 26 11:03 ./
drwxr-xr-x 3 pe-puppet pe-puppet 4096 Apr 25 2016 ../
drwxr-xr-x 2 pe-puppet pe-puppet 4096 Apr 25 2016 branches/
-rw-r----- 1 pe-puppet pe-puppet 307 Jun 26 11:03 config
-rw-r----- 1 pe-puppet pe-puppet 148 Jun 26 10:28 FETCH_HEAD
-rw-r--r-- 1 pe-puppet pe-puppet 23 Apr 25 2016 HEAD
drwxr-xr-x 2 pe-puppet pe-puppet 4096 Apr 25 2016 hooks/
drwxr-xr-x 3 pe-puppet pe-puppet 4096 Apr 25 2016 logs/
drwxr-xr-x 4 pe-puppet pe-puppet 4096 Jun 26 10:28 objects/
drwxr-xr-x 4 pe-puppet pe-puppet 4096 Apr 25 2016 refs/
-rw-r----- 1 pe-puppet pe-puppet 41 Jun 26 10:28 synced-commit

/opt/puppetlabs/server/data/puppetserver/filesync/client/puppet-code.git/production.git had the same contents, except for one extra file:

synced-commit.lock

I wasn’t sure that file belonged there, so I removed it. Once I did, the file-sync service stopped throwing errors and successfully synced my files!

Hope this helps!

Disabling SSL Certificate Validation with PowerShell

I’ve run into this issue about a billion times.  Mostly, I see it when I’m coding against a web API on a device with a bad or partially-valid self-signed cert.

I’ve seen several articles on how to disable the SSL validation check, but have had only limited success with them.  I finally found an approach out there that works for all of my use cases, and wrapped a nice function around it.  I’m publishing it here in hopes it helps people out someday.

Basically, call this either to enable or disable SSL certificate validation.  It is safe to run multiple times in the same session and doesn’t throw any errors.

Here it is:
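In rough outline, the approach compiles a permissive ICertificatePolicy once per session and then swaps it in or restores the original. The sketch below shows that general technique; the function and type names are placeholders rather than the exact ones in the embedded version, and it targets the .NET Framework stack that Invoke-WebRequest/Invoke-RestMethod use in Windows PowerShell 5.x.

function Set-SslCertificateValidation {
    param(
        # Turn validation off; omit the switch to restore the previous policy.
        [switch] $Disable
    )

    # Compile the permissive policy once per session; re-running is harmless.
    if (-not ("TrustAllCertsPolicy" -as [type])) {
        Add-Type -TypeDefinition @"
using System.Net;
using System.Security.Cryptography.X509Certificates;
public class TrustAllCertsPolicy : ICertificatePolicy
{
    public bool CheckValidationResult(ServicePoint sp, X509Certificate cert,
                                      WebRequest request, int problem)
    {
        return true;   // accept every certificate, valid or not
    }
}
"@
    }

    if ($Disable) {
        # Remember the current policy so it can be restored later.
        if (-not $script:OriginalCertPolicy) {
            $script:OriginalCertPolicy = [System.Net.ServicePointManager]::CertificatePolicy
        }
        [System.Net.ServicePointManager]::CertificatePolicy = New-Object TrustAllCertsPolicy
    }
    elseif ($script:OriginalCertPolicy) {
        [System.Net.ServicePointManager]::CertificatePolicy = $script:OriginalCertPolicy
    }
}

# Usage:
#   Set-SslCertificateValidation -Disable   # stop validating certificates
#   Set-SslCertificateValidation            # put the original policy back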

Automating MAK Proxy Activation with PowerShell

I ran into a need recently where I had to activate Windows on new machines in an automated fashion.  The issue was that the environment did not use KMS, but instead activated new machines using a MAK key.  The machines being activated did not have Internet access, so they had to be activated via proxy.

There is a great article on how to do this using the Volume Activation Management Tool (VAMT) here.  Basically, enable Internet access (or at least access to the MS Activation servers) to a machine with the VAMT installed and you can use the GUI to activate it.  If you need to automate it, you can see instructions on the PowerShell commands for VAMT here.

This all works very well, but it wasn’t complete for my needs. I needed to have a server other than the VAMT server initiate the activation. To do this, I wrapped the VAMT commands I needed in a PowerShell function, detailed further below. With this function, you can have any server issue the commands to the VAMT server to add and activate multiple servers on your network in an automated fashion.

I found one big caveat though: you need to enable Kerberos delegation for BOTH the VAMT server and the server running this function. This is done by issuing the command below in PowerShell for each of the two computer accounts:

Set-ADComputer -Identity <VamtServerName> -TrustedForDelegation $true
Set-ADComputer -Identity <CallingServerName> -TrustedForDelegation $true

The reason for this is the server running this function must pass the credentials of the user running it to the VAMT cmdlets so they can run.  In turn, the Find-VamtManagedMachine cmdlet must also pass those credentials to Active Directory to look the machine up.  If you forget to do this, you will get errors.

Here is the function:

 

Hopefully, this is of use to others.

Encrypting Credentials In PowerShell Scripts

I have a long-standing dislike of hard-coding credentials in scripts.  In a production environment, it’s never a good idea to leave sensitive account passwords hard-coded in plain text in scripts.  To that end, I’ve developed an easy method in PowerShell to protect sensitive information.

The functions I present below allow you to store usernames and passwords, where the passwords are encrypted, in a form that can be later decrypted inside a script.  By default, only the user account that encrypted the credentials can decrypt them, and only from that same machine.  It all uses native .NET functionality, so you don’t need any third-party tools to get it working.

Where I find this most useful is for services or scheduled tasks that run as system accounts that execute PowerShell scripts.  You can log into the machine as that service account, encrypt a set of credentials, then when that scheduled task runs as that service account it is able to read them.

Using the export function I show below, you can either export your credentials to an xml file on the file system, or a registry value in the Windows registry.

Here is an example:

First, save the credential to a variable and export it to an xml file:

$cred = Get-Credential username
$cred | Export-PSCredential -Path c:\temp\creds.xml

This outputs the path to the xml file you created with the encrypted credentials:

Export-PSCredential

Alternately, you can export to a registry key instead:

$cred = Get-Credential username
$cred | Export-PSCredential -RegistryPath HKCU:\software\test -Name mycreds

In the registry, you can see your exported credentials:

Export-Registry

The major thing that needs to be understood about this is the encryption key that is used to encrypt these credentials is tied to both the userid used to encrypt them AND the machine you encrypted from.  Unless you specify a keyphrase, you cannot decrypt these credentials as another user or from another machine.  The idea is if you have a script that reads these encrypted credentials, you have to log in as the user the script runs as on the machine the script runs from and encrypt them.  However, as described above, if you provide a keyphrase, you can decrypt them from anywhere as any user.  You just have to somehow protect the keyphrase.

Importing the credentials again is pretty simple:

$cred = Import-PSCredential -Path C:\temp\creds.xml
# OR
$cred = Import-PSCredential -RegistryPath HKCU:\Software\test -Name mycreds

Import-PSCredential

Specifying a keyphrase involves specifying the -KeyPhrase parameter on either the import or export function.

Below is the code.  Simply paste these three functions into your PowerShell session or into your script and away you go.

Note the Get-EncryptionKey function is required for both the import and export functions!
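In rough outline, the three functions look like this. This is a condensed sketch that follows the parameter names from the usage examples above (-Path, -RegistryPath, -Name, -KeyPhrase); the storage layout for the registry case (a small JSON string in one value) is an illustrative choice and may differ from the embedded version.

function Get-EncryptionKey {
    # Derive a 256-bit AES key from a keyphrase. With no keyphrase, return
    # $null so ConvertFrom/To-SecureString fall back to DPAPI, which ties the
    # encryption to the current user and machine.
    param([string]$KeyPhrase)
    if (-not $KeyPhrase) { return $null }
    $sha256 = [System.Security.Cryptography.SHA256]::Create()
    return $sha256.ComputeHash([System.Text.Encoding]::UTF8.GetBytes($KeyPhrase))
}

function Export-PSCredential {
    param(
        [Parameter(Mandatory, ValueFromPipeline)][pscredential]$Credential,
        [string]$Path,
        [string]$RegistryPath,
        [string]$Name,
        [string]$KeyPhrase
    )
    process {
        $key = Get-EncryptionKey -KeyPhrase $KeyPhrase
        if ($key) {
            $encrypted = $Credential.Password | ConvertFrom-SecureString -Key $key
        } else {
            $encrypted = $Credential.Password | ConvertFrom-SecureString
        }
        $export = [pscustomobject]@{ UserName = $Credential.UserName; Password = $encrypted }

        if ($RegistryPath) {
            # Store the username and encrypted password as one JSON string value.
            if (-not (Test-Path $RegistryPath)) { New-Item -Path $RegistryPath -Force | Out-Null }
            Set-ItemProperty -Path $RegistryPath -Name $Name -Value ($export | ConvertTo-Json -Compress)
            return "$RegistryPath\$Name"
        }

        $export | Export-Clixml -Path $Path
        return $Path
    }
}

function Import-PSCredential {
    param(
        [string]$Path,
        [string]$RegistryPath,
        [string]$Name,
        [string]$KeyPhrase
    )
    $key = Get-EncryptionKey -KeyPhrase $KeyPhrase
    if ($RegistryPath) {
        $stored = (Get-ItemProperty -Path $RegistryPath -Name $Name).$Name | ConvertFrom-Json
    } else {
        $stored = Import-Clixml -Path $Path
    }
    if ($key) {
        $password = $stored.Password | ConvertTo-SecureString -Key $key
    } else {
        $password = $stored.Password | ConvertTo-SecureString
    }
    New-Object System.Management.Automation.PSCredential ($stored.UserName, $password)
}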

vRealize Orchestrator HTTP-REST – Cannot execute the request; Read timed out

I recently stumbled upon an issue with the HTTP-REST plugin in vRO that took some experimentation to understand.  For some reason, I kept getting a “Read timed out” error when my workflows made a REST call that took more than 60 seconds to return a response.  There is an operationTimeout property you can set to govern this, but I found it is ignored under certain circumstances. It’s very confusing, since you can examine the operationTimeout property and it *appears* correct. I had to do quite a bit of testing to get to the bottom of the behavior.

In my implementation, I was using vRO 7.2 and transient HTTP-REST host objects to do my REST calls. I favored that approach over using the HTTP-REST configuration workflows to add and remove a host for every combination of operations I might someday invent; that approach seemed inflexible.

Here is my basic testing workflow:

Test Workflow

Here is the code in the script:

//  Username, password, and useTransientHost are input parameters.

var uri = "https://myrestapihost.domain.com/api/DoSomething/id44";
var method = "GET";
var body = "";  // For POST/PUT body content.  This has to be a JSON string. E.g.  body = "{ 'p1' : 1, 'p2' : 2 }";
var httpRestHost = null;

if ( useTransientHost )
{
  System.log("Using Transient host.");
  //  Create a dynamic REST host:

  var restHost = RESTHostManager.createHost("dynamicRequest");
  restHost.operationTimeout = 900;  //  This gets ignored!!!
  httpRestHost = RESTHostManager.createTransientHostFrom(restHost);
  httpRestHost.operationTimeout = 900;  //  Set it here too, just to be really really sure.
}
else
{
  System.log("Using NON-Transient host.");
  httpRestHost = RESTHostManager.getHost("71998784-d590-426d-8945-75ec0b1ad7b4");		//  Use the ID For your HTTP-REST host here
  httpRestHost.operationTimeout = 900;  //  This gets ignored!!!
}

System.log("OperationTimeout  is set to: " + httpRestHost.operationTimeout.toString());

//  Create the authentication:
var authParams = ['Shared Session', userName, password];
var authenticationObject = RESTAuthenticationManager.createAuthentication('Basic', authParams);
httpRestHost.authentication = authenticationObject;

//  Remove the endpoint from the URI:
var urlEndpointSplit = uri.split("/");
var urlEndpoint = urlEndpointSplit[urlEndpointSplit.length - 1];
uri = uri.split(urlEndpoint)[0];

httpRestHost.url = uri;

//  REST client only accepts method in all UPPER CASE:
method = method.toUpperCase();

var request = httpRestHost.createRequest(method, urlEndpoint, body);
request.contentType = "application/json";

System.debug("REST request to URI: " + method + " " + request.fullUrl);

var response = request.execute();   //  This should respect the 900-second timeout
System.debug("Response status Code: " + response.statusCode);

if ( response.contentAsString )
{
  System.debug("Response: " + response.contentAsString);
}

I added three input parameters:

  • userName
  • password
  • useTransientHost

When I call it using a transient host, it always times out in 60s, no matter what I set the operationTimeout setting to. Here is the output from the run:

[2017-03-28 11:51:55.740] [I] Using Transient host.
[2017-03-28 11:51:55.747] [I] OperationTimeout is set to: 900
[2017-03-28 11:51:55.751] [D] REST request to URI: GET https://myrestapihost.domain.com/api/DoSomething/id44
[2017-03-28 11:52:55.902] [E] Error in (Workflow:Example REST API Call / HTTP Rest Call (item1)#43) Cannot execute the request: ; Read timed out

You can see the 60s timeout despite the fact the operationTimeout property was set to 900.

I ran it again and referenced a non-transient HTTP host:

[2017-03-28 12:00:24.618] [I] Using NON-Transient host.
[2017-03-28 12:00:24.629] [I] OperationTimeout is set to: 900
[2017-03-28 12:00:24.634] [D] REST request to URI: GET https://myrestapihost.domain.com/api/DoSomething/id44
[2017-03-28 12:02:24.740] [E] Error in (Workflow:Example REST API Call / HTTP Rest Call (item1)#46) Cannot execute the request: ; Read timed out

In this case, it timed out in 120 seconds, not 60 (or 900). I found the 120 came from what I had entered for operationTimeout when I created the host using the HTTP-REST / Configuration / Add a REST host workflow:

Test Host Settings

So, in the end, the following appears to be true:

  • OperationTimeout defaults to 60 seconds.
  • Though it appears you can, you CANNOT override it by setting the operationTimeout property in code (the property should really be read-only if that is the case).
  • vRO instead uses the operationTimeout that was set on the HTTP-REST host object when you created (or updated) it using the configuration workflows.
  • Transient hosts always use the 60-second timeout; there is no way to override it.

However:

You CAN override just about everything else, including URI and authentication.

This means I can get around this by adding a dummy HTTP-REST host as per normal using the Add a REST host workflow:

TestHost1
TestHost2
TestHost3

The URL, authentication, and other settings do not matter; they can be overridden in your code as I did above in my example. The ONLY setting that matters is the operationTimeout (and perhaps the connectionTimeout, which stands to reason may have the same issue, but I never tested it).

Then reference the host ID as I did in the code above and override the URL, authentication, and whatever else you need to.

I’ve engaged VMware tech support to log this as a bug. I really think the operationTimeout should be settable or read-only.  We’ll see where that goes…

UPDATE 4/6/2017 – I just got final word back from VMware tech support engineering.  The behavior I noted above is normal behavior, and the workaround I proposed is the accepted workaround.  Nothing to see here…

I did file a feature request to either make the operationTimeout property programmatically changeable or mark it as read-only to reduce confusion.