Gitlab – Webhooks stop working – Internal Error 500

After my latest update to Gitlab (to version 10.7.3 ee), I found my webhooks on several of my repositories stopped working.  When I’d manually test them from the Gitlab Interface, I got a generic 500 error:

Gitlab 500

I figured this had something to do with the recent update, so I dug into the logs.  I found the issue in /var/log/gitlab/gitlab-rails/production.log:

Gitlab::HTTP::BlockedUrlError (URL '' is blocked: Requests to the local network
are not allowed):
lib/gitlab/proxy_http_connection_adapter.rb:17:in `rescue in connection'
lib/gitlab/proxy_http_connection_adapter.rb:14:in `connection'
app/services/web_hook_service.rb:73:in `make_request'
app/services/web_hook_service.rb:26:in `execute'
app/models/hooks/web_hook.rb:10:in `execute'
app/services/test_hooks/base_service.rb:22:in `block in execute'
app/services/test_hooks/base_service.rb:19:in `catch'
app/services/test_hooks/base_service.rb:19:in `execute'
app/controllers/projects/hooks_controller.rb:41:in `test'
lib/gitlab/i18n.rb:50:in `with_locale'
lib/gitlab/i18n.rb:56:in `with_user_locale'
app/controllers/application_controller.rb:334:in `set_locale'
lib/gitlab/middleware/multipart.rb:95:in `call'
lib/gitlab/request_profiler/middleware.rb:14:in `call'
ee/lib/gitlab/jira/middleware.rb:15:in `call'
lib/gitlab/middleware/go.rb:17:in `call'
lib/gitlab/etag_caching/middleware.rb:11:in `call'
lib/gitlab/middleware/read_only/controller.rb:28:in `call'
lib/gitlab/middleware/read_only.rb:16:in `call'
lib/gitlab/request_context.rb:18:in `call'
lib/gitlab/metrics/requests_rack_middleware.rb:27:in `call'
lib/gitlab/middleware/release_env.rb:10:in `call'

The whole “requests to the local network are not allowed” thing was new to me, so I found this:

The comments at the bottom showed me the way. There is a new setting you have to enable:

  1. Log into Gitlab
  2. Go to the admin area
  3. Go to Settings
  4. Go to Outbound Requests
  5. Click the “Allow requests to the local network from hooks and services” button.
  6. Save the changes

Allow Outbound


Voila! It works now.

Setting Up a vSphere Service Account for Pivotal BOSH Director using PowerCLI

BOSH Director requires a fairly powerful vCenter service account to do all of the things it does.

The list of permissions required is here, and it’s extensive.

You can always take the shortcut and make your account an Administrator of the vSphere environment, but that violates the whole “least privilege” principle and I don’t like that in production environments.

I wrote a working PowerCLI function that automatically creates this vCenter role and adds the specified user/group to it.

It greatly reduces the time to set this up. Hope this helps someone out.
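The function itself wasn't preserved in this copy of the post, but a minimal PowerCLI sketch of the approach looks like the following. The role name, principal, and the short privilege list here are illustrative placeholders only; the real privilege list is the extensive one from Pivotal's documentation.

```powershell
# Sketch only: create a restricted vCenter role for BOSH Director and
# grant it to a service account. Requires the VMware.PowerCLI module and
# an existing Connect-VIServer session.
function New-BoshDirectorRole {
    param(
        [string]$RoleName  = 'BOSH-Director',                 # illustrative
        [string]$Principal = 'VSPHERE.LOCAL\svc-bosh',        # illustrative
        [string[]]$PrivilegeIds = @(                          # small subset for illustration;
            'Datastore.AllocateSpace',                        # substitute the full list from
            'Datastore.Browse',                               # Pivotal's permissions docs
            'VirtualMachine.Inventory.Create',
            'VirtualMachine.Interact.PowerOn'
        )
    )
    # Resolve the privilege IDs into privilege objects and build the role.
    $privileges = Get-VIPrivilege -Id $PrivilegeIds
    $role = New-VIRole -Name $RoleName -Privilege $privileges
    # Grant the role at the root folder so it propagates down the inventory.
    New-VIPermission -Entity (Get-Folder -NoRecursion) -Principal $Principal -Role $role -Propagate:$true
}
```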


Pivotal BOSH Director Setup Error – Could not find VM for stemcell 'sc-b0131c8f-ef44-456b-8e7c-df3951236d29'

I was trying to install Pivotal Container Service (PKS) on vSphere. I set up the initial Operations Manager .ova appliance without issue. Since I was deploying on vSphere, I next needed to configure the BOSH Director installation through the vSphere tile. I ran through the configuration and tried to deploy once... and it failed. I tried again, and was dead-stopped at the above error over and over. I believe this came up because I deleted the bosh/0 VM and tried to have the installer run again.

When in this state, it continually fails with the following error:

Could not find VM for stemcell 'sc-b0131c8f-ef44-456b-8e7c-df3951236d29'

I had no idea what that meant, so I found this on the tech support site:

It described the same error, but I didn't even have BOSH Director set up yet, so it didn't apply.

The full log readout is below:

{"type": "step_started", "id": "bosh_product.deploying"}
===== 2018-05-10 20:29:13 UTC Running "/usr/local/bin/bosh --no-color --non-interactive --tty create-env /var/tempest/workspaces/default/deployments/bosh.yml"
Deployment manifest: '/var/tempest/workspaces/default/deployments/bosh.yml'
Deployment state: '/var/tempest/workspaces/default/deployments/bosh-state.json'

Started validating
Validating release 'bosh'... Finished (00:00:00)
Validating release 'bosh-vsphere-cpi'... Finished (00:00:00)
Validating release 'uaa'... Finished (00:00:01)
Validating release 'credhub'... Finished (00:00:00)
Validating release 'bosh-system-metrics-server'... Finished (00:00:01)
Validating release 'os-conf'... Finished (00:00:00)
Validating release 'backup-and-restore-sdk'... Finished (00:00:04)
Validating cpi release... Finished (00:00:00)
Validating deployment manifest... Finished (00:00:00)
Validating stemcell... Finished (00:00:03)
Finished validating (00:00:12)

Started installing CPI
Compiling package 'ruby-2.4-r3/8471dec5da9ecc321686b8990a5ad2cc84529254'... Finished (00:00:00)
Compiling package 'iso9660wrap/82cd03afdce1985db8c9d7dba5e5200bcc6b5aa8'... Finished (00:00:00)
Compiling package 'vsphere_cpi/3049e51ead9d72268c1f6dfb5b471cbc7e2d6816'... Finished (00:00:00)
Installing packages... Finished (00:00:00)
Rendering job templates... Finished (00:00:01)
Installing job 'vsphere_cpi'... Finished (00:00:00)
Finished installing CPI (00:00:02)

Starting registry... Finished (00:00:00)
Uploading stemcell 'bosh-vsphere-esxi-ubuntu-trusty-go_agent/3541.12'... Skipped [Stemcell already uploaded] (00:00:00)

Started deploying
Creating VM for instance 'bosh/0' from stemcell 'sc-b0131c8f-ef44-456b-8e7c-df3951236d29'... Failed (00:00:02)
Failed deploying (00:00:02)

Stopping registry... Finished (00:00:00)
Cleaning up rendered CPI jobs... Finished (00:00:00)

Creating instance 'bosh/0':
Creating VM:
Creating vm with stemcell cid 'sc-b0131c8f-ef44-456b-8e7c-df3951236d29':
CPI 'create_vm' method responded with error: CmdError{"type":"Unknown","message":"Could not find VM for stemcell 'sc-b0131c8f-ef44-456b-8e7c-df3951236d29'","ok_to_retry":false}

Exit code 1
===== 2018-05-10 20:29:31 UTC Finished "/usr/local/bin/bosh --no-color --non-interactive --tty create-env /var/tempest/workspaces/default/deployments/bosh.yml"; Duration: 18s; Exit Status: 1
{"type": "step_finished", "id": "bosh_product.deploying"}
Exited with 1.

I ended up resolving this by deleting the bosh-state.json file. It apparently held some stale setup information about the stemcells, which caused the setup process to try to use a stemcell it had not yet downloaded.

I was able to SSH into the PKS Operations Manager VM and run this to fix it:

sudo rm /var/tempest/workspaces/default/deployments/bosh-state.json

Then, I was able to re-run the deployment with success.

Installing CloudFoundry User Account and Authentication Client (cf-uaac) on Windows

I'm doing some playing around with Pivotal Cloud Foundry and Kubernetes and ran into a point during setup where I needed their cf-uaac tool (written in Ruby) to complete the configuration and manage authentication to the system.

There are a lot of instructions out there on how to do this on Linux and pretty much all of them assume you have an Internet connection. I found not only can you install this on Windows, but you can do so on a machine that does not have Internet access.

Below, I detail how to install cf-uaac on Ubuntu Linux and Windows both with and without an Internet connection.

Prerequisites for Either Installation Method

Whether or not you have Internet access on your target machine, you need to follow these steps to set up your machine to leverage the Ruby gems.

For Linux
# Build-essential is a prerequisite for a lot of Ruby gems.
apt install -y build-essential ruby ruby-dev
For Windows
  • Download ruby (with the devkit):
  • Install MSYS2
    • The devkit installer will do this for you if your machine has Internet access.
    • Otherwise, the installer will run with errors and you have to manually install it afterwards from here.
  • Make sure c:\Rubyxx-x64\bin is in your PATH environment variable (where xx is the current Ruby version)

Installing cf-uaac From the Internet

This is pretty easy and detailed in a lot of other places on the Internet. For brevity, I included quick instructions here:

For Either Windows or Linux
gem install cf-uaac

Installing cf-uaac Without a Direct Internet Connection

This method assumes you have a workstation that has Internet access from which you can download the gems. Then, you can copy them to the target machine that you need to run uaac from.

CF-UAAC has a list of required gems (as of this writing):


Note that cf-uaac doesn't require (and in some cases won't allow) the latest versions of all of these gems. You need to make sure you observe the version requirements as listed. For instance, the runtime dependencies for cf-uaac are currently:


You need em-http-request version >= 1.1.2 and < 1.2. For more info on pessimistic versioning and constraints in Ruby, see this article.
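As a quick illustration of how these constraints evaluate, RubyGems' own Gem::Requirement class can check candidate versions against them:

```ruby
require 'rubygems'  # provides Gem::Requirement and Gem::Version

# The cf-uaac constraint on em-http-request: >= 1.1.2 and < 1.2
req = Gem::Requirement.new('>= 1.1.2', '< 1.2')
puts req.satisfied_by?(Gem::Version.new('1.1.5'))  # => true
puts req.satisfied_by?(Gem::Version.new('1.2.0'))  # => false

# The pessimistic operator '~> 1.1.2' expresses the same range.
puts Gem::Requirement.new('~> 1.1.2').satisfied_by?(Gem::Version.new('1.1.7'))  # => true
```

This is handy when you're hand-picking gem downloads: check each candidate version locally before copying it over.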

Download each gem by visiting its page on the gem hosting site and clicking the “Download” link on the page.

Once you have each gem (and each gem’s dependencies) downloaded, you can move the .gem files you downloaded to somewhere reachable by your target machine.

Installing On Linux
# Install a single gem:
gem install --local /tmp/mygem.gem

# Or install a series of gems from a directory:
for file in /tmp/PKS/*.gem; do gem install --local "$file"; done
Installing On Windows
# Install a single gem:
gem install --local c:\temp\mygem.gem

# Install a series of gems from a directory:
Get-Item -Path "c:\temp\PKS\*.gem" | Sort-Object -Property Name | Foreach-Object { gem install --local "$($_.FullName)" }

Once these steps are complete, the uaac binary should be added to the Ruby\bin (Windows) or /usr/local/bin (Linux) path and can be executed by typing uaac from your console (PowerShell or Bash).

Most issues I had getting this working were because the prerequisites weren’t present. Make sure build-essential, ruby and ruby-dev are installed on Linux machines and that Ruby with the devkit and MSYS2 is installed on Windows machines.

With all of this done, I was able to manage my PKS UAA component from the CLI on my Windows and Linux machines.

Using the Puppet CA API From Windows

Puppet Enterprise exposes a number of RESTful APIs that can be used to help automate the solution and integrate it with other things. One need I’ve run into is the need to revoke and remove certificates from Puppet nodes in an automated fashion. My previous approach involved using SSH to connect to the Puppet Master server and run the puppet cert clean command, but I’m not a huge fan of that. With some effort, I found out how to talk to the API using Postman and PowerShell in a Windows environment. Postman was good for initial testing of the API, while I use PowerShell to fully automate solutions. I’ve outlined a step-by-step on how to set this up below:


The base URI for the puppet CA API is:

https://*puppet master server FQDN*:8140/puppet-ca/v1

The default port is 8140, which is configurable.


Authorization and authentication were the most difficult parts for me to figure out. Unlike the other API endpoints in Puppet, you don’t use the normal token method. The CA API uses certificate authentication and authorization is granted based on the Subject Name of the certificate your client presents to the Puppet server. By default, the ONLY machine allowed to talk to the endpoint is your Puppet Master server itself, so without modification you can’t do much with the API.

You can change the authorization rules to allow other machines to connect. You can see the configuration for this in /etc/puppetlabs/puppetserver/conf.d/auth.conf:

{
  "allow-unauthenticated": true,
  "match-request": {
    "method": "get",
    "path": "/puppet-ca/v1/certificate/",
    "query-params": {},
    "type": "path"
  },
  "name": "puppetlabs certificate",
  "sort-order": 500
},
{
  "allow": [ ... ],
  "match-request": {
    "method": [ ... ],
    "path": "/puppet-ca/v1/certificate_status",
    "query-params": {},
    "type": "path"
  },
  "name": "puppetlabs certificate status",
  "sort-order": 500
},
{
  "allow-unauthenticated": true,
  "match-request": {
    "method": "get",
    "path": "/puppet-ca/v1/certificate_revocation_list/ca",
    "query-params": {},
    "type": "path"
  },
  "name": "puppetlabs crl",
  "sort-order": 500
},
{
  "allow-unauthenticated": true,
  "match-request": {
    "method": [ ... ],
    "path": "/puppet-ca/v1/certificate_request",
    "query-params": {},
    "type": "path"
  },
  "name": "puppetlabs csr",
  "sort-order": 500
}

You’ll see an array of rules defined in this file, each one granting access to particular API endpoints. In this case, I’m most concerned with the certificate endpoints shown above. (For details on the layout of this file, see Puppet’s Docs here)

The endpoint rules that specify “allow-unauthenticated” are freely accessible without authentication, so most of this article doesn’t apply to them. Just make a call from Postman or curl like normal.

However, the certificate_status endpoint has an “allow” property, which lists all of the nodes that are allowed to access the endpoint. By default, only the name of your Puppet Master server appears here.

Normally, you could probably add entries to this list, restart your Puppet Master services, and go. The issue is this file is actually managed by Puppet, and your changes would be overwritten the next time the Puppet agent runs.

This setting is actually governed by the puppet_enterprise::profile::certificate_authority::client_whitelist setting. This can be set a couple of ways. The first way is to log into the Puppet Master GUI and do the following:

  1. Go to Inventory and select your Puppet Master server
  2. Select the “Groups” tab and click the PE Certificate Authority Group
  3. Click the “Classes” tab
  4. Set the client_whitelist parameter under puppet_enterprise::profile::certificate_authority

Normally, this would work, but when the Puppet agent runs you might get the following error:

Error: Could not retrieve catalog from remote server: Error 400 on SERVER: Duplicate declaration: Class[Puppet_enterprise::Profile::Master] is already declared; cannot redeclare on node

The workaround I found in a Q&A article suggested adding the setting to common.yaml and having Hiera set it instead. This worked well for me. My common.yaml file looks like this:

# Allows the listed machines to communicate with the puppet-ca API:

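The actual Hiera data didn't survive in this copy of the post; a minimal sketch of what such a common.yaml entry looks like is below (the hostnames are hypothetical placeholders):

```yaml
# Allows the listed machines to communicate with the puppet-ca API:
puppet_enterprise::profile::certificate_authority::client_whitelist:
  - 'puppetmaster.example.com'
  - 'automation01.example.com'
```

Each entry must match the Subject Name of the client certificate that machine will present.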
Once this was pushed to the Puppet Master server, I did a Puppet agent run using puppet agent -t from the server, and it applied the settings. Checking auth.conf again, I then saw this:

"allow": [ ... ],
"match-request": {
  "method": [ ... ],
  "path": "/puppet-ca/v1/certificate_status",
  "query-params": {},
  "type": "path"
},

Now that my servers are authorized to access the API, I can make calls using a client certificate to authenticate to the API.


The next section shows you how to setup Postman and PowerShell to authenticate to the API. If you setup your authorization correctly as shown above, you should be able to hit the APIs.

Using Postman

To use client certificate authentication to the Puppet API, you can set up Postman using the following method.

Import the cert into Postman:

  1. Click Settings in Postman
  2. Go to Certificates
  3. Click the “Add Certificate” link
  4. Add the cert using the following settings
    • Host – Specify the FQDN of the host you want to present the cert to. Don’t specify any of the URI path, just the FQDN and port.
    • CRT File – Use the PEM file in the certs directory
    • KEY File – Use the PEM file in the private_keys directory
    • NO passphrase


Once that is done, you can issue a GET command to a URI like this and get a response:

The “key” portion of the URI is required, but the word “key” is arbitrary. I think you can pretty much type anything you want there.

This yields a response much like the following:


If you get a “Forbidden” error, you either have the URI slightly wrong or you don’t have the authorization correct. The array of names in the “allow” section of the API rule MUST match the Subject Name of the certificate.

Using PowerShell

To get this to work with PowerShell, you have to export your Puppet certs as a PFX and reference them in an Invoke-RestMethod call.

To create a PFX from the certs, do the following:

  1. Install Openssl
      • If you have Git for Windows installed, you already have this. Just change to c:\program files\Git\usr\bin
  2. Run the following
C:\Program Files\Git\usr\bin\openssl.exe pkcs12 -export -out "c:\temp\" -inkey "C:\ProgramData\PuppetLabs\puppet\etc\ssl\private_keys\" -in "C:\ProgramData\PuppetLabs\puppet\etc\ssl\certs\"

Don’t specify an export password.

Once that is done, call the following Cmdlet:

Invoke-RestMethod -Uri "" -Certificate (Get-PfxCertificate -FilePath C:\temp\) -Headers @{ "Content-Type" = "application/json" }

Voila! That’s it.


Puppet File Sync Not Working – LOCK_FAILURE

I had a recent issue where Puppet was not properly syncing code from the code-staging directory to the code directory.  I verified it was pulling the new code from my Git repository to code-staging without issue.  However, file-sync was not pushing the new code to the code directory.

Here is what I was seeing in /var/log/puppetlabs/puppetserver/puppetserver.log:

2017-06-26 11:08:49,026 ERROR [clojure-agent-send-off-pool-3] [p.e.file-sync-errors] Error syncing repo :puppet-code: File sync successfully fetched from the server repo, but update-ref result was LOCK_FAILURE on 8c346001ee2f834a4be05d3d9788d2d712b212c5. Name: puppet-code. Directory: /opt/puppetlabs/server/data/puppetserver/filesync/client/puppet-code.git.
2017-06-26 11:08:54,051 ERROR [clojure-agent-send-off-pool-3] [p.e.file-sync-errors] Error syncing repo :puppet-code: File sync successfully fetched from the server repo, but update-ref result was LOCK_FAILURE on 8c346001ee2f834a4be05d3d9788d2d712b212c5. Name: puppet-code. Directory: /opt/puppetlabs/server/data/puppetserver/filesync/client/puppet-code.git.
2017-06-26 11:08:59,077 ERROR [clojure-agent-send-off-pool-3] [p.e.file-sync-errors] Error syncing repo :puppet-code: File sync successfully fetched from the server repo, but update-ref result was LOCK_FAILURE on 8c346001ee2f834a4be05d3d9788d2d712b212c5. Name: puppet-code. Directory: /opt/puppetlabs/server/data/puppetserver/filesync/client/puppet-code.git.
2017-06-26 11:09:04,103 ERROR [clojure-agent-send-off-pool-3] [p.e.file-sync-errors] Error syncing repo :puppet-code: File sync successfully fetched from the server repo, but update-ref result was LOCK_FAILURE on 8c346001ee2f834a4be05d3d9788d2d712b212c5. Name: puppet-code. Directory: /opt/puppetlabs/server/data/puppetserver/filesync/client/puppet-code.git.
2017-06-26 11:09:09,129 ERROR [clojure-agent-send-off-pool-3] [p.e.file-sync-errors] Error syncing repo :puppet-code: File sync successfully fetched from the server repo, but update-ref result was LOCK_FAILURE on 8c346001ee2f834a4be05d3d9788d2d712b212c5. Name: puppet-code. Directory: /opt/puppetlabs/server/data/puppetserver/filesync/client/puppet-code.git.
2017-06-26 11:09:14,155 ERROR [clojure-agent-send-off-pool-3] [p.e.file-sync-errors] Error syncing repo :puppet-code: File sync successfully fetched from the server repo, but update-ref result was LOCK_FAILURE on 8c346001ee2f834a4be05d3d9788d2d712b212c5. Name: puppet-code. Directory: /opt/puppetlabs/server/data/puppetserver/filesync/client/puppet-code.git.

I had no idea what this meant, and I wasn’t sure how to resolve it so I took a snapshot of my Puppet Master VM and tried a few things.

The first thing I tried was going to the directory indicated and taking a look:

ll /opt/puppetlabs/server/data/puppetserver/filesync/client/puppet-code/production.git/
total 44
drwxr-xr-x 7 pe-puppet pe-puppet 4096 Jun 26 11:03 ./
drwxr-xr-x 3 pe-puppet pe-puppet 4096 Apr 25 2016 ../
drwxr-xr-x 2 pe-puppet pe-puppet 4096 Apr 25 2016 branches/
-rw-r----- 1 pe-puppet pe-puppet 307 Jun 26 11:03 config
-rw-r----- 1 pe-puppet pe-puppet 148 Jun 26 10:28 FETCH_HEAD
-rw-r--r-- 1 pe-puppet pe-puppet 23 Apr 25 2016 HEAD
drwxr-xr-x 2 pe-puppet pe-puppet 4096 Apr 25 2016 hooks/
drwxr-xr-x 3 pe-puppet pe-puppet 4096 Apr 25 2016 logs/
drwxr-xr-x 4 pe-puppet pe-puppet 4096 Jun 26 10:28 objects/
drwxr-xr-x 4 pe-puppet pe-puppet 4096 Apr 25 2016 refs/
-rw-r----- 1 pe-puppet pe-puppet 41 Jun 26 10:28 synced-commit

/opt/puppetlabs/server/data/puppetserver/filesync/client/puppet-code.git/production.git had the same contents but for one file:


I wasn’t sure this file belonged there, so I removed it. Once I did that, the file sync service stopped throwing errors and successfully synced my files!

Hope this helps!

Disabling SSL Certificate Validation with PowerShell

I’ve run into this issue about a billion times.  Mostly, I see it when I’m coding against a web API on a device with a bad or partially-valid self-signed cert.

I’ve seen several articles on how to disable the SSL validation check, but have had only limited success with them.  I finally found an approach out there that works for all of my use cases, and wrapped a nice function around it.  I’m publishing it here in hopes it helps people out someday.

Basically, you call this function either to enable or to disable SSL certificate validation. It is safe to run multiple times in the same session and doesn’t throw any errors.

Here it is:
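The function itself was lost from this copy of the post, but the widely used technique it describes (register a certificate policy that accepts everything, guarded so repeated calls don't recompile the helper type) looks roughly like this sketch. Note this relies on the .NET Framework ServicePointManager, so it applies to Windows PowerShell; on PowerShell 6+ you'd use Invoke-WebRequest's -SkipCertificateCheck switch instead.

```powershell
function Set-SslCertificateValidation {
    <#
      Enables or disables SSL certificate validation for the current
      Windows PowerShell session. Safe to call repeatedly.
    #>
    param([switch]$Disable)

    if ($Disable) {
        # Only compile the helper type once per session; this is what makes
        # the function safe to run multiple times without errors.
        if (-not ("TrustAllCertsPolicy" -as [type])) {
            Add-Type @"
using System.Net;
using System.Security.Cryptography.X509Certificates;
public class TrustAllCertsPolicy : ICertificatePolicy {
    public bool CheckValidationResult(ServicePoint sp, X509Certificate cert,
                                      WebRequest req, int problem) {
        return true; // accept every certificate, valid or not
    }
}
"@
        }
        [System.Net.ServicePointManager]::CertificatePolicy = New-Object TrustAllCertsPolicy
    }
    else {
        # Restore the default validation behavior.
        [System.Net.ServicePointManager]::CertificatePolicy = $null
    }
}

# Usage:
#   Set-SslCertificateValidation -Disable   # turn validation off
#   Set-SslCertificateValidation            # turn it back on
```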