(Tough) Lessons learned from integrating Docker, ZAP-CLI, and Jenkins

I learned a great deal, both about the technology and about approaches to using it, while I worked through last quarter’s goal of getting a Dockerized OWASP-ZAP scanning instance stood up in Jenkins and running against a live server.

For the sake of the original two blog posts’ lengths, as well as to meet my Q2 goal (and give myself a much-needed breather), I decided to collect my thoughts along the way and blog about them separately.

So now I’d like to share those lessons learned, from my first milestone of getting the ZAP-CLI running in Docker, to the end goal of having the Dockerized ZAP-CLI running inside Jenkins, scanning a live website.

The Good:

Open, public docker-zap GitHub repo which allowed me to:

  • file and address Issues along the way, both for my own progress and for visibility (thanks, Grunny! I’ll be circling back soon, promise!)
  • use source control not only for myself, but to hopefully attract and encourage collaboration (patches)
  • quickly integrate with Jenkins, when the time came

Blogged the step-by-step process of getting up and running: part one and part two. This turned out to be even more useful to me than I imagined:

  • recreating the instructions step-by-step, many times, both gave me a deeper understanding and provided me with a handy reference to check against towards the end, as I made simple mistakes

Now, the Bad:

  • Thinking it would be overwhelming, I didn’t start by trying Docker with ZAP-CLI in Jenkins on the target (Red Hat Enterprise Linux) system. Instead, I learned just enough Docker to get the ZAP-CLI going on my OS X laptop (that smoke test is sketched just after this list), but then had to (re)learn quite a bit of Linux (OS-level architectural differences + commands [sudo vs. su, among them] + Docker-behavior changes) once I got to the integration phase
  • Huge platform differences:
    1. Getting the Docker Daemon and Remote/REST API (not the containers themselves, so much) up and running securely is very different on Linux and Mac (a Linux-side sketch follows this list)
    2. SELinux was enabled by default on Linux (Fedora)
  • I spent a lot of time reading about (and playing with a few of) the various Docker plugins for Jenkins, none of which I ended up using
    1. The simplest plugin, the Docker Build Step Plugin, doesn’t support the docker run command, because the Docker Remote API itself doesn’t: the API splits run into separate create and start calls (see the curl sketch after this list)
    2. I didn’t get my integrated work onto a publicly-hosted instance soon enough. This made it much more difficult for others to contextualize the issues I was facing, and to make or suggest changes live. Instead, we had to:
      1. Share code snippets
      2. Ask and try things like “What does cat /var/run/docker.sock yield?”
      3. Share screenshots
    3. Red Herrings
      1. It was hard to tell which user (jenkins vs. docker, primarily) was running which command (a quick check for this follows the list)
      2. Because the wording in the “Configuration” section of the plugin’s page mentioned slaves, I didn’t bother to specify the Docker REST API URL in the Jenkins global config (I didn’t think it was needed)
        1. Filed an INVALID Issue about that: https://issues.jenkins-ci.org/browse/JENKINS-35322
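
On the first point above: for anyone retracing the OS X detour, the local smoke test boiled down to a couple of commands. A minimal sketch, assuming the official owasp/zap2docker-stable image (which bundles Grunny’s zap-cli) and a placeholder target URL:

    # Pull the official ZAP image, which includes zap-cli
    docker pull owasp/zap2docker-stable

    # Run a quick scan in one shot; --self-contained starts (and stops)
    # the ZAP daemon inside the container for you
    docker run -t owasp/zap2docker-stable zap-cli quick-scan \
      --self-contained \
      --start-options '-config api.disablekey=true' \
      http://example.com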
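
On the platform differences (points 1 and 2): the Linux side of getting the daemon and Remote API up securely means putting the API behind TLS on a TCP port yourself, something Docker Machine quietly handles for you on a Mac. A rough sketch for the Docker versions of the day (docker daemon; dockerd in newer releases); the certificate paths are illustrative, so generate your own per Docker’s TLS docs:

    # Sanity-check that the daemon's local unix socket exists at all
    sudo ls -l /var/run/docker.sock

    # Serve the API on the unix socket AND a TLS-guarded TCP port
    sudo docker daemon \
      -H unix:///var/run/docker.sock \
      -H tcp://0.0.0.0:2376 \
      --tlsverify \
      --tlscacert=/etc/docker/ca.pem \
      --tlscert=/etc/docker/server-cert.pem \
      --tlskey=/etc/docker/server-key.pem

    # And on Fedora/RHEL, check SELinux before blaming Docker;
    # bind mounts need the :z (or :Z) label while it's Enforcing
    getenforce
    docker run -v /tmp/zap-output:/zap/wrk:z -t owasp/zap2docker-stable zap-cli --help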
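
On the docker run gap specifically: the Remote API splits run into separate create and start calls, which is what any plugin (or your own script) has to do under the hood. A sketch against the local unix socket (curl 7.40+ for --unix-socket; the container id is whatever the create call returns):

    # "docker run" = create a container...
    curl --unix-socket /var/run/docker.sock \
      -H 'Content-Type: application/json' \
      -d '{"Image": "owasp/zap2docker-stable", "Cmd": ["zap-cli", "--help"]}' \
      -X POST http://localhost/containers/create
    # => {"Id":"<container-id>","Warnings":null}

    # ...then start it by Id
    curl --unix-socket /var/run/docker.sock \
      -X POST http://localhost/containers/<container-id>/start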
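
And on the which-user-runs-what red herring: two one-liners would have settled it quickly, assuming the usual jenkins service account and docker group:

    # Which user do Jenkins build steps run as, and which groups is it in?
    sudo -u jenkins id

    # Let jenkins talk to the Docker daemon without sudo
    # (then restart Jenkins so the new group membership takes effect)
    sudo usermod -aG docker jenkins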

Finally, the Ugly:

The most basic thing I should have done upfront was to ensure that Docker would work on the intended RHEL 6-based Jenkins box, but Docker isn’t supported on RHEL 6…

  • This shouldn’t have happened; I should have known to explicitly search for two basic terms, “RHEL6 Docker”, and saved myself much lost time and shame (the check I skipped is below)
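
For the record, that check takes two commands; Docker requires a 3.10+ kernel, and RHEL 6 ships 2.6.32:

    cat /etc/redhat-release   # e.g. "Red Hat Enterprise Linux Server release 6.7"
    uname -r                  # e.g. "2.6.32-573.el6.x86_64" -- too old for Docker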

Good lessons learned:

  • Get as close to your target system as early as possible
  • Get a public repo out there, if you can
  • Just as important (and potentially even more so), get an accessible-to-others instance up and running as soon as possible. It will also help tell you whether your approach should change, which can save a lot of time and headache down the road, versus trying to retrofit and reconcile two or more disparate technologies at the end

A huge lesson well-learned, especially when “just trying to get it working”: I spent too much time trying to “get the plugin to work,” rather than understanding the problem and trying to get things up and running without the plugin (which ended up both being cleaner, code-wise, and helping me learn what I was actually setting up). Specifically, check whether, without any plugins, you can (a plugin-free build step is sketched after this list):

  • expose the API and its functionality you need
  • on the host you need
  • so others can access it and wrap or build their own workflows on top of it
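
To make that concrete, here is roughly what the plugin-free version looks like as a plain “Execute shell” build step in Jenkins. The DOCKER_* values and TARGET_URL are placeholders for your own daemon endpoint, client certs, and a job parameter:

    # Point the stock docker client at the remote daemon's TLS-guarded API
    export DOCKER_HOST=tcp://docker-host.example.com:2376
    export DOCKER_TLS_VERIFY=1
    export DOCKER_CERT_PATH=/var/lib/jenkins/.docker

    # Same scan as on the laptop; no Jenkins Docker plugins involved
    docker pull owasp/zap2docker-stable
    docker run -t owasp/zap2docker-stable zap-cli quick-scan \
      --self-contained \
      --start-options '-config api.disablekey=true' \
      "$TARGET_URL"

And anyone else with the client and certs can run the exact same thing from their own machine, which is the “others can wrap their own workflows” payoff.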

So that I’m not misunderstood: plugins are fine, but make sure you understand which problem(s) they solve that you (and potentially your peers) can’t solve yourselves. And, furthermore, that ages-old KISS principle is (still) there for a good reason!
