5am on the road, and the streets are empty – you get from point A to point B rapidly, as does everyone else awake at that hour. Several hours later there are many more cars on the road, and everyone gets where they’re going much more slowly. It’s not that there isn’t enough room in the lanes for the cars; it’s that everyone has to start and stop, reacting to the car in front of them, not just the lights. These interruptions, in essence, are the problem with high levels of concurrent work.

Concurrent work and the associated interruptions have been shown to negatively impact productivity – research shows that they raise error rates and stress levels, and that each task carries a context-switching overhead (23+ minutes to get back to full productivity in one study, 40% of overall productivity in another). We often don’t realise this because knowledge work is invisible. You can spot work piling up in a bottleneck like a traffic jam, but if work instead proceeds slowly from point A to point B, the only red flag to show you something is wrong is a poor cycle time.

WIP Limits

Smart traffic systems address this by creating lights that let one vehicle at a time onto major roads, controlling the ingress of traffic into the system. Kanban (and DevOps) has a similar concept: Work in Progress (WIP) limits. In the original Toyota manufacturing plants that spawned Lean and Kanban, workers could pull a cord when work went past the WIP limit, whereupon everyone else in the team would stop their work and ‘swarm’ on the problem.

The intent is that the ‘swarming’ team members will work together to solve the problem and feed the learning from that into improving the system. In systems with high psychological safety, this is what happens. Without that safety, the cord is likely never to be pulled, even when everyone in the system knows the issues. As a result, one of the first steps after implementing a WIP limit is to support those pushing back against work that breaks them.

Limits enforce collaboration

In addition to increasing focus by putting fewer tasks through the system, WIP limits enforce collaboration. Even if your team doesn’t ‘swarm’ as in Kanban, introducing a WIP limit that is lower than the number of people in the team means that at each stage of your process, team members will need to collaborate, by pairing or cooperating in some other manner. The lower the limit is set, the more cross-skilling the team will get, though there is a point at which cycle time will increase, and you are instead investing in team training and more considered work rather than throughput. Like all agile practices, WIP limits work best when implemented in an agile way: the team should choose the limit that makes sense for them, then inspect and adapt it as they see the results of implementing it.

A good team is a cross-functional one, which means you will have a variety of skills and specializations. But if we build a system with WIP limits and still enforce specialization, then the moment a specialist isn’t available, we can predict what will happen: work piles up behind them.

It’s not realistic to expect everyone to code or do other specialist work, but a considerable amount of the product development process can be done outside those areas of specialization, and your team will get better at this the more they collaborate. Developers should be comfortable testing each other’s work. Testers should collaborate with developers on acceptance criteria. Designers should collaborate with product owners, and so on.

Create a Pull System

Kanban was designed to be a pull system, only accepting new work as work is finished. Scrum ingests groups of tasks in pre-defined amounts. Regardless of the approach you use, the important point is to optimize the flow through the system. WIP limits are a leading indicator of full lead time, which is the time between the customer requesting a particular piece of business value and receiving it in production. Concurrent streams of work, on the other hand, are shown to reduce productivity. So the question your team needs to ask is: what is the right limit for us?

To build a sustainable system, something must finish before something new starts. Success isn’t about starting things, it’s about finishing them.

DevOps is a pretty popular term right now. It’s well established in many companies and even made it into frameworks like SAFe 4.5. But what actually is it? Most people are aware that it involves automation, but deeper understanding seems to be rare.

Depending on where you stand, you may have different views.

It’s Development and Operations

This is probably the most commonly offered explanation, but it’s also one that doesn’t convey the intent of DevOps. In practice, it’s less about Development joining Ops, and more often about development taking over operations, or operations learning to code. Your people determine your approach: if you have a highly development-oriented team, you are likely to end up with custom-developed solutions, whereas an Ops team who have upskilled are more likely to pull existing building blocks together and script them into repeatable patterns. Both approaches have merits. Either way, everyone needs to learn to code, and everyone should be prepared to take a call if the system goes down.

It’s a tool chain

The most concrete manifestation of DevOps in an organisation is an automated pathway to production – or at the very least, systems set up by a group of people who say ‘containers’ a lot. It’s very easy to look at the various mechanisms for orchestrating infrastructure, such as Chef, Ansible and Puppet, and container technologies such as Docker and Kubernetes, and say those things are DevOps. But those things are Continuous Deployment: vital to be sure, but not the full picture.

Let’s not discount them, though; without these tools, DevOps is not possible.

It’s a methodology

If technologies are the what, the principles of Flow, Feedback and Continuous Learning are the why.

Lean, hailing originally from analysis of manufacturing value streams, teaches us that you first need to make work visible, then limit the number of concurrent activities to expose bottlenecks. These Work in Progress (WIP) limits allow you to identify what is slowing the overall system down, and free up hands to address it. The best leading indicator we have of quality and customer satisfaction is a short cycle time, and this in turn is best predicted by a small batch size. By breaking work down into smaller units and processing them sequentially and fast, we do better work than by attempting five things concurrently.

The sooner you learn something is not as it should be, the cheaper it is to fix. Waterfall projects taught us this the hard way, and agile seeks to address it with iterative development. However, if you can fix an issue before an iteration is even complete, you’ll go faster still. Take static code analysis: it’s not going to catch design flaws or incorrect stories, but what it does catch, it catches instantly, in the developer’s IDE.

With all these metrics and limits, we will find problems. Problems can be addressed, and the metrics will tell us how well those changes worked. Continuous Learning is core to DevOps, since it exposes many opportunities to work on the overall system. Changing systems can be tricky, though; if there isn’t trust in a team, they will wait to be directed. If there isn’t psychological safety, they will optimise for their own well-being, not for the system’s. The easiest way to build trust is to trust, and if there is a failure, address the system, not the people.

So what actually is DevOps?

DevOps is a culture, where work in progress is limited to go fast, feedback loops are created to build knowledge early, and the learning gained is incorporated back into the overall system. It is a culture typified by high-trust, high-safety, and a scientific learning mindset. It is supported by a tool chain of automation technologies, which are in turn enabled by teams with the right training and mindset to optimise them to support the business.

With a summary like that, you can see why it gets shortened to ‘Dev and Ops working together’. But it’s worth doing the whole thing, and doing it properly. Trust me.

I’ve used Kali Linux as a daily driver on my Dell XPS 15 for most of the last year, and it works well for that purpose. There are a couple of things you need to do when setting it up to get it to run smoothly though.

Before you start

You need to change the following two settings in the BIOS. Now is a good time to set a BIOS password if you haven’t already.

  • BIOS > Secure Boot > Disabled
  • BIOS > System Configuration > SATA Operation > Switch RAID to AHCI

You can upgrade the BIOS using the boot menu and a FreeDOS-formatted USB stick containing the latest firmware .EXE. Firmware versions 1.2.10 through 1.2.16 have been associated with a series of bugs, but with December’s news about Meltdown and Spectre you will want to update to 1.6.1 or greater so that you have the mitigations for those exploits.


Install Kali Linux with a USB stick. I used Rufus on Windows to dd a copy of the amd64 ISO directly onto the stick; Etcher is another fine choice. I chose to use the whole disk – I’ll virtualize Windows rather than dual boot it.

Whilst installing, you will get a request for additional firmware – brcmfmac43602-pcie.txt – which I’ve been unable to find. Some guides reference using brcmfmac43602-pcie.bin instead, but the installer doesn’t accept that in place of the .txt file. Just skip this; wireless will work fine anyway. You may find that Bluetooth doesn’t work that well – I chose to spend $15 on an Intel AC8260 card and replaced the existing one.

After the initial installation, make sure your installation is up to date.
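The update commands themselves didn’t survive the formatting here; on Kali (as root) the standard sequence is:

```shell
# refresh the package lists, then upgrade everything installed
apt update
apt dist-upgrade -y
```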

This will take some time, and you should reboot afterwards.

Everyday user

Some programs (VLC, Google Chrome, Visual Studio Code, etc.) object to being run as root, and I want to use different programs depending on what I’m doing, so I create a normal user for daily use.
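The snippet is missing here; a minimal sketch of creating such a user (the name ‘user’ is a placeholder – substitute your own):

```shell
# create a normal user with a home directory and a sane shell
useradd -m -s /bin/bash user
passwd user
# optionally allow it to administer the system via sudo
usermod -aG sudo user
```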


Since this laptop has both Intel and NVIDIA graphics cards, installing Optimus support via Bumblebee will allow you to access the NVIDIA card for those programs that require it. Reboot after installing. In my case I had to reboot twice – it failed to boot the first time for some reason.
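The install command was lost in formatting; on a Debian-based system the usual packages are the following (an assumption on my part, matching the bumblebee config files referenced in this post):

```shell
# NVIDIA Optimus support via Bumblebee, plus the primus bridge
apt install bumblebee-nvidia primus
```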

You will need to add your everyday user to the bumblebee group as well
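Something along these lines (substitute your own username for ‘user’):

```shell
usermod -aG bumblebee user
```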

Once that’s done, it’s time to update some config files. Firstly, edit /etc/bumblebee/bumblebee.conf and change line 22 from:
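The before/after lines didn’t survive the formatting; in the default bumblebee.conf, the change on that line is setting the driver explicitly:

```ini
# /etc/bumblebee/bumblebee.conf (line 22 in the default config)
# before: Driver=
Driver=nvidia
```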

Then run ‘lspci | grep NVIDIA’ to get your graphics card’s BusID.

Then edit /etc/bumblebee/xorg.conf.nvidia, uncomment the BusID line, and update it if yours is different.
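For reference, the uncommented line should end up looking like this (01:00:0 is the typical value on this machine – substitute your own BusID):

```ini
# /etc/bumblebee/xorg.conf.nvidia
BusID "PCI:01:00:0"
```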

This should get everything working. You can see the two cards working by running:
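The commands were stripped here; one way to compare the two renderers:

```shell
# Intel card (the default)
glxinfo | grep "OpenGL renderer"
# NVIDIA card, via Bumblebee
optirun glxinfo | grep "OpenGL renderer"
```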

If you run glxgears with both, you’ll notice the performance is about the same, which isn’t right. To fix this, install VirtualGL, which has to be downloaded separately. Go to https://sourceforge.net/projects/virtualgl/files/ and download the latest amd64.deb, and install it:
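Assuming the package downloaded as below (the exact filename varies with the release):

```shell
dpkg -i virtualgl_*_amd64.deb
```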

After that, you can run glxgears / optirun glxgears, and you should see a noticeable difference. If you have an everyday user account you want to use in a similar fashion, you’ll need to add it to the bumblebee group. This now gives you the ability to use the NVIDIA card for password cracking, but note that in most cases, offloading password cracking to a cloud instance is a better approach than running it on a laptop.


So that the OS can tell what temperature it’s operating at, and control the fans, you will need to install lm-sensors and activate it.
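The commands didn’t survive the formatting; they are:

```shell
apt install lm-sensors
sensors-detect   # answer the prompts; the defaults are safe
```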

When sensors-detect asks if you want to make changes to /etc/modules automatically, say yes. Then make sure it loads on next boot:
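sensors-detect writes the modules it finds to /etc/modules, which systemd loads at boot; to load them immediately without rebooting, you can (this is my assumption of the stripped step):

```shell
systemctl restart systemd-modules-load.service
```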



Run the following to set precise screen dimensions:
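The command itself was lost; for the XPS 15’s UHD panel, an xrandr invocation along these lines tells X the real pixel density (282 is my estimate for a 15.6-inch 4K screen, not a value from the original post):

```shell
xrandr --dpi 282
```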

The HiDPI display is readable in its initial state, but if you prefer different settings, you can go into gnome-tweak and alter the scaling of the fonts and windows.

In a similar vein, to avoid a tiny GRUB screen, edit /etc/default/grub and add GRUB_GFXMODE=1024x768. You’ll probably also want to set the timeout to zero to make it boot faster. Once that’s done, run sudo update-grub.
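Putting that together, the relevant lines in /etc/default/grub end up as:

```ini
GRUB_TIMEOUT=0
GRUB_GFXMODE=1024x768
```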

You have the option of scaling Qt programs, such as VLC. I personally don’t use this setting, and instead look for GTK3 software, knowing that it will support scaling. You can scale Qt programs by creating a script in /etc/profile.d/ called qt-hidpi.sh. In that file, put:
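The script body is missing here; a sketch using Qt’s scaling variable (the factor of 2 is an assumption to match the HiDPI panel):

```shell
# /etc/profile.d/qt-hidpi.sh – scale Qt5 applications
export QT_SCALE_FACTOR=2
```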

The results are usable, but not great. Read this article for more info.

Other Stuff

Fix smartd

smartd monitors your SMART capable devices for temperature and errors. Unfortunately, NVMe support is still experimental for smartd, so it doesn’t scan for it by default, and fails on boot. You can fix this by telling it to scan for NVMe drives by adding -d nvme to /etc/smartd.conf in the DEVICESCAN line. Make the first uncommented line in /etc/smartd.conf look like this:
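That is, the Debian default DEVICESCAN line with -d nvme added:

```ini
DEVICESCAN -d removable -d nvme -n standby -m root -M exec /usr/share/smartmontools/smartd-runner
```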

Touchpad/Touchscreen Gestures

To get pinch, zoom, and other gestures working, we need to install libinput-gestures, which has some dependencies, and requires a config file.
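The install steps were stripped; per the libinput-gestures README, they are roughly (run the setup step as root, and note the input group requirement for your everyday user):

```shell
# dependencies: xdotool/wmctrl act on the gestures, libinput-tools reads them
apt install wmctrl xdotool libinput-tools git
# your everyday user must be able to read the touchpad device
gpasswd -a user input
# install from source – there was no Debian package at the time of writing
git clone https://github.com/bulletmark/libinput-gestures.git
cd libinput-gestures
./libinput-gestures-setup install
```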

Then you will need to switch to your everyday user account and create the following config file at ~/.config/libinput-gestures.conf:
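My config didn’t survive the formatting; an illustrative example mapping swipes to browser history and pinches to zoom keys:

```ini
# ~/.config/libinput-gestures.conf
gesture swipe left xdotool key alt+Right
gesture swipe right xdotool key alt+Left
gesture pinch in xdotool key ctrl+minus
gesture pinch out xdotool key ctrl+plus
```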

Then, as your everyday user account:
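The commands are missing here; they are the tool’s own setup helpers:

```shell
libinput-gestures-setup autostart   # start on every login
libinput-gestures-setup start       # start it now
```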

Enable Printing

This requires the cups service to be installed and started:
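The commands were lost in formatting; they are:

```shell
apt install cups
systemctl enable --now cups
```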

Get rid of the On Screen Keyboard

Install this Gnome Extension: block-caribou. It works around a bug where the on-screen keyboard pops up when it shouldn’t, so you won’t need the extension permanently.

Save Power

First, install the following packages: tlp, tlp-rdw and powertop. The first two are for a power-tuning daemon for laptops. You can activate it by enabling the service:
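That is:

```shell
apt install tlp tlp-rdw powertop
systemctl enable tlp
```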

Powertop is a power usage analyser, which will recommend settings that you can apply. Ideally you should create unit files and configuration changes for each recommendation, but for a quick and practical approach, you can have powertop make all the changes for you. Create the file /etc/systemd/system/powertop.service with the following content:
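The unit file content is missing here; a minimal oneshot service that applies powertop’s tunings at boot looks like:

```ini
[Unit]
Description=PowerTOP auto-tune

[Service]
Type=oneshot
ExecStart=/usr/sbin/powertop --auto-tune

[Install]
WantedBy=multi-user.target
```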

Then issue the following commands to let systemd see it and activate it:
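Those commands are:

```shell
systemctl daemon-reload
systemctl enable --now powertop.service
```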

And finally, confirm by running powertop that all settings are set to good by default.

With all of the above done, you should have a reasonably stable and working machine. There are still a few bugs that have been fixed upstream which will improve your experience as they come through to Kali. I’ll post another update if I discover anything new worth sharing.


Things that don’t work


The first step is to install Bluetooth support with apt install bluetooth and reboot. According to this post, you also need to download the Windows firmware and copy it into /lib/firmware/brcm. Enable the bluetooth service and reboot.
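In command form, that works out to something like the following – the .hcd filename is a placeholder for whatever the Windows Broadcom driver package contains:

```shell
apt install bluetooth
# firmware extracted from the Windows Broadcom driver package
cp BCM-firmware-file.hcd /lib/firmware/brcm/
systemctl enable bluetooth
reboot
```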

After a reboot, bluetooth will somewhat work. I was able to pair with a Logitech Anywhere MX 2 mouse, and use it with a small amount of lag, but I also tried Bose Soundsport wireless headphones, and they don’t work at all. I’m ordering an Intel 9260 wireless card to see if that solves the problem.




Full disk encryption requires you to enter a password on boot, and isn’t the smoothest experience. It is the best approach from a security point of view, but I’m a believer in practical compromises. With Linux, for me that means transparent home folder encryption.

First of all, make a copy of your home directory, so that this doesn’t become a fancy way of wiping your computer. Make sure you are not logged in as the user whose directory is being encrypted; otherwise you will get a failure saying that ecryptfs cannot proceed.
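The commands were stripped from this post; the usual sequence (run as root, with ‘user’ as a placeholder) is:

```shell
apt install ecryptfs-utils rsync
# backup first!
cp -a /home/user /home/user.backup
# migrate the home directory to an encrypted mount
ecryptfs-migrate-home -u user
```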

Once this is done, you should generate a key for recovery by running ecryptfs-unwrap-passphrase as the encrypted user.

For complete protection, if you can live without hibernate/resume capabilities, you can encrypt your swap space (you’ll still keep suspend/resume) by running ecryptfs-setup-swap.

Note: while you can set this up for the root user, do not do this, and make sure you only update software while the account that has had its files encrypted is logged in. Otherwise, when updates need to make changes to your .config directory, they won’t be able to, and you may be left with an unusable account. I learnt this the hard way. For safety, I also recommend adding the following to root’s .bashrc:
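The snippet itself is gone; the idea is to stop root from blindly running package updates while the encrypted home isn’t mounted. A sketch of such a guard (entirely my reconstruction, not the original):

```shell
# root's ~/.bashrc: confirm before running apt
apt() {
  read -p "Is the encrypted user logged in (home mounted)? [y/N] " ok
  [ "$ok" = "y" ] && command apt "$@"
}
```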

From this point, you should really only use apt when your encrypted user is logged in.

GDPR EU logo

The answer is maybe. There are a lot of consultants making a bundle off GDPR at the moment, selling opinions. What is definite is that we have the wording of the legislation, plus prior EU laws and guidelines, to work from.

What is the GDPR anyway?

The General Data Protection Regulation (GDPR) is the largest overhaul of EU privacy laws in the last 20 years. Because of the interconnectedness of today’s trade and the extraterritoriality clauses in this regulation, it is probably the most significant set of privacy laws in the world. The laws will be enforced from the 25th May 2018.

Can the EU fine someone outside their borders?

Yes. The GDPR is based on international law, which has been agreed and negotiated. Even if an institution has no physical presence in the EU, GDPR fines can be enforced. The maximum penalty for infringing these laws is the greater of €20 million or 4% of worldwide turnover. These ‘effective, proportional and dissuasive’ fines can sound scary, but it’s unlikely the maximum penalty would apply, except in the most egregious cases.

Who does it apply to?

If you are based in the EU, the GDPR applies. If you are not based in the EU, then it depends on whether you offer goods and services to EU citizens, free or otherwise, or you monitor EU citizens, regardless of where they are being monitored.

But does that mean that the laws apply if someone with an EU passport does business with you in a non-EU country? The laws as written say yes, but there is a limit to how practical this is, and the GDPR is not designed to put companies out of business.

This is where we enter the realm of opinion. A common school of thought goes that if the company does not target EU citizens (such as by having a German translation of their site, or a .fr domain name), does not directly do business with the EU, and does not monitor their citizens, then the GDPR does not apply. However, at this stage, it is still unknown if that will be how the laws will be applied.

What if I’m doing business with the UK?

The UK will be implementing the GDPR regardless of Brexit. Only if the UK subsequently chooses not to join the European Economic Area (EEA) will the GDPR no longer apply. If this occurs, the UK will still need to implement equivalent protections to facilitate trade with the EU.

What next?

Much of how the GDPR is implemented will depend on legal precedents set after it is implemented, and until then, opinion (backed by various levels of expertise) is all we have. If this is an issue your business is going to face, you need to have your specific circumstances reviewed by a group with expertise in EU law (and probably not from a blog article). May next year is fast approaching, and the clock is ticking.



After spending a reasonable amount of time running Linux on the Dell XPS 15 (9550), I can say that the only hardware I can’t get to work reliably is the Bluetooth support. I’ve had partial success, but really this is something I just want to work when I need it. The solution is to change out the existing Broadcom card for a cheap Intel AC 8260 card (cost me AUD $40), after which I now have good WiFi and Bluetooth support. Provided you have the right hex tool, the Dell XPS is easy to open and upgrade:

Hex screws from the laptop

The Intel AC 8260 is a 2×2 card, rather than the 3×3 Broadcom, so the last grey wire will just hang loose in the chassis – not optimal, but not a problem either. At some future point when a newer Intel 3×3 card comes out, I might upgrade again.

Wifi card connected to the laptop

I also chose to upgrade to 32GB of RAM at the same time, to assist with running virtual machines – I went with the G.Skill Ripjaws DDR4-2400 32GB (2x16GB) F4-2400C16D-32GRS SODIMM set. There are no tricks to this; it’s as straightforward as you would expect – pull the holding tabs to the side, pop out the SODIMM, and put in the new one.

Open Dell XPS 15 laptop, showing internals

All in all, this process took about 5 minutes, and was quite straightforward. Kali detected the new hardware on first boot, and WiFi worked immediately. I had to powercycle Bluetooth to get it to work:
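The command was lost in formatting; a power cycle via rfkill is my best guess at the original:

```shell
rfkill block bluetooth
rfkill unblock bluetooth
```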

And that’s it. The RAM was a little pricey, but the WiFi card was pretty cheap, and now the only issues to resolve on the laptop are scaling ones, which will be dealt with over time as more applications adopt GTK 3+.


Most of the guides I’ve found on how to do this are fairly involved, requiring you to build from source and install without a .deb package, which is messy if you ever want to change your installation. Installing Node.js is the same as for Debian:
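That is, via the NodeSource repository (substitute the release line you want for 8.x):

```shell
curl -sL https://deb.nodesource.com/setup_8.x | bash -
apt install -y nodejs
```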

The package build-essential is required for compiling and installing native packages, but it’s already included in Kali’s base image.

Anyone spending a decent amount of time in Kali is going to want a GUI code editor, and they’ll probably want something a little more advanced than gedit (which is unmaintained as of writing). My preference is Visual Studio Code, though others swear by Atom or Sublime Text.

Visual Studio Code running in Kali Linux

Since Kali is a Debian-based distribution, you can add it much as you would Debian or Ubuntu:
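The commands didn’t survive here; Microsoft’s documented repository setup is:

```shell
# trust Microsoft's signing key
curl https://packages.microsoft.com/keys/microsoft.asc | gpg --dearmor > microsoft.gpg
mv microsoft.gpg /etc/apt/trusted.gpg.d/
# add the VS Code repository and install
echo "deb [arch=amd64] https://packages.microsoft.com/repos/vscode stable main" \
  > /etc/apt/sources.list.d/vscode.list
apt update
apt install code
```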

If you have previously installed the VSCode .deb package, you will likely get some warnings that dpkg can’t remove some directories that aren’t empty, but this won’t interfere with the operation of the program. You will get a warning each time you open it as the root user, since that’s generally not a good idea on most systems – I haven’t found a way to suppress this thus far, but maybe that’s not a bad thing.

I’m recording this because I haven’t come across any other good explanations in my googling. If you are using WSL for web development, it’s likely that you are going to want to install MySQL. Unfortunately, when you run it, you start to get errors like “Can’t start server: Bind on TCP/IP port: Address already in use”. If you do get these, it’s most likely because you’ve followed a set of instructions and skipped something in the preamble – you need to be on the latest version of Windows.

I assume you have joined the Windows Insider Program and installed WSL in the first place. Next, make sure you have the most recent version of Windows using the upgrade tool.

Once that is installed, and you have been through many reboots, upgrade WSL:
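The exact commands are missing; inside WSL this means bringing the distribution itself up to date (the release upgrade is only needed if you’re still on the older Ubuntu base):

```shell
sudo apt update
sudo apt full-upgrade -y
sudo do-release-upgrade   # only if still on the older Ubuntu release
```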

If you run into any problems reinstalling MySQL, it might be this bug, and you can find suggested solutions in the comments. That got it working for me, but if you still have problems, you can always reinstall WSL from scratch by opening an administrative PowerShell window, then running lxrun /uninstall, then lxrun /install. Remember that if you have installed MySQL for Windows, you’ll need to run WSL’s MySQL on a different port (change it in /etc/mysql/mysql.conf.d/mysqld.cnf), or uninstall it.

(First posted on the Agile Australia blog, 19/07/2017)

At least once a fortnight I find myself filling out a Request for Proposal (RFP) describing my team’s development approach, and how we secure our Systems Development Life Cycle (SDLC). We have a formal security framework; frameworks like that are great for filling out RFPs, but less so when you are trying to build products in an agile way. The traditional process looks something like this:


Traditional Secure SDLC


Security is often an afterthought, and it never bolts on as well as when it has been considered from the start. Considering it early is to our advantage as well – we want each iteration to be shippable, and the sooner we find issues, the cheaper they are to fix. Fortunately, with a little thought, building with security can become an agile process itself. Consider the following:

Incorporate Threats with Personas

If you use personas, try adding some who don’t have your best interests at heart. A few of these will be example attackers, along with the motivations they might have for hacking your product, and others will be legitimate users. Try including the potential harm that your authorised personas could inflict unintentionally, such as deleting the wrong information, or setting a password of ‘123456’. Ideally, your product should protect the personas from themselves. If you do UX research, consider asking users questions about mistakes with sensitive data that the software has allowed them to make.

User Security Stories

Incorporate User stories that model the behaviour the product should have: e.g. “As a user, I want my information to be private so that other users cannot view it”. Also, consider the attackers as sources for stories – e.g. “As an attacker, I should not be able to deny access to the site, so that legitimate users can reach it”. The stories don’t need to define the controls to be implemented, so they can be written without technical security knowledge, and focus on the behaviour that’s important to users. The team can then decompose the story into specific technical requirements as the backlog is refined.

Definition of Done

Include security criteria into the Definition of Done. This is a good opportunity to include minimum security criteria (the OWASP Proactive Controls are a good reference for this) on input validation and other common security issues that should always be considered. This provides clear guidance on what should be in place before a feature is considered shippable. You will need to walk a fine line between adding too many implicit security requirements, and breaking security jobs out into their own stories so that you can still break your backlog down into manageable chunks.

The team should evaluate the delivered code at every Sprint Review, and have the authority to decide if they are done. This allows people with the best technical understanding to make a decision on whether the product is safe to ship. If all the security criteria have been met, then it’s up to the Product Owner to approve any residual risk before the iteration is shipped, or to add further backlog tasks to address those risks.

Avoid Bottlenecks

Security adds overhead, so to keep the process as lean as possible, automation needs to be used wherever practicable. Static code analysis should be incorporated into the build process so that it becomes part of the engineering process. This brings the discovery of common problems into the developer’s IDE and allows them to be fixed much faster than the same problem discovered in testing. Security should then be considered during manual code review to catch the issues that static analysis cannot find.

Automated tests should be written to verify controls in the business logic so that each user can only perform those actions they are supposed to. Further automation should include fuzzing and vulnerability scans, though this may involve changing products, as only some support being scripted into a CI/CD process.

There is quite a bit of work in setting all of this up, then tuning it so that you aren’t overwhelmed with false positives. You can’t build a massive verification infrastructure before actually working on the product, so adopt the agile approach for this too, and iteratively improve what you have automated in each sprint, then maintain it once it’s in place.

After Delivery

Regardless of what process you use, once you release your software into production, you need to have an incident response plan in place. This is likely to involve every part of the business, and for the team building the product, it means thinking through how issues will be identified, escalated, fixed and redeployed. This becomes a DevOps process and may be handled by a different team, but ideally, it should not be. It’s important that the team takes ownership of security, and learns from any incidents that occur.

Defining meaningful metrics in software development is difficult, but you need to be able to measure the impact that you are having. Some practical examples include: mean time to fix security bugs found in production, mean time between failures/application crashes in production, and mean time to recovery afterwards. You can produce many objective metrics from code analysis tools, but unless you are bringing a legacy codebase into line, they provide limited insight.

Hopefully, some of these ideas will resonate with those who are moving away from a process heavy security SDLC. As a parting thought, having a dedicated specialist in your sprint teams is ideal, but you aren’t going to get anywhere if security becomes that one person’s problem. Everyone needs to be aware of it, everyone needs training, and it needs to be a team responsibility.
