Last week in the US the FCC privacy regulations were repealed, which, amongst other things, allows ISPs to track your internet usage and sell it to third parties. It’s a good time to think about privacy.

Windows 10 doesn’t have the best record on privacy. Most app teams need to get data about their users to improve their products, and Microsoft is no different in that respect. If you want to look deeper into the issue, you can read Microsoft’s reasoning for their data gathering, and the EFF’s criticism of it.


Improve your Windows Privacy

There are multiple tools to turn off Windows 10 telemetry, depending on what services you are prepared to go without. There is a slightly melodramatic naming convention for these tools that ever so subtly hints at what their authors might say on this topic if you got a few beers into them.

  • Destroy-Windows-10-Spying adds host entries to block telemetry servers, and shuts down a range of Windows tasks that try to report your data
  • O&O ShutUp10 gives you a fast way to disable all the privacy-affecting settings in Windows, and provides guidance with each one. Don’t tick everything, especially the ones with red exclamation marks next to them
  • fix-windows-privacy will disable a wider range of tracking via the registry, including removing OneDrive

Remember to re-run these after any major Windows update, as Microsoft has turned some of these tracking settings back on in the past.

If you have an NVIDIA card, it sends telemetry home as well, but it seems to be mostly harmless so far. Instructions to turn it off are here.

Depending on where you are living, the sites you visit may also be logged by your ISP, for government use. In Australia, that metadata is held for two years, in the UK it’s 1-2 years, and if you live in the US it’s now a commercial product that can be sold to, well, anyone really.

A VPN is the only real defence against this, but it is of limited use if you still use your ISP’s DNS for name resolution. You can reduce the amount of data collected about you by selecting a DNS provider that does not keep logs, and that uses the dnscrypt protocol to sign communications, making responses harder to spoof. Note that dnscrypt does not provide privacy without a VPN.

For a simple solution, you can change your DNS servers to OpenDNS or Google DNS. Both keep logs, which isn’t ideal, but they aren’t exactly known for handing them over. A better solution is Simple DNSCrypt, which gives you non-logging options, and implements the dnscrypt protocol.


Improve your browser privacy

Your browser broadcasts a lot of information. If you are signed in on Facebook, and you visit another site that has placed a Facebook link on their page, Facebook knows about it.

There is a ‘Do Not Track’ setting in most browsers these days, but the best approach is to install EFF’s Privacy Badger extension, which will detect and block sites tracking you. Privacy Badger is available for Chrome and Firefox. If you use Safari, consider installing Ghostery instead. What if you’re using IE? Stop using IE. There. I fixed it for you.

While you are there, you should install HTTPS-Everywhere and uBlock Origin (Chrome / Firefox) to remove potentially malicious ads and upgrade insecure connections where possible.


Improve your social media privacy

Make sure you are happy with the list of apps connected to each of your social media accounts, because each of them is likely to be recording as much information as possible.

And if you live in the US, I’d also recommend opting out of the various services that index information on you from publicly available records. This article eloquently explains how to do that.


The last point I’d make about privacy is that it’s something that is important to maintain, even when you have nothing to hide. If 99% of mail was postcards, envelopes would be suspicious. There are plenty of people with legitimate reasons not to want their privacy invaded, and by protecting your privacy, you protect theirs.

I recently decided to change my laptop over to Kali Linux. The Dell XPS 15 is a great laptop, but it has had a number of issues running Linux over the last few months. This time around it seems there have been enough upstream changes that you can get Linux running smoothly enough for everyday use.


Before you start

You need to change the following two settings in the BIOS. Now is a good time to set a BIOS password if you haven’t already.

  • BIOS > Secure Boot > Disabled
  • BIOS > System Configuration > SATA Operation > Switch RAID to AHCI

You can still upgrade the BIOS using the boot menu and a flash stick, but versions 1.2.10 through 1.2.16 of the firmware have been associated with a series of bugs, so if you are going to update, make sure it’s to 1.2.18.



Install Kali Linux with a USB. I used Rufus on Windows to dd a copy of the amd64 ISO directly onto the USB stick. I chose to use the whole disk – I’ll virtualize Windows rather than dual boot it.
Whilst installing, you will get a request for additional firmware – brcmfmac43602-pcie.txt, which I’ve been unable to find. Some guides reference using brcmfmac43602-pcie.bin instead, but the installer doesn’t accept that in place of the .txt file. Regardless, wireless works fine, so I’ll figure that out later.
After the initial install, make sure your system is up to date.
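The update commands themselves didn’t survive the import; on a Debian-based system like Kali, the usual sequence would be:

```shell
apt update        # refresh the package lists
apt full-upgrade  # upgrade everything, resolving changed dependencies
```
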

This will take some time, and it’s worth rebooting afterwards.


Since this laptop has both an Intel and an NVIDIA graphics card, installing Optimus support (via Bumblebee) will allow you to access the NVIDIA card for those programs that require it. Reboot after installing. In my case I had to reboot twice – it failed to boot the first time for some reason.
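The install command isn’t shown in the imported post. Bumblebee is the usual way to get Optimus working on Linux, and on a Debian-based system the packages would typically be installed along these lines (the package names are an assumption on my part):

```shell
apt install bumblebee-nvidia primus
```
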

Once that’s done, it’s time to update some config files. Firstly, edit /etc/bumblebee/bumblebee.conf and change line 22 from:
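The before/after snippet is missing from the import. In the stock bumblebee.conf, the line in question is the Driver setting, which is blank by default – assuming that’s the line the original referred to:

```
# /etc/bumblebee/bumblebee.conf
# before:
Driver=
# after – tell Bumblebee to use the proprietary NVIDIA driver:
Driver=nvidia
```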

Then run ‘lspci | grep NVIDIA’ to get your graphics card’s BusID. Mine is:
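The output didn’t survive the import; on an XPS 15 it would look something like this (the BusID of 01:00.0 and the exact card model are illustrative – check your own output):

```
01:00.0 3D controller: NVIDIA Corporation GM107M [GeForce GTX 960M] (rev a2)
```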

Then edit /etc/bumblebee/xorg.conf.nvidia, uncomment the BusID line, and update it if yours is different.
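Assuming the illustrative BusID above, the relevant line in xorg.conf.nvidia ends up looking like this:

```
# /etc/bumblebee/xorg.conf.nvidia, inside Section "Device"
BusID "PCI:01:00:0"
```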

This should get everything working. You can see the two cards working by running:
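The commands are missing from the import; one common way to compare the two cards is to query the active renderer with and without optirun (an assumption – the original may have used something else):

```shell
glxinfo | grep "OpenGL renderer"          # Intel card (the default)
optirun glxinfo | grep "OpenGL renderer"  # NVIDIA card, via Bumblebee
```
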

If you run glxgears with both, you’ll notice the performance is about the same, which isn’t right. To fix this, install VirtualGL, which has to be downloaded separately. Download the latest amd64 .deb from the VirtualGL site, and install it:
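The install command is missing from the import; for a downloaded .deb it would be along these lines (the filename is illustrative – use whichever version you downloaded):

```shell
dpkg -i virtualgl_*_amd64.deb
apt install -f   # pull in any missing dependencies, if dpkg complains
```
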

After that, you can run glxgears / optirun glxgears, and you should see a noticeable difference. If you have an everyday user account you want to use in a similar fashion, you’ll need to add it to the bumblebee group. This gives you the ability to use the NVIDIA card for password cracking, but note that in most cases, offloading password cracking to a cloud instance is a better approach than running it on a laptop.



So that the OS can tell the temperature it’s operating at, and control the fans, you will need to install lm-sensors and activate them:
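The commands didn’t survive the import; the standard sequence is:

```shell
apt install lm-sensors
sensors-detect   # probe for sensors; answer the prompts
sensors          # confirm temperature readings appear
```
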

When sensors-detect asks if you want to make changes to /etc/modules automatically, say yes.


The HiDPI display is readable in its initial state, but I prefer some scaling. Open the GNOME Tweak Tool, go to Fonts and set the scaling factor to 1.25, then Windows and set the scaling factor to 2.

In a similar vein, to avoid a tiny GRUB screen, edit /etc/default/grub, and add GRUB_GFXMODE=640x480. Once that’s done, run sudo update-grub. Higher resolutions are available, but they don’t look great.

Qt programs, such as VLC, will also render with tiny controls. You can improve this by creating a script in /etc/profile.d/. In that file, put:
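The script contents are missing from the import. The usual fix is to set Qt’s scaling environment variables – the filename below is my own choice, and which variable takes effect depends on your Qt version:

```shell
# /etc/profile.d/qt-hidpi.sh (the name is arbitrary; it just needs to end in .sh)
export QT_AUTO_SCREEN_SCALE_FACTOR=1  # Qt 5.6+: scale automatically per screen
export QT_DEVICE_PIXEL_RATIO=2        # older Qt 5 versions: fixed 2x scaling
```
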

The end result isn’t perfect, but it’s very usable. See this article for more info.


Everyday user

Some programs (VLC, Google Chrome, Visual Studio Code, etc.) object to being run as root, and I want to use different programs depending on what I’m doing, so I create a normal user for daily use.
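The commands are missing from the import; a minimal version would be something like the following (the username is a placeholder, and the bumblebee group membership matches the Optimus setup above):

```shell
useradd -m -s /bin/bash -G sudo,bumblebee user  # create the account with a home dir
passwd user                                     # set its password
```
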


And that’s it! Kali should be ready to fill with your preferences and utilities of choice. If I run into any further issues, I’ll update this article.


Security ‘hardening’ is the process of raising the baseline security of a device. I harden every device I use. It’s not my intention to provide a hardening guide here (I’ve linked several good ones at the end), but I did want to go through some of the resources available if you need to do this for a group of computers (your organisation, for example).

Locking things down

When most people think of security hardening, they picture covering the basics – uninstalling programs that aren’t needed, installing the ones that are, getting any available updates and adding an antivirus program. Hopefully this also includes a fresh Windows installation, checking the BIOS settings, and adding some sort of full disk encryption (BitLocker, FileVault, etc.). Depending on your approach it might also include EMET and a variety of vendor-based solutions.

But where do you go from there?

There are a number of settings you can change to improve security in Windows 10, but some of them will be reset any time there is a major Windows upgrade. The one type of setting Microsoft seems to honour over time is anything set by Group Policy Objects (GPOs).

This should be familiar territory for most systems administrators, and you can get secure baseline settings for each Windows 10 build from Microsoft at their Security Guidance site. Be aware that, depending on your requirements, Microsoft’s settings will probably not go far enough, since they want to get telemetry from your systems. This isn’t sinister, but it should be understood.

This makes a good starting point; the next step should be to source additional settings advice from the organisations below, then finish with a manual inspection of the policy settings.


Building a Baseline

Various Governments offer advice on what a secure baseline should look like. Settings/GPOs are part of this, but aren’t the only steps that should be taken. Here are some guides from the countries I currently deal with:


The Australian Signals Directorate provides high-level advice in the form of their Information Security Manual, but once it gets down into the details, it directs the reader to the Whole-of-Government Common Operating Environment build guidelines, the public version of which is only for Windows 7 SP1, and still in draft. This is apparently produced by the Department of Finance, whose cyber security credentials I am unaware of. In practice, I expect the ASD consults directly with the organisations they are protecting, rather than publishing their defaults.



The NSA has provided a significant quantity of advice, including information on Windows 10, broken down into short advisories. Unfortunately this doesn’t provide a comprehensive blueprint for building a security baseline, unless you want to read all 112 documents and assemble something cohesive out of them.

Also produced by the US government, NIST provides baseline settings, including importable GPOs, but it doesn’t yet include Windows 10. NIST also produces a range of standards (SP 800-53, etc) which are considered an industry benchmark, but they are also some of the least readable.

The USA is also home to a non-profit organisation, the Center for Internet Security, which does produce baselines for Windows 10, including importable GPOs. This is the best advice I’ve found thus far.


Probably my favourite of the government guidance websites, the UK government’s National Technical Authority for Information Assurance (CESG) has produced a readable Windows 10 guide. It’s still relatively bare-bones, and doesn’t include importable GPOs, but it’s still ahead of the curve, since it actively attempts to communicate the risks and solutions in a concise format.


Manual Review

My preference is to build a custom baseline that fits what you do (press Win + R and run gpedit.msc to review individual settings). A quick search through Google shows a range of resources for Windows 10 hardening, but if you take one at random, you are trusting that it is complete and correct. That’s not to say they aren’t of use, but confirm everything before you add it to your baseline configuration. If you are thinking this sounds like a lot of work to do and keep up to date, you are correct.

If you are just securing your own machines, consider Tron Script as a starting point.


Securing the User

Ultimately, the easiest point of attack will always be the user. There is a limit to how much of this you can address via a secure baseline, but you can enforce policies on access, on mobile devices, and so on.

If that user is you, you should at a minimum be using a recognised, commercial VPN when outside your home/office network, and enable two-factor authentication (2FA) for any service you use. I tend to advise people to start with the least important services first, since that increases the chance they will cover all their social media accounts. There is a comprehensive list available of which services can have 2FA enabled, and via what methods. If you are securing a group of other people, then there is significantly more to do, which is beyond the scope of this post.


Rolling it out

If you are imposing these limitations on someone else, then make sure they are involved in the decision process, and accountable for the end result. You can add a significant amount of protection without sacrificing much usability, and if you start with a locked down baseline, and roll back protections depending on what is required, you can achieve a reasonable compromise. Lastly, make time to keep it up to date – these things change.


More info:


PCI DSS is the Payment Card Industry Data Security Standard, and it is required for any merchant, payment processor, or service provider that interacts with cardholder data. I recently went through the process of implementing this standard, and I thought I would share some of my observations on the process.


Do your due diligence – Several times I heard statements to the effect of “Surely XYZ payment provider is compliant – they’re owned by a bank!” and I found myself nodding in agreement. Then on checking the VISA and MasterCard websites, it turns out that some of the people who claim to be compliant are not. Maybe they were once, but it’s a big process to keep up to date with new versions of the standard, and clearly not everyone does it. The chain of compliance in PCI isn’t always as good as you would expect, so check who you are working with. Often it’s just a difference between trading name and the registered PCI name, but you don’t know unless you find out.

Similarly, I had assumed an existing secure architecture would be able to be plugged into PCI DSS without much modification. It didn’t turn out to be that simple, and a lot of reasonable assumptions we had made about our vendors turned out to not be entirely correct. For example, I learned that Azure SQL databases can’t be firewalled from Azure services owned by other customers, including people who set up a new trial account. There are other means of performing traffic isolation, but that isn’t something you want to find out just as you are going into implementation.


Communication is oxygen – Emotions tend to run high when change occurs, especially with a security framework that most stakeholders won’t understand unless they have taken time to read it, which isn’t a realistic expectation. Without frequent and tailored communication, it can seem to others that PCI threatens their productivity and the stability of their processes.

This applies to external stakeholders too; depending on their familiarity with the standard, they may need to be educated on what they need to provide. If you can give effective, concise information to the people maintaining those relationships, then the process of ensuring stakeholders are compliant can be made less combative, and will happen faster. With PCI DSS, this can add up to a big impact on the overall length of time it takes to become compliant, since your company is only one part of the puzzle.


Don’t write a novel – Reams of documentation are not necessary; effective processes are. I’ve come across a number of people who champion putting together a giant slab of documentation, to cover every possible scenario. If your aim is purely certification, then that is an approach that will get you there. But people don’t read giant slabs of information, especially about security. Build a central matrix of where you meet the compliance documentation requirements, then locate the instructions for each process with that process. If you can back this up with workflow automation and some well thought out procedures, your policies will be followed because they are the path of least resistance.


Whilst compliance isn’t the same as security, PCI DSS does create a good baseline to work from, and is a reasonable standard to hold other companies to. Where you go after that depends on what your company needs from its security program, but before you move on to another standard, consider some basic steps such as running a vulnerability scanner across your full internal network, not just your CDE (cardholder data environment). Compliance works best when it’s partnered with practical tests.


More info:

I’ve spent the last weekend attending Ruxcon 12, a technical security conference in Melbourne. For the benefit of those who weren’t there, and because it helps me consolidate my own thoughts, I’d like to offer the following review.


Ruxcon 12


All of the presentations focused on technical detail, and ranged from quite accessible through to highly specialised. One example of an accessible presentation was a talk by James Forshaw of Google’s Project Zero, who ran through how he had downloaded documentation from MSDN and searched it for API calls with ‘reserved’ parameters for future use. These calls are a fruitful hunting ground for vulnerabilities, and he walked through the process of using differences between the documentation and implementation that led him to report two zero days to Microsoft. On the other end of the scale, there was an excellent fuzzing presentation by Richard Johnson of Cisco Talos, who presented a fuzzing framework that has recently incorporated Intel PT (Processor Trace) and American Fuzzy Lop, and which would be hard to follow for anyone not already somewhat familiar with the area.

The conference ran for two days and was held in the CQ function centre on Queen St. The age profile of the attendees ranged from early twenties to late fifties, with the majority clustered around the late twenties to early thirties. Most people were quite friendly, though if I’m being honest, the crowd was light on extroverts. There were lots of pentesters, incident responders and reverse engineers.

Is there much here for someone who isn’t directly working on the tools? I would say yes. Most talks featured demonstrations of recently found zero days, and described the research process in some detail. For anyone who hasn’t had much experience on the red team side of things, it’s a very useful perspective to add.

Some of the presentations don’t provide actionable information (unless you are reverse engineering malware) but they do give an interesting look behind the scenes at some of the services we use in enterprise. For example, Sean Park of Trend Micro gave a presentation on using neural networks with Fourier transforms to detect malware. He ran through how malware is normally detected in ‘outbreaks’ being transmitted over email, and how they go about building a template to detect that malware. Building these templates is made extremely difficult by the metamorphism built into the malware, which is where his research has been applied. His team have captured over 2000 binaries using honeypots, and use a neural network that leverages the Fourier transform to compare machine behaviours, identifying the same malware across different patterns.

There were talks on infrastructure, such as a presentation by Trevor Jay from Red Hat on the recent vulnerabilities that have been found in containers, and what defences containers add or expose. The overall thrust of the talk was to persuade the attendees to go bug hunting, as the codebase for containers is still relatively immature, and the bugs being found are still large and, to some extent, reasonably basic. For those who use VMs rather than containers, there was a presentation by Qiang Li of Qihoo 360 on QEMU escapes that identified 50 bugs in the last year, 30 of which have resulted in CVEs. Layered defence is probably the key takeaway here, with Trevor Jay suggesting that if you need to use containers in a multi-tenant environment, having one VM per container may be inefficient, but it does provide the isolation necessary to have confidence in preventing an attacker from moving laterally between containerized applications.

One of my favourite presentations was on the topic of dangers within AWS, presented by two Atlassian employees, Daniel Grzelak and Mike Fuller. They outlined a range of issues, from users mistaking the AuthenticatedUsers group to be their users as opposed to all AWS accounts (including new signups), to assumptions in security via roles. When you grant a 3rd party a role within your account, you are actually granting anyone in their account access to that role, including anyone who they have granted access to a role within their account, and so on. Worse, role assumptions are only logged in the assumer’s account, so you will have no logs of who is assuming that role within your environment, only that it happened. They also detailed issues such as CloudTrail not logging a number of things, including Route 53 calls and Lambda functions, and some of the dangers of using pre-canned AWS roles, which can be updated to include permissions to new API calls when Amazon updates them, even if you never intended the 3rd party to get them.

If you haven’t been to a convention like this before, you should expect presentations with a lot of code examples in C and Python that interact directly with syscalls, as well as a lot of disassembly. If that isn’t your thing, don’t let it scare you off – many of the presenters summarise the information presented very effectively.

If anyone is interested in going to Ruxcon next year, let me know – it’d be great to meet up.


More Info:

Checking things as part of due diligence is rarely the most fun activity in the world, but it does have a habit of turning up some surprising things. I’ve been doing some compliance checking for PCI DSS recently, and it turns out a lot of the providers I thought were PCI DSS compliant (and claimed to be) aren’t.

MasterCard and VISA (doesn’t work in Chrome) both maintain authoritative PCI DSS lists, so if you use any payment providers, it’s worth checking them there. A lot of the institutions that we take for granted to be compliant are nowhere to be found on those lists. I was surprised to find that Microsoft Azure isn’t among them.

It turns out there are extenuating circumstances in Azure’s case: they have been audited to the standard by Coalfire, they just aren’t part of the program… I do feel like there is some marketing spin in there somewhere; my suspicion is that all-Windows infrastructure doesn’t lend itself to single-tenanted environments at the scale Azure needs, so they probably multi-tenant more of their systems than PCI DSS is comfortable with.

The other thing that has got me thinking is the discovery that Azure SQL servers can be connected to by anyone inside Azure, whether they are part of your organisation or not. As stated in Azure’s documentation:

To allow applications from Azure to connect to your Azure SQL server, Azure connections must be enabled.

Theoretically that means anyone could spin up an Azure VM and try to connect directly to your database, something most enterprises would be deeply uncomfortable with. I still haven’t formed any long-term conclusions, as this is something I’m still researching, but it is food for thought.

Due diligence is not something to skip.

The unthinkable has happened, and it looks like Linux is coming to Windows. I think a lot of people have imagined what the conversation was like when that decision was made. It’s a great idea. The Microsoft of old positioned Linux as a competitor, but really it’s just a tool, and one they can add to their toolset in a way that no one else can. Having a separate blog on Cygwin doesn’t make much sense in light of this news, so I’ve imported all the posts, with the intention of continuing my main blog on things that interest me: technology, security, digital trends and cat pictures.

According to Verizon, 9.4% of breaches last year occurred through vulnerabilities in web applications. A lot of these vulnerabilities were SQL injections and the like, which really shouldn’t happen these days, especially when you consider that most professional companies should be using a framework for development. However, many of the other potential vulnerabilities can be reduced by tightening the scope of your server config.

This is where a great tool comes in. It’s a project of Scott Helme’s, and it does what it says on the label – checks the headers of your server for security improvements.

Example of output
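You can also inspect a site’s headers yourself from the command line (example.com is a placeholder):

```shell
# -s: silent, -I: fetch headers only; grep for the common security headers
curl -sI https://example.com | grep -iE 'strict-transport-security|content-security-policy|x-frame-options|x-content-type-options'
```
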

(I’m not using public key pins because LetsEncrypt certificates only last 3 months, and I’m going for low maintenance… until I can automate it)

Using this site to check around, it becomes clear that, since HTTP headers are invisible to most people, they aren’t being applied. There is a lot of security in the world that is just theatre… and in the physical world, that does work. Unfortunately for the digital world, it is possible to check every door and try to pick every lock. Security consultants will have work for many years to come, but their typical clients will be people who have already suffered a breach. There is a natural cognitive bias against risks that we haven’t personally experienced, and it’s alive and well in cybersecurity.

By the way, if you use Apache, check this out for config examples.
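Since the linked examples aren’t reproduced here, this is the general shape of the relevant Apache directives (the values are illustrative, not a recommended policy, and mod_headers must be enabled):

```
Header always set X-Frame-Options "SAMEORIGIN"
Header always set X-Content-Type-Options "nosniff"
Header always set Strict-Transport-Security "max-age=31536000; includeSubDomains"
Header always set Content-Security-Policy "default-src 'self'"
```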

This isn’t really a Cygwin post, but this site has now been given an encryption certificate via Let’s Encrypt. The whole process on Debian – investigating what had to be done, cloning the git repo, and running the single command to create, retrieve and install 5 security certificates – took about 3 minutes. Easily the most impressive security service I’ve seen.
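The exact commands aren’t in the imported post; at the time, the process looked roughly like this (the --apache flag is an assumption based on the era’s letsencrypt-auto client – substitute your web server’s plugin):

```shell
git clone https://github.com/letsencrypt/letsencrypt
cd letsencrypt
./letsencrypt-auto --apache   # obtain and install certificates interactively
```
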

mintty is a fantastic terminal program; it has been the default with Cygwin for some time now. There are a range of others, such as xterm and rxvt, but mintty does the trick for me. You can change all the settings by right-clicking on the window and going into ‘Options’, but that just modifies a file called .minttyrc in your home directory, so you have the alternative of using a text editor if you wish. Mine goes like so:
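The original .minttyrc didn’t survive the import; an illustrative version using the options discussed below (the colour values are Solarized Dark’s background/foreground, and the font is Anonymous Pro):

```
Font=Anonymous Pro
FontHeight=11
BackgroundColour=0,43,54
ForegroundColour=131,148,150
CursorColour=220,50,47
```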

What this does, amongst other things, is set the terminal to use the colours from the ‘Solarized’ colour scheme by Ethan Schoonover. Colour schemes are a personal thing, but this is a good choice. Another way of creating a colour scheme, or selecting from several other good options, is to use one of the online scheme designers, then export the result using the ‘minTTY’ option.


You may also notice above a reference to the ‘Anonymous Pro’ font, which I am a big fan of. From the font’s readme file:


Anonymous Pro is the work of Mark Simonson, and you can find the font at his site here.