Indiana native. Purdue grad. Programmer / Dev Ops in trade. Dog owner. Husband and father. Have questions? Ask!

Client: Can you take a look at my machine? Every time I connect to one of the virtual servers, it just disappears from my list.

Me: That’s odd! Can you show me how you connect?

They opened the client program, and double clicked on a server name. It launched with no problem.

Me: Hm. And how do you usually disconnect?

They found the server name in the list, right clicked it, and chose “delete” from the pop-up menu.

Me: …I think I found your problem.


Why Is the Trump Administration Promoting Further Economic and Media Centralization?


The Federal Communications Commission is planning to lift ownership restrictions on local media. This would allow, for example, ownership of multiple major TV stations in the same market by the same company:

In recent years, the local TV station business has consolidated rapidly, driven by both the growing fees that cable and satellite companies pay for the right to retransmit broadcast signals—bigger station groups can extract higher fees—and increasing competition from the internet. That has led to the emergence of a handful of “super groups” like Sinclair Broadcast Group, which today reaches 45.6% of television households, according to Kagan, a media research group within S&P Global Market Intelligence.

Relaxing regulations on local TV station ownership likely would spark a “bonanza” of dealmaking among station owners, according to station broker Larry Patrick, particularly among the independent station groups that don’t share ownership with broadcast networks like ABC or Fox.

The FCC has also repealed the “main studio rule.” This rule required radio and TV stations to maintain a staffed local physical studio capable of originating programming. Regarding that, Republican FCC chairman Ajit Pai wrote:

At our October meeting, we’ll also take another step toward the long-overdue modernization of rules governing the media industry. Following up on our Notice of Proposed Rulemaking in May, we’ll vote on an order that would eliminate the “main studio rule.” This rule requires each AM, FM and TV broadcast station to maintain a main studio in or near its community of license. This requirement dates to 1939, and was enacted in part to ensure that stations would stay accessible and responsive to the public. But today, this rule is unnecessary; most consumers get in touch with stations over the phone or through electronic means, stations’ public inspection files are mostly online, and technology enables stations to produce local news without a nearby studio. Additionally, the rule can impose major costs on broadcasters. Eliminating it would make it easier for new broadcast stations to operate in small towns and rural communities. It would also allow broadcasters to spend money currently used to comply with the rule on local programming, newsgathering, and other activities to better serve the public.

If you believe permitting further media consolidation and eliminating local studios is going to result in more local newsgathering and programming, I’ve got a bridge to sell you.

If we take a step back for a minute, we see that in community after community across America, the local institutions that once sustained them as viable places have been gutted or have disappeared entirely: local banks, local stores, the local newspaper, manufacturing plants, etc. Some of these were lost to technology and productivity improvements. For others, some people blame globalization.

But while globalization is well known and much discussed as a force, there’s another one that’s less talked about. It overlapped with globalization, though it started earlier. It was accelerated by globalization, but it was happening anyway. That trend is the centralization of a number of American industries. I explained this in 2010 in a section of a post called “The Nationalization Age”, which I’ll quote in full:

Everyone knows about the [1990s era] tech revolution, but there was a concurrent development that was in many ways equally important. This was the nationalization of business.

Think again back to the 1980’s in a mid-sized or small city. Your hometown probably had three or so major locally based, publicly traded banks. Your state probably severely limited their ability to open branches, so the market was highly fragmented. Your town probably had a couple department stores that were either part of local or regional chains. This might have been true of discounters or even fast food restaurants. The local gas and electric companies were locally based. Only Ma Bell pre-1984 was a national utility, and a heavily regulated one. In short, while many industrial businesses were national in scope, there were still a huge number of industries that were incredibly fragmented into local or regional markets.

The deregulation of the 80’s and 90’s ended that. The end of restrictive banking laws put us where we are today, with a handful of major nationwide banks like JP Morgan Chase, along with a few odd surviving “super-regionals”. Utilities have been sold off. Department stores merged out of existence, perhaps most poignantly illustrated by the rebranding of Marshall Field’s flagship store in Chicago as Macy’s. Macy’s is truly America’s department store now. Wal-Mart and Target, once regional chains, are now ubiquitous. So too Walgreens, CVS, Home Depot, etc.

In short, the business landscape of your city likely changed radically during the 1990’s, as large numbers of locally based businesses, businesses whose executives formed the leadership class of the community, were bought out. (I wrote about one implication of this in my piece “The Decline of Civic Leadership Culture.”)

This also, incidentally, transformed the professional services industry. In 1990 virtually all of these industries were city office based. To be the office managing partner of the biggest office or headquarters city was a huge deal. But in the 90’s, as business changed, and as the level of business domain expertise required to integrate technology into business strategies, processes, and organizations became much, much higher, all of these industries restructured into national practices based around industries, with P&L responsibility resting with the industry sector leads. That’s one reason I spent so much time on airplanes in my career.

Of course, this disproportionately benefited large cities in the middle of the country with big airports, where you could base lots of people and fly them conveniently around. Two big winners: Chicago and Dallas.

With so many businesses now large scale, deregulation continuing in vogue, and a post-Cold War end of history euphoria in the air, the stage was set for future liberalization of international trade regimes. Your local bank or store probably didn’t care much about international markets, but Citigroup and Wal-Mart sure did.

There were multiple factors prompting the roll-ups of one sector after another, but one of them was undoubtedly deliberate government policy. In many of these sectors, state and federal regulations were specifically designed to create a fragmented market and keep institutions locally based. The idea that all the banks in your town would be owned by companies in your state capital, much less New York, was anathema.

These rollups did coincide with a nice boom in the 1990s, but since 2000 results have just plain been bad. Barack Obama was the first president since Herbert Hoover to never once hit 3 percent annual GDP growth. President Bush’s economic record was likewise dismal. Job growth in the U.S. since 2000 has averaged 0.5 percent per year, compared to 1.9 percent during the 1980s and 1.9 percent during the 1990s. (Recent years have seen better growth rates than this anemic average.) And real median incomes are lower today than in 2000. Maybe these policies, globalization, etc. didn’t cause these bad results, but results subsequent to these rollups certainly don’t give a ringing endorsement.

When we think about the rise of the coastal and global city economies, we always hear about density, talent concentrations, collisions, and many other things. What we don’t hear about is the way that we specifically eliminated government policies that were designed to keep a handful of coastal cities like New York from dominating the economic life of the country. The centralization of industries in these cities, along with the rise of global city services, etc., is a big part of what made them so prosperous today. They may not have the lion’s share of employment, but they’re extracting an outsize share of the value.

While the President doesn’t directly control the FCC, I find it amazing that the administration of a man who was elected president in large part because of the hollowing out of communities across the interior, a hollowing out driven in part by this centralization, would be promoting even more of it. Especially in media, where the collapse of newspapers and such has already had a profoundly negative effect on civic life.

All of this may lead to greater economic efficiency in some macro sense. But we’ve already seen the price paid by the loss of these local institutions. Too many cities went from being branch plant towns to being branch everything towns – with no plant anymore.

The coastal folks who are appalled at populism might do well to consider their own role in creating the conditions that brought it about. And the Trump administration should seriously reconsider any regulatory moves designed to actively facilitate further economic concentration in the country.


The 6-Step "Happy Path" to HTTPS


It's finally time: the pendulum is swinging further towards the "secure by default" end of the scale than it ever has before, at least insofar as securing web traffic goes. As of this week's launch of Chrome 62, any website with an input box is now doing this when served over an insecure connection:

[Screenshot: Chrome 62 flagging a page with an input field as "Not secure"]

It's not doing it immediately for everyone, but don't worry, it's coming very soon even if it hasn't yet arrived for you personally and it's going to take many people by surprise. It shouldn't though, because we've known it's coming for quite a while now, starting with Google's announcement back in April. That was then covered pretty extensively by the tech press as well as on this blog where I wrote about how life is about to get a lot harder for websites without HTTPS. Then back in August, Google started emailing site owners and made it very clear what was coming.

But even before that, back in January we saw both Chrome and Firefox starting to flag any page with login or credit card fields as "Not secure", so this is just continuing that march. In fact, back at that time I wrote about how HTTPS adoption has reached the tipping point and I pointed to a range of facts supporting that, including the fact that in August last year, 14% of the Alexa top 1 million sites were forcing HTTPS. But that wasn't the headline figure; rather, it was the rate of change: 12 months later, that number was 31%. Yep, more than doubled in a year.

This will only come as a surprise to folks who haven't been paying attention. Either that or those who, against all evidence, continue to argue that HTTPS is unnecessary. In August, I highlighted how SEO "experts" were advising customers against HTTPS based on fundamentally flawed reasoning. Fortunately, even these guys are seeing the light and realising that HTTPS is, in fact, now somewhat of a necessity. However, doing it right can be more difficult than many people think.

Well, it can be more difficult but it can also be fundamentally simple. In this post I want to detail the 6-step "Happy Path", that is the fastest, easiest way you can get HTTPS up and running right. Let's dive into it!

1. Get a Free Cert

This is the first thing most people think of when it comes to HTTPS - they need a certificate. There was a day when this would cost money and you'd pay a large company such as Comodo a fistful of dollars for them to issue you a cert, but those days are now behind us. There are now two primary routes you can take on my "Happy Path" and I want to detail both of them here:

Firstly, there's Let's Encrypt. They've had an enormously positive impact on HTTPS adoption by making certificates available not just for free, but in an automated fashion that takes a lot of the legwork out of installing a cert on your site.

There was a time when Let's Encrypt was a newcomer and understandably, some people were a little reserved about using them. But check out just how far they've come since just the start of last year:

[Chart: Let's Encrypt's growth since the start of last year]

The only real vocal criticism of Let's Encrypt has (unsurprisingly) come from commercial certificate authorities. I wrote about this at length in July when I talked about the perceived value of extended validation certs, commercial CAs and phishing. That's essential reading if you've previously been hit with arguments ranging from "you need an EV cert" to "free certs aren't as good" and even "Let's Encrypt helps phishing sites". Read that now if you're not already across these issues and the FUD involved.

The one major practical barrier to Let's Encrypt is the lack of first-class support in PaaS and SaaS models. Last year I wrote about what's involved in loading a Let's Encrypt certificate into an Azure app service and I concluded that as it stands, it's a risky model. Still to this day, I would not use Let's Encrypt in this way; there's a much better way...

That way is Cloudflare. I love the service that Cloudflare provides because it's not just about HTTPS. Like Let's Encrypt, it's free if all you want is a cert on your site and it's also highly automated, but where it differs is that Cloudflare is a globally distributed CDN with 118 edge nodes around the world:

[Map: Cloudflare's 118 edge nodes around the world]

What this means is that they don't just do free certs, they can also cache content, optimise traffic and block threats such as DDoS attacks. They can do this because your traffic literally routes through their infrastructure. Now on the one hand, this worries some people and frankly, those concerns are largely unfounded and I address them in my post on unhealthy security absolutism. On the other hand, when an intermediary has the ability to modify your traffic on the fly, you can do some enormously cool things and I'm going to keep coming back to those throughout the remainder of this post.

My strong recommendation is Cloudflare not because there's anything wrong with Let's Encrypt - quite the opposite, I think they're awesome - but rather because there's so much more value to be had from a reverse proxy. I run this blog through them using their free service and I also run Have I been pwned (HIBP) through Cloudflare which has made an enormously positive impact on the sustainability of the site.

Ok, so that's certs, but now we need to make some changes on the site too so let's jump into that next.

2. Add a 301 "Permanent Redirect"

What we need to start doing now is ensuring that whenever a request comes in over the HTTP scheme, the site tells the browser that instead it must request that same content over the secure scheme. We do that with the HTTP 301 response code which indicates that the resource has "Moved Permanently". This is accompanied by a "Location" response header which indicates the new URL the browser should issue the request to.

For example, imagine making an insecure request to a page on this blog. When my site receives that request, it will respond like this, with the "Location" header pointing at the same URL on the secure scheme:

HTTP/1.1 301 Moved Permanently
Location: https://…

The browser then turns around and makes a near-identical request to the first one, albeit over the secure scheme.
How you implement this depends on your framework of choice. For example, in ASP.NET you could use a URL Rewrite rule. This is a simple configuration that can go into your web.config and constitutes a "no code" fix. Obviously, you're going to tackle a PHP or a Node site differently, but you get the idea.
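The underlying logic is the same regardless of stack. As a rough sketch (a hypothetical helper, not any particular framework's API), the scheme upgrade behind such a rule looks like this:

```python
from urllib.parse import urlsplit, urlunsplit

def https_redirect(url):
    """Return (status, location) for a 301 upgrade to HTTPS,
    or None if the request is already on the secure scheme."""
    parts = urlsplit(url)
    if parts.scheme == "https":
        return None  # already secure, just serve the content
    # Same host, path and query string; only the scheme changes
    return ("301 Moved Permanently", urlunsplit(("https",) + tuple(parts[1:])))
```

Whatever rewrite module or middleware you pick is doing essentially this before anything else touches the request.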

An even cleaner way to tackle this is with Cloudflare. Remember that bit where I said there's a bunch of additional value to be had from a reverse proxy? How's this for an easy fix:

[Screenshot: Cloudflare's one-click option to redirect all requests to HTTPS]

Toggling this switch allows Cloudflare to issue the 301 response on the fly so that you don't need to modify the site itself. It doesn't matter what tech stack you're running on; you simply hit the button and the job is done.

The 301 is necessary, but it also has a couple of problems. Firstly, you're forcing the user's browser to make an extra request: every time they attempt to load any resource on the site insecurely, the browser needs to make a subsequent request to the secure scheme after the 301 comes back, which isn't great for performance. More serious from a security perspective is that the first request - the insecure one - can be intercepted and modified. This is exactly the sort of thing we're trying to protect people from in the first place, and whilst it's good that every request after that first one gets 301'd, we still need to protect the first one as well. Which brings us to HSTS.

3. Add HSTS

HSTS stands for "HTTP Strict Transport Security" - and it's awesome! I've written about it before in depth so I won't repeat everything here but for the sake of completeness in this post, we'll go through it again briefly.

The 301 situation left us with a risk in that any insecure requests could still be read by someone with access to the traffic. Sensitive data like any cookies sent with the request could be read and then the response itself could be manipulated. It's sub-optimal. HSTS changes that (to a degree) by way of a simple response header such as the one on this very blog:

Strict-Transport-Security: max-age=31536000

When returned over a secure connection, this header tells the browser that for the next 31,536,000 seconds (that's one year's worth of seconds), it may not make an insecure request to the site. If (for whatever reason) the browser then makes an insecure request - say by me explicitly typing an insecure address into the address bar - this happens:

[Screenshot: dev tools showing an HTTP 307 internal redirect followed by the secure request]

What you're seeing here is an HTTP 307 "Internal Redirect" followed by a secure request. This is the browser upgrading the request internally before sending it out over the wire. The really neat thing about this is that it avoids the problem we just had with the 301 where we could keep issuing insecure requests. Or does it?

The remaining problem with this model is known as "Trust On First Use" or TOFU; the browser needs to get one good request without it being intercepted in order to get the response header in the first place. This is where preload comes into play and you can see it in action on HIBP as follows:

Strict-Transport-Security: max-age=31536000; includeSubDomains; preload

Now this looks very similar to before except for two new directives:

  1. includeSubDomains does exactly what it sounds like it does
  2. preload enables browser vendors to bake it into the browser itself

That second item is the key because it means you can then head over to the preload submission site and submit the site for preloading. That service is run by the Chromium Project, and all the major browser manufacturers use sites submitted there to ensure that their browsers can't serve any content from those domains insecurely. Ever!
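To make the directive syntax concrete, here's a minimal sketch (hypothetical helper names; treat the one-year threshold as illustrative of the submission requirements rather than the official validator) of parsing the header and checking those preload prerequisites:

```python
ONE_YEAR = 31536000  # seconds, i.e. the max-age used on this blog

def parse_hsts(header):
    """Parse a Strict-Transport-Security value into a dict;
    valueless directives like includeSubDomains map to True."""
    directives = {}
    for part in header.split(";"):
        part = part.strip()
        if part:
            name, _, value = part.partition("=")
            directives[name.strip().lower()] = value.strip() if value else True
    return directives

def preload_eligible(header):
    """Rough check of the preload submission rules: a long max-age
    plus both the includeSubDomains and preload directives."""
    d = parse_hsts(header)
    return (int(d.get("max-age", 0)) >= ONE_YEAR
            and d.get("includesubdomains") is True
            and d.get("preload") is True)
```

Run it against the HIBP header above and it passes; against the plain max-age-only header on this blog, it doesn't.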

And just in case you're wondering about browser support for HSTS, it's very broad:

[Table: browser support for HSTS]

Now, why don't I preload this very blog? Because amongst the requirements for preloading is this one:

The includeSubDomains directive must be specified

Which is problematic because I need to run a site over the insecure scheme so that when I run my workshops I can demonstrate what goes wrong when you don't get your HTTPS right! Ideally, I need to put this on a standalone domain so that I can get this blog preloaded, and certainly that's on the cards.

In terms of implementing HSTS, it's just a response header you'll need to return upon an incoming request to the site. For the ASP.NET folks, check out NWebsec on NuGet which is made by fellow developer security MVP André Klingsheim. That makes it dead simple to set up via the web.config like so:

<strict-Transport-Security max-age="365" includeSubdomains="true" preload="true" />

Or just like with the 301 redirects, you can easily add it via a web interface if you're routing your traffic through Cloudflare:

[Screenshot: Cloudflare's HSTS settings]

Now, you definitely want to be confident that you're always going to serve traffic over HTTPS once you go down this route (especially with preload), but frankly, that's where you should be by now anyway.

Oh - and just as a very timely example of why HSTS with preload is important, the security news earlier this week was dominated by the KRACK attack against wifi networks. On that page, Mathy shows how traffic could be intercepted using a combination of his WPA2 attack and sslstrip. This is only possible because the site in question has no preloaded HSTS (even HSTS without preload would still save many returning visitors). With this in place, the traffic can't be downgraded to force HTTP connections as the browser simply won't allow it. Frankly, the whole page load experience is pretty terrible.

For all the press this exploit has received over the last couple of days, it's amazing how simple it is to secure individual websites against it: you just need traffic served securely and a single response header plus preload.

4. Change Insecure Scheme References

And now for the hard part. Actually, let me talk about what is traditionally the hard part then I'll talk about how to make it easy.

Once you go HTTPS, everything on the page must be served securely. Remember the SEO bloke in his pyjamas earlier who told people not to use HTTPS, then deleted my comment saying you should use HTTPS, then deleted his video saying you shouldn't use HTTPS, then went and implemented HTTPS on his own site but got browser warnings? Visitors to his site were loading the page over HTTPS but didn't get a padlock, didn't get the "Secure" text next to the address bar and didn't get the HTTPS scheme represented in green. In fact, the browser security indicators looked exactly like this example:

[Screenshot: an HTTPS page loaded without the "Secure" indicator due to mixed content]

This is simply because the image you see on the page above was embedded insecurely. (Incidentally, the image still loads as it's "passive content" in that it can't change anything, but try it with "active content" like a script tag and it will be blocked from loading in the first place.) Here's the root cause:

<img class="mixed" src="" alt="HTTP image">

It's simply embedding the image insecurely and there are many different ways to easily fix this:

<img class="mixed" src="" alt="HTTP image">

<img class="mixed" src="//" alt="HTTP image">

<img class="mixed" src="image.jpg" alt="HTTP image">

Any one of these will immediately solve the problem... with that one image embedded in that one location. But that's not how web pages operate; rather, they have a raft of images, style sheets, JavaScript files and all sorts of other content embedded not just once on the one page, but all over the place. When moving to HTTPS, these all need to be fixed which, yes, means a lot of changes.
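Before fixing anything, it helps to know how big the job is. A crude audit sketch (a quick regex sweep, assuming references live in src/href attributes; a real crawler would also check CSS, srcset and so on) can list the offenders:

```python
import re

# Capture http:// URLs referenced from src or href attributes
INSECURE_REF = re.compile(r'''(?:src|href)\s*=\s*["'](http://[^"']+)["']''',
                          re.IGNORECASE)

def find_insecure_refs(html):
    """Return every insecurely embedded resource reference in the markup."""
    return INSECURE_REF.findall(html)
```

Run that over your templates and you've got a punch list of insecure scheme references to change.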

But this is where Cloudflare can help again (you seeing a theme here?) and it does so with what they call Automatic HTTPS Rewrites:

[Screenshot: Cloudflare's Automatic HTTPS Rewrites option]

The image above is pretty self-explanatory and once again, when an intermediary can control the traffic, they can do some pretty awesome things with it. This is more intelligent than simply changing every HTTP source to HTTPS, though:

Only URLs that are known to support HTTPS will be rewritten. We use data from EFF’s HTTPS Everywhere and Chrome’s HSTS preload list, among others, to identify which domains support HTTPS.

This is great and it helps enormously, but whether you're manually fixing insecure scheme references yourself or delegating the work to Cloudflare, there's another problem: what happens if another service you're using makes an insecure request? I mean you might embed, say, Disqus securely on your site but what happens if their JavaScript which you're embedding from their site then loads an image over HTTP? There's another easy answer for that one, and it involves another header.

5. Add The upgrade-insecure-requests CSP

Let's go back to that demo site for a moment and play with the "upgrade" demo:

[Screenshot: the "upgrade" demo page rendering normally]

It looks normal, right? But check out the HTML source:

<img id="http-vs-https" src="http://…" title="This is an image with an HTTP source location specified. If upgrade-insecure-requests is working, the source should be rewritten to HTTPS. The image will vary depending on the outcome.">

I love the title attribute on that image because it saves me the work of explaining it :) This is all achieved by virtue of a Content Security Policy response header, otherwise known as a CSP:

Content-Security-Policy: upgrade-insecure-requests

What makes this awesome is that even if you screw up every single reference for embedded content on your site, this header will automatically fix it for you. Of all the things I'm imparting in this blog post, this is the one that I find is least frequently known and has the greatest impact on how easy it can be to implement HTTPS.

But as I alluded to at the end of the previous section on updating all your links, the joy of this CSP is that it can also fix downstream dependencies loaded insecurely by other services. Last year I wrote about how Disqus caused browser warnings to be shown on my blog and per the earlier example, it was simply because they screwed up and started embedding content insecurely in a location well beyond my control.

My website actually runs on Ghost Pro which is a hosted SaaS model and that meant I had zero control over headers - I couldn't even add the CSP via Cloudflare. However, you can also embed a CSP as a meta tag and if you view the source of this very page, you'll see the following:

<meta http-equiv="Content-Security-Policy" content="upgrade-insecure-requests">

I actually use their Code Injection feature to add this in so I didn't even need to change my template:

[Screenshot: the CSP meta tag added via Ghost's Code Injection feature]

And just to add another Cloudflare angle to this: it's not yet available at the time of writing, but Cloudflare Workers should make this a breeze.

I'll certainly write more about this feature once it lands because it's going to open up a world of opportunities and make features like a CSP in response headers dead simple to implement.

So, upgrade-insecure-requests is an awesome approach - what's not to love?! Well...

[Table: browser support for upgrade-insecure-requests; no Microsoft browser supports it]

This is frustrating, particularly for someone as Microsoft aligned as myself, and believe me when I say I continually raise this issue at every opportunity. People using Microsoft's browsers simply won't get the benefit of upgrade-insecure-requests so when we load up that test site in Edge, we see this:

[Screenshot: the test site loaded in Edge, without the padlock]

Edge is actually missing the padlock which is usually present on an HTTPS page which loads all child content securely:

[Screenshot: the padlock Edge normally shows on a fully secure page]

This is one of the key reasons why it's still so important to get embedded content referenced over the secure scheme. However, there's another angle to all this which can help you complete that happy path and yes, it even makes for a happy experience on Edge!

6. Monitor CSP Reports

One of the really neat features of content security policies is that you can configure them to report violations back to a URI of your choosing. This is (currently) achieved by using the report-uri directive and it means that when a CSP is violated, you can get a neat report of what went wrong submitted directly by the user's browser. You can see precisely what this looks like via this little demo I set up:

[Screenshot: the CSP reporting demo page]

Let's break down what's happening here: the page is obviously requested over HTTPS but the image and JavaScript are both embedded insecurely like this:

<img src="http://…/image.jpg" />
<script src="http://…/jquery.js"></script>

Yet we see green bits and a padlock in the address bar which means that everything is secure. It's secure because of the upgrade-insecure-requests CSP discussed above:

Content-Security-Policy: upgrade-insecure-requests

So far that's nothing new, but now check out the other header I've added:

Content-Security-Policy-Report-Only: default-src https:;report-uri https://…

This is a "report only" CSP which means that it doesn't enforce the policy, it merely reports on violations. The content is then set to declare that default-src (so basically the default policy for everything) requires that content be loaded over HTTPS. When an asset is found not loaded over HTTPS, a report is issued to the address after the report-uri directive (I'll come back to the significance of the host name a bit later). By using a CSP in conjunction with a CSPRO (report only), we can get the benefit of both the insecure requests being upgraded and then reported. What that means is that you see the following network requests when loading the page:
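If the directive syntax looks dense, it's just semicolon-separated directives, each followed by space-separated values. A tiny parser sketch (hypothetical helper, purely for illustration) makes that plain:

```python
def parse_csp(header):
    """Split a CSP header value into a directive -> values map.
    Valueless directives (like upgrade-insecure-requests) map to []."""
    policy = {}
    for part in header.split(";"):
        tokens = part.split()
        if tokens:
            policy[tokens[0].lower()] = tokens[1:]
    return policy
```

So the report-only policy above becomes a default-src directive requiring https: plus a report-uri directive holding the reporting address.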

[Screenshot: the network requests for the page, including two "enforce" report requests]

You'll see the initial request for the page itself, followed by a request for the image which was embedded over HTTP yet had the request upgraded to HTTPS, and then one for jQuery, again embedded over HTTP and then upgraded. Then there are the two "enforce" requests, which we're seeing because that word is the last part of the path in the report-uri directive earlier on. There are two requests because every single CSP violation causes one to be fired. These are POST requests and they contain a JSON payload explaining precisely what went wrong:

{
  "csp-report": {
    "document-uri": "https://…",
    "referrer": "https://…",
    "blocked-uri": "http://…",
    "original-policy": "default-src https:;report-uri …"
  }
}

This is the one for the image and we can see from the JSON above that the blocked-uri was an HTTP request made to the image and that the document-uri (the page that caused the violation) was our test page. I also clicked through to that page from this blog post so you see the referrer being the URL of this page too. This has everything you need to now track down the root cause of the mixed content and fix it at the source. And here's another cool thing - the reporting works in Edge too:

[Screenshot: CSP violation reports being sent by Edge]

Even though neither of Microsoft's browsers understands upgrade-insecure-requests, Edge implements most of the CSP Level 2 spec which means it's able to understand (and report on) the CSPRO which demanded HTTPS. No, this won't cause the content to be loaded securely (the image still loaded over HTTP and the script didn't load at all because it's "active content"), but now that you're getting reports you can actually do something about it. Which brings us to the final part of the "Happy Path" - Report URI.

The thing with CSP violation reports is that they can get seriously voluminous. Think about it: you have a site template with a few assets embedded insecurely and each page inherits from that template. Then you have thousands or tens of thousands or however many people coming by the site each day. It's a lot of requests. Furthermore, they're all just the JSON you saw earlier on; you still need to do something useful with that in terms of parsing and reporting on it.
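To give a feel for what "doing something useful" means, here's a minimal, stdlib-only sketch (a hypothetical tally, nothing like a full reporting service) of aggregating incoming report payloads by their blocked-uri:

```python
import json
from collections import Counter

def aggregate_reports(raw_payloads):
    """Count CSP violation reports per blocked-uri so the noisiest
    mixed-content offenders float to the top of the list."""
    counts = Counter()
    for raw in raw_payloads:
        report = json.loads(raw).get("csp-report", {})
        counts[report.get("blocked-uri", "unknown")] += 1
    return counts.most_common()
```

Even that toy version shows why aggregation matters: one broken template reference turns into thousands of identical reports.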

This is where the Report URI service comes in. This is a project stood up by my good friend Scott Helme a few years back (Scott is behind the Alexa Top 1 Million reports I referenced earlier). He makes this service available to anyone who wants to lodge their reports there and you can get into it for free. There are commercial plans available if you want to help Scott support what he does, but the free tier is enough to start working out what's going on in terms of mixed content.

For example, here's a sample report for that demo site:

So now for me as a site owner, I can review the entries on Report URI and see precisely where I've got mixed content on my site. It's all aggregated into the one place which makes it dead simple to review before heading into the code of the site and fixing any broken references. But of course, with upgrade-insecure-requests, things would only break in the Microsoft browsers anyway, so the vast majority of people have already had a seamless experience, and their browsers have kindly done the work of letting me know there's a problem anyway.

I love the way this rounds out the "Happy Path": get a free cert, implement the 301 redirect, add HSTS, change your insecure references, use the CSP to fix any of the ones you've missed in non-MS browsers then finally, sit back and watch for any violations by reporting to the free Report URI service. HTTPS doesn't have to be hard, you just have to follow the happy path 😀
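For illustration only, the header side of that happy path can be sketched as a tiny helper. The function name, the defaults and the report URI value are all hypothetical, not a prescribed configuration:

```python
def build_security_headers(report_uri, hsts_max_age=31536000, preload=False):
    """Sketch the response headers the "Happy Path" ends up with:
    HSTS to lock the browser to HTTPS, plus a CSP that upgrades
    insecure references and reports any violations."""
    hsts = f"max-age={hsts_max_age}; includeSubDomains"
    if preload:
        hsts += "; preload"
    return {
        "Strict-Transport-Security": hsts,
        "Content-Security-Policy": (
            f"upgrade-insecure-requests; report-uri {report_uri}"
        ),
    }

headers = build_security_headers("https://example.report-uri.com/r/d/csp")
for name, value in headers.items():
    print(f"{name}: {value}")
```

You'd attach these to every response in whatever framework you use; the 301 redirect from HTTP still happens separately at the server or CDN layer.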

Gotchas and Other Considerations

This can be a super-easy process and for the vast majority of sites out there, you can knock this off in an afternoon if not within an hour. But I don't want to overly trivialise it either and there are certainly various gotchas along the way.

For example, if you run a website like Stack Overflow which is the 55th most traffic'd in the world, things are somewhat trickier. Complex applications running at scale introduce all sorts of other issues.

Even for smaller apps, putting them behind Cloudflare can throw some curve balls. For example, when I moved HIBP behind them I needed to update code that records the client's IP address because the nature of a reverse proxy means I was seeing the address of their edge node. Instead, I needed to look for the CF-Connecting-IP request header they add to the inbound traffic and refer to that instead.
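That lookup can be as small as the following sketch (the function name is mine, and you should only trust the header when traffic can't reach your origin except through Cloudflare):

```python
def client_ip(headers, remote_addr):
    """Prefer Cloudflare's CF-Connecting-IP header over the socket address.

    Behind a reverse proxy, remote_addr is the proxy's edge node;
    Cloudflare adds the original client address in CF-Connecting-IP.
    """
    return headers.get("CF-Connecting-IP", remote_addr)

# Behind Cloudflare: the header wins over the edge node's address.
print(client_ip({"CF-Connecting-IP": "203.0.113.7"}, "198.51.100.1"))
# → 203.0.113.7

# Direct connection, no header: fall back to the socket address.
print(client_ip({}, "198.51.100.1"))
# → 198.51.100.1
```

Note that many frameworks normalise header casing, so check how yours exposes request headers before copying this verbatim.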

As for HSTS, it's awesome but it's also a bit of a one-way street, especially once you're using preload. You should use HSTS and you should preload, but I wouldn't begrudge anyone wanting to give it a few weeks in between going HTTPS, adding HSTS with a short max age and then eventually preloading.

There are other aspects of implementing HTTPS comprehensively I haven't touched on here either. For example, flagging cookies as "secure" so they can't be passed over an insecure connection and also updating other references to the site to use HTTPS. Think about things like social media channels and email footers - you really want them referencing the secure scheme as well. Mind you, once HSTS is rolled out with preload that matters a lot less, but you still need browser vendors to bake in that preload list, roll out updates and customers to actually take them.

A classic argument against HTTPS is that every external service you embed must also support the secure scheme. It's extremely rare that this is a problem in this day and age, but there was a time when the likes of Google Adwords didn't and obviously that would present all sorts of dramas. Do check those dependencies if you're at all uncertain they can be embedded securely.

Cyclical redirects are another one I've seen a few times, and some odd things can happen there. I've seen cases where someone redirects from HTTP to HTTPS but the HTTPS call then redirects back to another path on HTTP. It's a screwy scenario usually solved pretty quickly, but especially once combined with something like a canonical URL redirect, it can catch people out.
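A loop like that is easy to model. Here's a toy walker over a hypothetical redirect table (the URLs, the table and the function are all illustrative, not any real server's configuration):

```python
def follow_redirects(url, redirect_map, max_hops=10):
    """Walk a table mapping each URL to the URL it 301s to,
    raising if the chain revisits a URL or runs too long."""
    seen = [url]
    while url in redirect_map:
        url = redirect_map[url]
        if url in seen:
            raise RuntimeError("redirect loop: " + " -> ".join(seen + [url]))
        seen.append(url)
        if len(seen) > max_hops:
            raise RuntimeError("too many redirects")
    return url

# A clean HTTP -> HTTPS hop resolves fine:
print(follow_redirects(
    "http://example.com/",
    {"http://example.com/": "https://example.com/"},
))
# → https://example.com/

# An HTTP -> HTTPS redirect that bounces back to HTTP on another path:
loops = {
    "http://example.com/a": "https://example.com/a",
    "https://example.com/a": "http://example.com/b",
    "http://example.com/b": "https://example.com/a",
}
try:
    follow_redirects("http://example.com/a", loops)
except RuntimeError as err:
    print(err)
```

Browsers do essentially the same cycle detection before giving up with a "too many redirects" error.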

And finally, if you're going down that Cloudflare route as I've so emphatically suggested, I'd begin by making sure that everything plays nice over HTTP first. Give the site a comprehensive test and look for those little gotchas like the client IP I mentioned earlier first. It's a lot easier to iron those out first and then do the HTTPS thing than it is to try and troubleshoot everything at once.

Further Resources

Even with insurmountable volumes of evidence speaking to why HTTPS is important, there remain naysayers. Some of the comments on the blog posts I've linked to still astound me with some folks even suggesting it's a Google conspiracy theory. Frankly, I don't care because this isn't a negotiation; you either get HTTPS or your users get told the site is insecure. Don't bother arguing that you don't need HTTPS, invest that energy reading instead.

An additional favourite resource for those naysayers is HTTP vs HTTPS, which totally turns conventional wisdom about the speed cost of HTTPS on its head. No, it's not fair and no, I don't care because, as I've said before, I just wanna go fast.

And finally, I'll leave you with a resource of my own which has proven pretty popular:

There's 3 and a half hours of HTTPS training on Pluralsight and this course rocketed to as high as the 8th most popular out of a library of more than 6,000 recently. It's still rating 4.9 stars out of 5 too so if you really want to get into detail, check that out too. It turns out that people can actually find HTTPS pretty interesting 😀

Read the whole story
36 days ago
Central Indiana
Share this story

Equifax Breach Fallout: Your Salary History

2 Comments and 5 Shares

In May, KrebsOnSecurity broke a story about lax security at a payroll division of big-three credit bureau Equifax that let identity thieves access personal and financial data on an unknown number of Americans. Incredibly, this same division makes it simple to access detailed salary and employment history on a large portion of Americans using little more than someone’s Social Security number and date of birth — both data elements that were stolen in the recent breach at Equifax.


At issue is a service provided by Equifax’s TALX division called The Work Number. The service is designed to provide automated employment and income verification for prospective employers, and tens of thousands of companies report employee salary data to it. The Work Number also allows anyone whose employer uses the service to provide proof of their income when purchasing a home or applying for a loan.

The homepage for this Equifax service wants to assure visitors that “Your personal information is protected.”

“With your consent your personal data can be retrieved only by credentialed verifiers,” Equifax assures us, referring mainly to banks and other entities that request salary data for purposes of setting credit limits.

Sadly, this isn’t anywhere near true because most employers who contribute data to The Work Number — including Fortune 100 firms, government agencies and universities — rely on horribly weak authentication for access to the information.

To find out how easy it is to view your detailed salary history, you’ll need your employer’s name or employer code. Helpfully, this page lets you look that up quite easily (although if you opt to list employers alphabetically by the first letter of the company name, there are so many entries for each letter that I found Equifax’s database simply crashes half the time instead of rendering the entire list).


What’s needed to access your salary and employment history? Go here, and enter the employer name or employer code. After that, it asks for a “user ID.” This might sound like privileged information, but in most cases this is just the employee’s Social Security number (or a portion of it).

At the next step, the site asks visitors to “enter your PIN,” short for Personal Identification Number. However, in the vast majority of cases this appears to be little more than someone’s eight-digit date of birth. The formats differ by employer, but it’s usually either yyyy/mm/dd or mm/dd/yyyy, without the slashes.
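To put the weakness of a date-of-birth "PIN" in numbers, here's a quick back-of-the-envelope calculation. The 1920-1999 birth-year range is just an illustrative assumption about the working population:

```python
from datetime import date

# An eight-digit numeric PIN has 100,000,000 combinations, but a PIN
# that is really a birth date in yyyymmdd or mmddyyyy form is drawn
# from a vastly smaller space: one value per calendar day.
dob_pins = (date(2000, 1, 1) - date(1920, 1, 1)).days
print(dob_pins)                  # 29220 possible birth dates in 80 years
print(100_000_000 // dob_pins)   # roughly 3,400x smaller than a random PIN
```

And that's before an attacker narrows the guess further with a known or approximate birth year, at which point the space shrinks to a few hundred values.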

Successful validation to the system produces two sets of data: An employee’s salary and employment history going back at least a decade, and a report listing all of the entities (ostensibly, the aforementioned “credentialed verifiers”) that have previously requested and viewed this information.

Once you’re successfully “authenticated,” the system asks you to change your PIN to something more secret than your birthday. When the default PIN is changed, The Work Number prompts users to select a series of six challenge/response questions, which Equifax claims will “improve the security of your data and create an extra layer of protection on your account.”

Unfortunately, consumers whose employee history is stored by this service effectively have no privacy or security unless they possess both the awareness that this service exists and the forethought to access their account online before identity thieves or others do it first.


The Work Number does allow employers to opt for TALX’s “enhanced authentication” feature, wherein after logging in with your employer ID and PIN (often the last four digits of an SSN plus the birth year), the system is designed to require the requester to respond to an email at a work address or a phone call to a work number to validate the login.

However, I did not find this to be the case in several instances involving readers whose employers supposedly used this enhanced authentication method. In cases where corporate human resources departments fail to populate employee email addresses and phone numbers, the system defaults to asking visitors to enter any email address and phone number to complete the validation. This is detailed here (PDF), wherein The Work Number states “if you do not have the required phone and e-mail information on file, you will be prompted to update/add your phone numbers/email addresses.”


Worse yet, while companies that use this service tend to vary their approaches to what’s required in terms of user IDs and PINs, a great many employers publish online detailed instructions on how to fill out these various forms. For example, the State of California‘s process is listed here (PDF); instructions for the Health Resources & Services Administration (HRSA) are here; employees at the National Institutes of Health (NIH) can learn the steps by consulting this document (PDF). The process for getting this information on current and former UCLA employees is spelled out here. There are countless other examples that are easy to find with a simple Internet search.

Many readers probably consider their current and former salaries to be very private information, but as we can see this data is easily available on a broad spectrum of the working population in America today. The information needed to obtain it has been widely compromised in thousands of data breaches over the past few years, and the SSN and DOB on most Americans is for sale in a variety of places online. In short, if you can get these details from Equifax’s online service, so can anyone else.

Fortunately, you can reduce the likelihood that an acquaintance, co-worker, stalker or anyone else can do this by claiming your own account, changing the PIN and selecting a half-dozen security questions and answers. As always, it’s best not to answer these questions truthfully, but to input answers that only you will know and that can’t be found using social networking sites or other public data sources.

I could see this service potentially helping to create a toxic workplace environment because it offers a relatively simple method for employees to glean data about the salaries of their co-workers and bosses. While some people believe that companies should be more transparent about employee salaries, this data in the wrong hands very often generates a great deal of resentment and hostility among co-workers.

Employers who use The Work Number should strongly consider changing as many defaults as possible, and truly implementing the service’s enhanced authentication features.

October is National Cybersecurity Awareness Month, and as such KrebsOnSecurity will continue pointing readers to similar services that let anyone access your personal data armed with little more than static identifiers about you that should no longer be considered private. Although some readers may take issue with my pointing these out — reasoning that I’m only making it easier for bad people to do bad things — it’s important to understand that knowledge is half the battle: Planting your flag before someone else does is usually the only way to keep others from abusing such services to expose your personal information.

Related reading:

USPS ‘Informed Delivery’ is Stalker’s Dream
Student Aid Tool Held Key for Tax Fraudsters
Sign Up at Before Crooks Do It For You
Crooks Hijack Retirement Funds via SSA Portal
Social Security Administration Now Requires Two-Factor Authentication
SSA: Ixnay on txt msg reqmnt 4 e-acct, sry

Read the whole story
46 days ago
Really, people? Sigh.
Central Indiana
Share this story
1 public comment
43 days ago
theworknumber has been down for "maintenance" since the weekend, and of course this happens when i actually need to get data from it
Dallas, Texas

How to teach technical concepts with cartoons

1 Comment and 2 Shares

People sometimes tell me these days “wow julia, you are so good at drawing, it is so cool!”

I think this is kind of funny because, this is what happens when I try to draw animals.

But! There actually is a skill to explaining technical concepts to people with drawings. And I think I’ve become pretty good at that skill! It is just a different skill than like “drawing an elephant that looks like an elephant”

This post is about a few patterns I use when illustrating ideas about computers. If you are interested in using drawings to teach people about your very favorite computer topics, hopefully this will help you!

Let’s talk about how to structure cartoons and how to translate computer concepts into pictures!

cartooning isn’t about drawing skills

Just to emphasize it again – this is basically the entire visual vocabulary I use.

I think of tech cartooning as being about cartooning skills! I need to be good at:

  • using a very small number of words to express an idea (for example this mutexes cartoon has maybe 60 words in it)
  • breaking something down into simple concepts (“what are the key ideas you need to understand DNS?”)
  • staging relevant scenarios (“what’s a good example to use to show how a mutex works?”)

Here are some tactics I like to use when drawing!

personify the characters

I do a lot of personification/anthropomorphization – I’ll take a system and turn it into a cast of characters who talk to each other. For example, here’s a scene from Kubernetes: the kubelet component is talking to the API server

This is useful because

  1. it emphasizes that the “kubelet” and the “api server” (whatever those are) are important concepts in Kubernetes
  2. it shows you that those two components communicate with each other
  3. it’s more fun than reading a paragraph saying the same thing

Here’s part of the cast of characters from my networking zine: (a laptop! a router! an operating system! a program!)

Taking a complicated computer system and breaking down “ok, these are 3 main important characters in this system” is incredibly useful.

show a scene

The next step after just making your characters is to put them into scenes and make them interact with each other! So once you’ve established “the important characters here are the laptop, the DNS server, and the HTTP server”, you can show how they actually work together in real life.

Here’s a scene with two humans talking:

and one with two programs who are both using the same mutex:

I think this scene (with program 2 thinking “not my turn yet”) is a pretty clear way to explain what happens when a mutex is in use, and I think it’s faster to understand what’s going on than if you read a paragraph explaining the same thing.
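The same "not my turn yet" scene can be sketched in a few lines of Python with `threading.Lock` (the counter and iteration counts here are just illustrative):

```python
import threading

counter = 0
lock = threading.Lock()

def work():
    """Increment the shared counter; the lock makes each
    read-modify-write step atomic with respect to the other thread."""
    global counter
    for _ in range(100_000):
        with lock:  # "my turn" — the other thread waits here
            counter += 1

threads = [threading.Thread(target=work) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # always 200000 with the lock held around the increment
```

Remove the `with lock:` line and the two threads can interleave their read-increment-write steps, losing updates, which is exactly the situation the cartoon's mutex prevents.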

make a list

I make a LOT of lists (for example, this post itself is a “list of things I’ve learned about making comics :)“). A few examples:

Here’s part of a list of networking tools and what they’re for

a list of attributes of a Unix process

and a list of strategies for asking good questions

A few things I love about making lists:

  • you can make a list of steps (step 1! step 2! step 3!)
  • it’s a really clear structure and so they’re easy to understand
  • it’s a nice way to teach someone something new (maybe you list 10 interesting things, and they only knew about 7 of them!)
  • none of them claim to be exhaustive (I didn’t say those were all the attributes of a process!)
  • sometimes I learn surprising things while making them. For example I started listing Linux networking tools and I was really surprised by how many of them there were (I ended up listing 24 of them!) (here’s the whole list)

make a diagram

A big part of the joy of hand drawing comics is that I can really easily make diagrams to explain what I mean! No fiddling with LaTeX or graphviz or anything.

Here’s part of a diagram I made to illustrate memory fragmentation:

and a slightly more involved diagram showing the structure of a UDP packet:

I love that I can use arrows / colours to emphasize things I think are important or give extra information. Like in this UDP packet diagram I greyed out fields that I thought were less important (like the “fragment offset”, which is definitely less important to understand than the source IP address).
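As a rough companion to that diagram, the fixed eight-byte UDP header can be packed and unpacked with Python's struct module. The port numbers and payload length here are made up:

```python
import struct

# The UDP header is four 16-bit big-endian fields:
# source port, destination port, length (header + payload), checksum.
header = struct.pack("!HHHH", 53, 40000, 8 + 4, 0)
print(len(header))  # 8 bytes, always

src, dst, length, checksum = struct.unpack("!HHHH", header)
print(src, dst, length)  # 53 40000 12
```

(The source IP address the diagram grey-compares against lives in the IP header that wraps this one, not in the UDP header itself.)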

make a joke

Computers are often really confusing and surprising. This can be kind of frustrating (“what is my program even doing?!!?!“) and also kind of funny! I think all the weird stuff that happens is part of the joy of computers! So sometimes I try to make jokes.

Here’s the Kubernetes scheduler all worried because it noticed a pod that it hasn’t been scheduled. (scheduler: “OH NO! I was supposed to have done that already! julia will be mad!”)

and a silly “C is for linearizable” joke (because the C in “CAP theorem” stands for “consistent”. But “consistent” is a pretty unclear term, so it’s more precise to say that it stands for linearizable. So confusing!)

just write some text

I like using cartoons but sometimes I’ll just write a paragraph. Here’s the start of a page about dstat:

This basically just says “every second, dstat prints out how much network & disk your computer used that second”. I could have typed that! But I think writing it by hand emphasizes like “no, this is something I really love, I love it so much I wrote it out by hand and made a picture to show you!”

paste some computer output

Sometimes I want to paste and discuss some output you might see on a computer. For example, when I made my strace zine I realized that a lot of strace output is really confusing. I wanted to paste some actual strace output to talk about!

Luckily that is really easy to do in a drawing, because you can just put anything you want in it!

trace icons/logos

At the beginning I said “I can’t draw well”, which is true! But I can trace things. It’s a fun way to make up for my lack of drawing skills.

It’s useful sometimes to include logos / icons! For example here are versions I traced of the Kubernetes logo, the Recurse Center logo, Tux (the linux penguin), and a cat. The cat isn’t anybody’s logo as far as I know.

The hand-traced versions of these logos are kind of wobbly and imprecise in a way that is pretty satisfying to me, I think they look cool.

designing your comics

You have a blank sheet of paper in front you, and some information you want to convey! How do you do it? Having a few structure patterns really helps. Here are some examples:

Here’s one way of making a list:

A list of many small things:

and yet another list, here a list of steps. This one is organized into numbered panels!

This one is more of a normal comic and less of a list – it’s visually laid out with squares/rectangles like a comic, and tells a bit of a story.

And finally this one is pretty unstructured. Personally I find this one a bit hard to read/follow, I think having more structure than this is easier.

I think panels are a popular way of structuring comics for a reason, they help split up your drawing and make it clear what order the comic should be read in.

just making something a cartoon doesn’t necessarily mean it teaches what people need to know

I’m going to pick on another cartoon a bit here which I don’t really like to do but I need an example :).

There’s an extremely adorable guide to Kubernetes called The Children’s Illustrated Guide to Kubernetes.

I think this cartoon is cool and introduces a lot of important ideas. But for me personally I couldn’t understand how Kubernetes worked at all until I understood the role of etcd in Kubernetes (all the Kubernetes state is stored in etcd and every other Kubernetes component is stateless). And this cartoon doesn’t mention etcd even once! So I don’t think this cartoon would really have helped me understand Kubernetes.

I think there are kind of 2 ways to use drawings to teach:

  1. draw diagrams / cartoons that make what you’re teaching more clear
  2. use drawings to make what you’re teaching more fun

Making concepts more clear and more fun are both great goals! Making things fun can be a good way to make people pay attention and make hard concepts seem less intimidating!

But when I’m working on illustrations I find it useful to think about whether my drawings are actually helping explain the concept or whether they’re just fun (like drawing a picture of a shark when talking about Wireshark!).

In this children’s illustrated guide to kubernetes, I think the drawings mostly serve to make the content seem more fun – almost all the actual content is in the text. I think if you removed all the giraffe drawings the document would contain basically the same information! This is not a bad thing necessarily but I usually like to have more informational content in my drawings.

tools that make it easy

The tools I use today to make these are (see this interview for more)

  • a Samsung Chromebook Plus (though any Samsung tablet with an S Pen will work. Or an iPad with the Apple Pencil!)
  • the Squid app for Android (goodnotes for ipad is good too!)
  • that’s it!

Having a tablet I can draw on means I can really quickly draw something, click “share on Twitter” and immediately show it to the world. I definitely produce way more drawings with it than I did when I was working with pen and paper. And they look way better :)

drawings don’t have to be beautiful to be awesome

I started out by drawing things on paper with a pen / Sharpie and just taking pictures. They all looked way less good than everything I’ve posted above, but they were still really cool!!

For example here’s a very early drawing that I drew in pen on paper and posted to Twitter. Today I find this kind of janky & illegible but honestly when I posted it I got TONS of positive comments (evidence).

So drawings do not have to be beautiful and clean! They can be a sketchy thing you wrote on paper and that is okay.

how do you decide what’s a good subject for a tech cartoon?

Let’s take this comic on floating point I made last year! For that one, the steps were:

  1. Remember that I was really confused about floating point until I learned a few key insights from Stefan Karpinski. When I learned these things my mind was totally blown and it was so exciting!!!
    • a double is 64 bits. That means there are only 2^64 floating point numbers!!!
    • The smallest double after 2^52 is 2^52 + 1 (so 2^52 + 0.2 = 2^52).
    • This means you can’t reliably use integers above 2^53 in Javascript
  2. Think “well, those three things are really simple, I could put them in a comic”
  3. Figure out how to organize them into panels of a comic!!
  4. Don’t draw more than one page.

Here’s the final floating point comic I came up with in this example
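Those three key insights are easy to verify directly in Python (`math.nextafter` needs Python 3.9+):

```python
import math

# The gap between consecutive doubles in [2^52, 2^53) is exactly 1,
# so adding anything smaller than 0.5 just rounds back down:
print(2.0**52 + 0.2 == 2.0**52)  # True

# Above 2^53 the gap doubles to 2, so 2^53 + 1 can't be represented:
print(2.0**53 + 1 == 2.0**53)    # True

# nextafter shows the gap directly: the next double up from 2^52 is 2^52 + 1.
print(math.nextafter(2.0**52, math.inf) - 2.0**52)  # 1.0
```

This is exactly why JavaScript, whose numbers are all doubles, can't reliably count past 2^53.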

I organize a lot of my comics about some key insight / fact / surprising thing that it took me a long time to learn and was really useful to me.

Another example of this is this “how Unix permissions work” comic – like if you don’t know that ‘0644’ is a number in octal and why it maps to rw-r--r--, it’s hard to understand how permissions work. Here’s the comic about unix permissions.
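For instance, Python's stat module will do the octal-to-rw-r--r-- translation for you, and a tiny hand-rolled converter (the helper name is mine) shows why the mapping works:

```python
import stat

# 0o644 is three groups of three permission bits:
# owner rw-, group r--, other r--
print(stat.filemode(0o100644))  # -rw-r--r--  (0o100000 marks a regular file)

def rwx(mode):
    """Translate the nine low permission bits into rwx notation,
    one character per bit, highest bit (owner read) first."""
    bits = "rwxrwxrwx"
    return "".join(
        b if mode & (1 << (8 - i)) else "-" for i, b in enumerate(bits)
    )

print(rwx(0o644))  # rw-r--r--
print(rwx(0o755))  # rwxr-xr-x
```

Seeing that 0o644 is just the binary 110 100 100 read off in threes is the "key insight" move the comic is built around.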

you could make tech illustrations too

If you are interested in drawing tech cartoons, I hope this blog post gives you some ideas about how to do that! I’ve seen a lot of people making great illustrations about tech:

If you’re interested in making cartoons I’d really recommend the book Understanding Comics, by Scott McCloud, it’s a really incredible explanation of how comics work. I learned a ton from reading it.

A few other people who are doing great work in tech comics:

Read the whole story
50 days ago
Interesting tips.
Central Indiana
Share this story

USPS ‘Informed Delivery’ Is Stalker’s Dream


A free new service from the U.S. Postal Service that provides scanned images of incoming mail before it is slated to arrive at its destination address is raising eyebrows among security experts who worry about the service’s potential for misuse by private investigators, identity thieves, stalkers or abusive ex-partners. The USPS says it hopes to have changes in place by early next year that could help blunt some of those concerns.

The service, dubbed “Informed Delivery,” has been available to select addresses in several states since 2014 under a targeted USPS pilot program, but it has since expanded to include many ZIP codes nationwide, according to the Postal Service. U.S. residents can tell if their address is eligible by visiting the Informed Delivery site.

Image: USPS

According to the USPS, some 6.3 million accounts have been created via the service so far. The Postal Service says consumer feedback has been overwhelmingly positive, particularly among residents who travel regularly and wish to keep close tabs on any mail being delivered while they’re on the road.

But a review of the methods used by the USPS to validate new account signups suggests the service is wide open to abuse by a range of parties, mainly because of weak authentication and because it is not easy to opt out of the service.

Signing up requires an eligible resident to create a free user account, which asks for the resident’s name, address and an email address. The final step in validating residents involves answering four so-called “knowledge-based authentication” or KBA questions. KrebsOnSecurity has relentlessly assailed KBA as an unreliable authentication method because so many answers to the multiple-guess questions are available on sites like Spokeo and Zillow, or via social networking profiles.

Once signed up, a resident can view scanned images of the front of each piece of incoming mail in advance of its arrival. Unfortunately, because of the weak KBA questions (provided by recently-breached big-three credit bureau Equifax, no less) stalkers, jilted ex-partners, and private investigators also can see who you’re communicating with via the Postal mail.

Perhaps this wouldn’t be such a big deal if the USPS notified residents by snail mail when someone signs up for the service at their address, but it doesn’t.

Peter Swire, a privacy and security expert at Georgia Tech and a senior counsel at the law firm of Alston & Bird, said strong authentication relies on information collected from multiple channels — such as something you know (a password) and something you have (a mobile phone). In this case, however, the USPS has opted not to leverage a channel that it uniquely controls, namely the U.S. Mail system.

“The whole service is based on a channel they control, and they should use that channel to verify people,” Swire said. “That increases user trust that it’s a good service. Multi-channel authentication is becoming the industry norm, and the U.S. Postal Service should catch up to that.” 

I also wanted to know whether there was any way for households to opt out of having scanned images of their mail sent as part of this offering. The USPS replied that consumers may contact the Informed Delivery help desk to request that the service not be presented to anyone in their household. “Each request is individually reviewed and assessed by members of the Postal Service Informed Delivery, Privacy and Legal teams,” the Postal Service replied.

There does not appear to be any limit on the number of people who can sign up for the service at any one address, except that one needs to know the names and KBA question answers for a valid resident of that address.

“Informed Delivery may be accessed by any adult member of a household,” the USPS wrote in response to questions. “Each member of the household must be able to complete the identity proofing process implemented by the Postal Service.”

The Postal Service said it is not possible for an address occupant to receive emailed, scanned images of incoming mail at more than one email address. In other words, if you wish to prevent others from signing up in your name or in the name of any other adults at the address, the surest way to do that may be to register your own account and then urge all other adult residents at the address to create their own accounts.

A highly positive story about Informed Delivery published by NBC in April 2017 suggests another use for the service: Reducing mail theft. However, without stronger authentication, this service could let local ID thieves determine with pinpoint accuracy exactly when mail worth stealing is set to arrive.

The USPS says businesses are not currently eligible to sign up as recipients of Informed Delivery. However, people running businesses out of their home could also be the target of competitors hoping to steal away customers, or to pose as partner firms in demanding payment for outstanding invoices.

Informed Delivery seems like a useful service for those residents who wish to take advantage of it. But lacking stronger consumer validation the service seems ripe for abuse. The USPS should use its own unique communications channel (snail mail) to alert Americans when their physical address has been signed up for this service.

Bob Dixon, the executive program director for Informed Delivery, said the Postal Service is working on an approach that it hopes to make available to the public in January 2018 which would allow USPS to send written notification to addresses when someone at that residence signs up for Informed Delivery.

Dixon said that capability will build on technology already in place to notify Americans via mail when a change of address is requested. Currently, the USPS allows address changes via the USPS Web site or in-person at any one of more than 3,000 post offices nationwide. When a request is processed, the USPS sends a confirmation letter to both the old address and the new address.

If someone already signed up for Informed Delivery later posts a change of address request, the USPS does not automatically transfer the Informed Delivery service to the new address: Rather, it sends a mailer with a special code tied to the new address and to the username that requested the change. To resume Informed Delivery at the new address, that code needs to be entered online using the account that requested the address change.

“Part of coming up with a mail-based verification system will also let us do some additional notification that, candidly, we just haven’t built yet,” Dixon said. “It is our intent to have this ready by January 2018, and it is one of our higher priorities to get it done by then.”

There is a final precaution that should block anyone from signing up as you: Readers who have taken my advice to freeze their credit files with the four major consumer credit reporting bureaus (Equifax, Experian, Innovis and Trans Union) will find they are not able to sign up for Informed Delivery online. That’s because having a freeze in place should block Equifax from being able to ask you the four KBA questions.

By the way, this same dynamic works with other services that you may not wish to use but which require you otherwise to plant your flag of identity to prevent others from doing so on your behalf, such as managing your relationship to the Internal Revenue Service online and the Social Security Administration. For more information on why you should get a freeze and how to do that, see this piece.

Update, 3:48 p.m. ET: Added bit about how a freeze can block someone from signing up in your name.

Read the whole story
51 days ago
Central Indiana
Share this story