Retiring the Green Padlock

Matt Holt
Aug 2, 2017

The infamous padlock in the URL bar of browsers has, for over a decade, been the tell-tale sign of a secure connection to a website. Understandably, most Web users don’t think in those terms, so browsers have added help text like “Secure” (which Chrome shows very prominently) or “Secure Connection” (which Firefox shows with a click) to try to convey the meaning of said padlock.

While this isn’t wrong, it can be misleading. Phishers increasingly use commodity domain-validated certificates (or even more exotic wildcard certs) to add legitimacy to their fake sites, so while a connection to a website may truly be secure, the site itself is a trap, and the innocuous lock gives a false sense of security. Worse yet, some phishing scams use Google AMP to get a legitimate-looking URL with a green padlock and an actual hostname of google.com in the URL bar.

Phishing isn’t the only problem. Genuine sites may store passwords inappropriately, sell your data to advertisers, or (unknowingly) run compromised servers which distribute malware or spy on users. In other words, saying a site is “secure” is a tall order, and the meaning of the word secure is ambiguous anyway. Does “secure” imply private? Does it mean safe? What exactly are you secure against?

Browsers essentially restrict their use of the word in this context to mean the connection between the browser and the website, along with all the connections made for subresources and perhaps even the content of the page (such as login forms and credit card fields). But most users don’t know what this means. They don’t know that a website and a connection to that website are different things. They may not even know what a connection is. The current padlock icon does nothing to indicate a “connection” like the good old days of dial-up:

We don’t see icons representing “connections” much anymore.

However, the modern Web is complex. It’s not sufficient to consider only the connection anymore when deciding if a site is secure. The browser is the user’s agent: it acts on behalf of the user, and users must trust their agent to help them make good decisions as they navigate the Web.

There’s that word again: trust. Maybe we shouldn’t be trying to indicate security, but rather trust. Perhaps instead of communicating security, we should communicate risk. So, while the padlock remains an iconic indicator of security, consider instead a trust indicator to take its place.

The Trust Indicator

The Trust Indicator, a name I’ll use for the purposes of this fantasy, is designed to keep the strong aspects of the padlock — in that it still signifies whether the properties and credentials of all connections for the page are verified — while improving on the weaknesses mentioned above.

It can theoretically be a “drop-in” replacement for the padlock as far as end users are concerned and shouldn’t require any overhauls to browsers: everything a trust indicator needs is already built into the major browsers. It doesn’t depend on external services or lists like SmartScreen or Safe Browsing, and it can be used for any and all sites on the Web, not just the top N sites.

Users would not need (as much) training to interpret the Trust Indicator because it appeals to human aesthetics for communication, and the output is more intuitive than a slash through the scheme of the URL. It is also more descriptive than the presence or absence of a padlock: it conveys information about the context of a connection as well as the connection itself. It could even be extended to evaluate the actual site in more depth.

The Trust Indicator would appear in the same place in the URL bar as we’re used to:

Location of the Trust Indicator (which is itself not shown here)

The job of the Trust Indicator is to inform the user whether the page they’re viewing is trusted from the perspective of the browser, which is the user’s agent. It thus needs to make a decision, and it is limited to a purely technical perspective: the only way a computer can make an assessment is through technical measures. Even though the Trust Indicator can explain its decision when clicked, users will still have to employ their own sense for any higher-level synthesis.

Trust Levels

Like the green padlock, a trust indicator makes its decision based on the connection, credentials presented, and even the contents of the page (such as the presence of certain form fields). But a trust indicator also references browser history and how the page was accessed. These factors, carefully considered, lend themselves to one of these three conclusions:

  • Trusted
  • Not Trusted
  • Error

Each decision has its own color and shape. The colors evoke reactions such as reassurance or caution, and the shapes aid those who cannot perceive color well, as well as design contexts where color is limited.

The three trust levels
  • Trusted = green circle. This icon has no sharp angles or corners. The shape is soothing, and the color is a reassuring “OK” or “Go ahead”.
  • Not Trusted = orange triangle. Three sharp corners draw attention as a warning indicator. The yellow-orange color implies a lack of confidence in the site, but not necessarily that something is wrong. The user should be cautious, double-check the site address, and mind their activities on this site.
  • Error = red octagon. The eight sides are reminiscent of US stop signs, and the jagged corners grab attention as a blocker shape. Red signifies danger: this site is unsafe because something is technically wrong with the page or its connection.
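
To make the decision space concrete, here is a minimal sketch in Go of how the three levels might be represented. The package and names are my own invention for illustration, not anything from a real browser codebase.

```go
package trust

// Level is the conclusion the Trust Indicator reaches for a page.
type Level int

const (
	Error      Level = iota // red octagon: something is technically wrong
	NotTrusted              // orange triangle: proceed with caution
	Trusted                 // green circle: connection and context check out
)

func (l Level) String() string {
	switch l {
	case Trusted:
		return "Trusted"
	case NotTrusted:
		return "Not Trusted"
	default:
		return "Error"
	}
}
```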

I’ll show how these may look in the UI, but first let’s talk about how the Trust Indicator actually makes its decision.

Assessing Trust

What are the policies for deciding trust? They can vary; there are likely multiple good (and bad) policies. The ideas I’m proposing here are just that: ideas. No doubt this needs a lot of discussion and scrutiny. These are just my jottings to get the pot stirring.

Also note: just as with the current security indicators, the rules/thresholds are in a period of transition. These guidelines are presented as what I would consider to be the ideal future, even if a generous transition period is needed in practice. It’s the overall ideas that I think are worth consideration here.

Conditions for TRUSTED

  • The page and all its resources must use modern TLS
  • Each certificate must be trusted and not expired
  • Each certificate must have valid OCSP either stapled or available
  • The page’s certificate must have a reasonably short lifetime (< 1 year?)
  • The host is in the user’s browser history from more than M minutes ago (the time constraint prevents phishers from using redirects to merely add a hostname to the history)
  • Dock the trust score if the domain has more than 3 labels or more than 1 hyphen (perhaps with the exception of punycode encodings)
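
To illustrate, here is how two of these conditions might look in code, continuing the hypothetical trust package sketched earlier. The thresholds (one year, 3 labels, 1 hyphen) come straight from the list above; the rest, including skipping the punycode exception, is assumption.

```go
import (
	"crypto/x509"
	"strings"
	"time"
)

// certLifetimeOK reports whether the page’s certificate has a reasonably
// short lifetime (“< 1 year?” per the condition above).
func certLifetimeOK(cert *x509.Certificate) bool {
	return cert.NotAfter.Sub(cert.NotBefore) <= 365*24*time.Hour
}

// hostnameDocksTrust reports whether the domain has more than 3 labels
// or more than 1 hyphen. (A real policy might exempt punycode, whose
// labels begin with "xn--", as the condition suggests.)
func hostnameDocksTrust(host string) bool {
	return len(strings.Split(host, ".")) > 3 || strings.Count(host, "-") > 1
}
```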

Conditions for NOT TRUSTED

  • Page and/or its subresources use HTTPS, but the cryptography is weak
    -OR- a subresource does not use HTTPS.
  • Site (hostname) is not in browser history
  • Page was accessed from outside the current tab OR from an insecure (untrusted) site either within or outside the current tab
  • Page emits more than 2–3 redirects before settling
  • Dock the trust score if the same path and/or page title was found in browser history but for a different hostname
  • Boost the trust score if the domain was typed manually (a lot) or pasted (a little less)

Conditions for ERROR

  • Page is plain HTTP
  • Something is wrong with the TLS connection or the credentials (bad key exchange, expired or untrusted cert, lack of OCSP, name mismatch, etc)

Hopefully some of the advantages of this are obvious. For example, phishing sites are rarely accessed by manually typing in the address. That’s why accessing the page from an external tab or application is trusted less than a page whose address was typed out.

Exactly how browsers combine these conditions (the && and ||) and how much they weigh each one in relation to the others is left as an exercise for the implementer. (These details will be super important at that time.)

The general idea is to increment the trust level based on the properties of that particular page access — not just the connection isolated from its context.
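
For illustration only, here is one entirely made-up way to combine them, continuing the sketch: hard requirements gate straight to Error, while the softer contextual signals adjust a score. Every weight and threshold below is a placeholder, not a recommendation.

```go
// Signals collects the inputs described in the condition lists above.
type Signals struct {
	ModernTLS       bool // page and all subresources use modern TLS
	CertsValid      bool // trusted, unexpired, OCSP stapled or available
	InHistory       bool // host in history from more than M minutes ago
	TypedManually   bool // address typed by the user (boosts a lot)
	Pasted          bool // address pasted (boosts a little less)
	SuspiciousHost  bool // too many labels or hyphens
	ExcessRedirects bool // more than 2-3 redirects before settling
}

// Evaluate combines the signals into a trust level.
func Evaluate(s Signals) Level {
	if !s.ModernTLS || !s.CertsValid {
		return Error // hard failures short-circuit
	}
	score := 50
	if s.InHistory {
		score += 25
	}
	if s.TypedManually {
		score += 15
	} else if s.Pasted {
		score += 8
	}
	if s.SuspiciousHost {
		score -= 20
	}
	if s.ExcessRedirects {
		score -= 15
	}
	if score >= 70 {
		return Trusted
	}
	return NotTrusted
}
```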

Considerations for First Visits

These guidelines are a little harsh on first visits to legitimate sites. To be Trusted, a site has to be in the browser history for some time, meaning first visits to genuine sites would be marked Not Trusted, which no site owner would like.

To remedy this, we could introduce a fourth trust level, Gaining Trust, or maybe New Trust. The icon would be a green circle like Trusted, but not filled in. The next time the user visits the site (in a later session), it will be fully Trusted. However, earning the green circle at all — even New Trust — requires that the page be accessed in a way that is not suspicious. In other words, the other conditions still apply to New Trust.
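
Continuing the sketch, the fourth level could be a small adjustment applied after the main evaluation. What exactly counts as a “session” or a prior visit is left open here.

```go
// NewTrust is the hollow green circle: trusted on first contact, but
// not yet confirmed by a return visit.
const NewTrust Level = 3

// applyFirstVisit downgrades a would-be Trusted result when the host
// has never been seen before; a later visit earns full Trusted.
func applyFirstVisit(l Level, seenBefore bool) Level {
	if l == Trusted && !seenBefore {
		return NewTrust
	}
	return l
}
```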

An alternative to the empty circle is to hide the trust indicator entirely for that session. The “https” in the URL could still be green, but a missing trust indicator might be a jarring omission for users accustomed to seeing it almost everywhere.

The UI

“Just show me the thing!” Okay, okay: here are some mockups.

We could also bring back some familiar iconography if that is better:

Extending the Trust Policies

There are more conditions that could be considered. For instance, a user might wish to be warned about a site in the future by blocking it manually, much like blocking phone numbers. Sure, browser extensions already do this, but this could be baked into the trust policy and used in evaluating future decisions, resulting in an Error trust level.

You could also augment these policies with extended validation that happens asynchronously, so as not to block or slow down page loads. (I mean checks beyond the TLS handshake itself, like revocation checks.) Such validations might include querying external blacklists, CT logs, domain registration/renewal dates, and correlating untrusted sites with their IP space and web host. Of course these all have their issues, but I’m simply suggesting that the capability to extend the trust policies is there.
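
A sketch of what that could look like, again in the hypothetical package: the page renders immediately, and the indicator is downgraded only if a background check fails. Both check functions here are invented stubs standing in for real network lookups.

```go
// extendedValidation runs slow checks off the critical path and
// downgrades the indicator via the callback if any of them fail.
func extendedValidation(host string, downgrade func(Level)) {
	go func() {
		checks := []func(string) bool{onBlacklist, missingFromCTLogs}
		for _, failed := range checks {
			if failed(host) {
				downgrade(NotTrusted) // a manual user block could map to Error instead
				return
			}
		}
	}()
}

// Hypothetical stubs standing in for network-backed lookups.
func onBlacklist(host string) bool       { return false }
func missingFromCTLogs(host string) bool { return false }
```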

Quantification and Global Consistency

One other issue is that one user may not see the same trust level as another, even for the same page at the same time. This is because the conditions for being fully trusted rely on an individual’s browser history and how the page was accessed.

To address this, trust levels could be reduced to a number in [0, 100]. Two values would be computed under the hood: a “global” value, which is presumably the same for every client making connections with that server and does not depend on an individual’s specific history or page interaction (this would be exposed only to developers for debugging), and a final trust score, which folds in the individual factors and is the value revealed to users who click on the Trust Indicator for more information. A brief summary of the factors above, along with their component scores, could be presented there, breaking the score down if desired. In this way, developers could still reference a “global” value that is theoretically consistent for everyone.
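
In sketch form, reusing the Signals type from earlier: the global score depends only on what the server presents, and the final score folds in the personal context. The split between the two, and all the weights, are again assumptions.

```go
// globalScore depends only on server-side properties, so it is
// theoretically the same for every client. Exposed to developers only.
func globalScore(s Signals) int {
	score := 0
	if s.ModernTLS {
		score += 40
	}
	if s.CertsValid {
		score += 40
	}
	if !s.SuspiciousHost {
		score += 20
	}
	return score
}

// finalScore folds the individual's history and page access into the
// global value, clamped to [0, 100]. This is the value a user would see
// after clicking the Trust Indicator.
func finalScore(s Signals) int {
	score := globalScore(s)
	if s.InHistory {
		score += 10
	}
	if s.TypedManually {
		score += 10
	}
	if s.ExcessRedirects {
		score -= 20
	}
	if score < 0 {
		return 0
	}
	if score > 100 {
		return 100
	}
	return score
}
```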

Sorry, I don’t have mockups for this (yet?), or an algorithm for producing the numeric value. Details…

In addition, it would be nice if each browser agreed to implement the same rules and keep them in sync, so we wouldn’t have to interpret the indicator differently depending on the browser, as we do now with the security padlock.

Closing Thoughts

Browsers actually do a pretty good job at helping to keep everyday users safe and in-the-know while browsing, but as the Web gets more complex, we’ll have to evolve our communication just a bit. I feel like something along these lines is a good step in the right direction, as unofficial and hypothetical as it is.

Does the Trust Indicator solve all the problems? Nope. Are there still ambiguities about what that single little picture next to the URL means? Yep. And as mentioned, a lot of implementation details are left unspecified. But hopefully this hypothetical, high-level framework proposal lays groundwork for better explanations and protections in the future.

If you liked this post, you can take action. Start by putting your own site on HTTPS and automating the renewal of your certificates. I recommend the Caddy web server for this purpose. And we’re always looking for sponsorships from those who want to give the gift of privacy.

I hope this post has stimulated some thoughts and motivation for working to make interacting with the Web better for people everywhere.

Special thanks to Vincent Lynch and Eric Mill for their feedback on a draft of this article.
