FIDO Alliance – Two-Factor Authentication Framework

The FIDO Alliance is a consortium of top technology corporations (Microsoft, Google, Oberthur, NXP, PayPal, etc.) aiming to create standardized, stronger authentication, with the specific goals of “Passwordless Authentication” (UAF) and “Second Factor Authentication” (U2F).  Essentially, they want to standardize how companies provide secure access to their web resources as well as how users prove their identity to those companies.

U2F describes how compliant hardware should behave in order to increase security by introducing strong two-factor authentication into the mix (“something you have” + “something you know”).  Many companies are already working toward this goal, but without any sort of unifying platform, which creates extreme vendor lock-in for the companies deploying these solutions.


The “Test Of User Presence” Button

One snippet of the specification draft that really caught my eye was the requirement for compliant hardware to have a button on the device itself.  This button is how a user “activates” the U2F device for use in authentication.

The U2F device has a physical “test of user presence”. The user touches a button (or sensor of some kind) to “activate” the U2F device and this feeds into the device.

In summary, the user will have to physically touch the button to register (and again later to authenticate), and the browser may also warn the user along the way.
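To make that concrete, here’s a rough sketch (Python; the function name is mine, not the spec’s) of the byte string the U2F raw-message draft describes a device signing during authentication.  Note the dedicated user-presence byte: that is how “the button was touched” actually reaches the website.

```python
import hashlib
import struct

def authentication_signature_base(app_id: str, client_data: bytes,
                                  user_present: bool, counter: int) -> bytes:
    application_parameter = hashlib.sha256(app_id.encode()).digest()  # 32 bytes
    challenge_parameter = hashlib.sha256(client_data).digest()        # 32 bytes
    presence = b"\x01" if user_present else b"\x00"                   # 1 byte
    return (application_parameter            # which origin this is for
            + presence                       # test-of-user-presence result
            + struct.pack(">I", counter)     # anti-cloning counter, big-endian
            + challenge_parameter)           # ties it to this login attempt
```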


Site-Specific Public/Private Key Pairs

The U2F device and protocol need to guarantee user privacy and security. At the core of the protocol, the U2F device has a capability (ideally, embodied in a secure element) which mints an origin-specific public/private key pair.

A U2F device does not have a global identifier visible across online services or websites.

Each website a user “enrolls” with using U2F will trigger the hardware device to generate a private/public key pair specific to that website.  Later, when the user wants to sign in to the website, they “activate” the hardware using the button or sensor, and the device digitally signs some data and sends it back to the website for verification (the website uses the stored public key to confirm the user possesses the correct private key for that site).  This not only secures the exchange but also provides some anti-phishing benefit: if a user is tricked into providing a password to a site posing as Google.com, for instance, that password is a fairly useless token on its own, as the digital keys remain bound to the real Google.com and are still required to access the user’s actual information.
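Here’s a minimal Python model of that enroll/sign flow, using ECDSA on P-256 (the curve the U2F drafts specify) via the pyca/cryptography package.  The Device class and its per-origin key table are my own simplification; real tokens wrap keys into opaque key handles rather than storing a table.

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

class Device:
    def __init__(self):
        self._keys = {}  # origin -> private key (simplification; see note above)

    def register(self, origin: str):
        """Mint a fresh key pair for this origin; only the public half leaves."""
        self._keys[origin] = ec.generate_private_key(ec.SECP256R1())
        return self._keys[origin].public_key()

    def sign(self, origin: str, challenge: bytes) -> bytes:
        """Sign the website's challenge with the key minted for that website."""
        return self._keys[origin].sign(challenge, ec.ECDSA(hashes.SHA256()))

# The website stores the public key at enrollment, then verifies signed
# challenges at login -- proof of possession of the matching private key.
device = Device()
pub = device.register("https://accounts.google.com")
challenge = b"random-server-challenge"
sig = device.sign("https://accounts.google.com", challenge)
pub.verify(sig, challenge, ec.ECDSA(hashes.SHA256()))  # raises InvalidSignature on a forgery
```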

While I applaud their suggestion of using a hardware “secure element” or “secure cryptoprocessor”, I would be over the moon if this were an actual requirement in the specification and not just an “ideal” implementation of it.  There’s really very little argument against using smartcard technology, and I truly believe that not requiring an on-board secure cryptoprocessor weakens the overall structure of the U2F framework.

Also, who uses the phrase “mints” when discussing the generation of cryptographic keys?  Any-who…


Anti-MITM (Man-In-The-Middle) Protection

Say a user has correctly registered a U2F device with an origin (website/service) and later, a MITM on a different origin tries to intermediate the authentication. In this case, the user’s U2F device won’t even respond, since the MITM’s (different) origin name will not match the Key Handle that the MITM is relaying from the actual origin.

If the user’s hardware device receives an incorrect/unexpected “hash” (essentially a digest of the website’s protocol + hostname + port), it will simply not respond.  This stops the most common phishing/MITM attacks right off the bat.  That’s the beauty of defining a specification that describes not only the user-side authentication but also the service-side operations.
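A toy sketch of that “wrong origin, no response” behavior (names are mine; a real device encodes the application parameter into the opaque key handle itself rather than keeping a lookup table):

```python
import hashlib
import os

class OriginCheckingDevice:
    def __init__(self):
        self._registrations = {}  # application parameter -> key handle

    def register(self, origin: str) -> bytes:
        app_param = hashlib.sha256(origin.encode()).digest()
        key_handle = os.urandom(32)
        self._registrations[app_param] = key_handle
        return key_handle

    def authenticate(self, origin: str, key_handle: bytes):
        # A MITM relays the key handle from the real origin, but its own
        # origin hashes to a different application parameter -> no match.
        app_param = hashlib.sha256(origin.encode()).digest()
        if self._registrations.get(app_param) != key_handle:
            return None  # the device simply does not respond
        return b"signed assertion"  # actual signing elided; see earlier sketch

device = OriginCheckingDevice()
handle = device.register("https://google.com")
assert device.authenticate("https://google.com", handle) is not None
assert device.authenticate("https://evil.example", handle) is None  # MITM gets silence
```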

If an attacker were to compromise your web browser itself and successfully trick U2F into thinking they were, in fact, Google.com, they still wouldn’t get very far.  The U2F device would receive a “valid” request to sign data for Google.com and would do just that: use its private key to sign the challenge.  But that signed response is bound to Google.com’s origin and to that specific challenge, so it is only of use to the real Google.com.  The impostor can’t cash it in, because without the legitimate Google’s private key it cannot actually stand in for Google.com in the encrypted (TLS/PKI) session that carries the exchange.


Multiple Users Can Use One U2F Device Per Site

Note that a U2F device has no concept of a user – it only knows about issuing keys to origins. So a person and their spouse could share a U2F device and use it for their individual accounts on the same origin. Indeed, as far as the U2F device is concerned the case of two users having accounts on the same origin is indistinguishable from the case of the same user having two accounts on that origin. Needless to say, the general case where multiple persons share a single U2F device and each person has accounts on whatever origins they choose is similarly supported in U2F.

Sharing accounts/devices is never a great idea, but this particular framework does provide a decent “no knowledge” approach to multi-user/single-device support.  It should be interesting to see how it’s actually implemented, though.
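Conceptually it’s just this (a hypothetical sketch; real devices are typically stateless): storage is keyed by key handle alone, with no user field anywhere, so two spouses’ accounts and one person’s two accounts look identical to the token.

```python
import hashlib
import os

class UserAgnosticDevice:
    def __init__(self):
        self._keys = {}  # key handle -> (application parameter, key material)

    def register(self, origin: str) -> bytes:
        app_param = hashlib.sha256(origin.encode()).digest()
        key_handle = os.urandom(32)  # fresh, unlinkable handle per enrollment
        self._keys[key_handle] = (app_param, os.urandom(32))
        return key_handle

device = UserAgnosticDevice()
first = device.register("https://example.com")   # spouse #1's account
second = device.register("https://example.com")  # spouse #2's (or the same person's second) account
assert first != second  # independent credentials, no notion of "user"
```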


One User Can Use Multiple U2F Devices Per Site

U2F does not limit the user to have a single device registered on a particular account on a particular site. So for example, a user might have a U2F device mounted permanently on two different computers, where each U2F device is registered to the same account on a particular origin – thus allowing both computers to login securely to that particular origin.

Let’s just hope nobody really pays attention to the scenario of the guy permanently mounting his U2F device on multiple computers… I can see about 3 ways that could go horribly awry.
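On the relying-party side, supporting multiple devices is straightforward.  A hypothetical sketch of the schema: an account simply holds one registration per enrolled U2F device, and a login succeeds if any of them produces a valid signed assertion.

```python
accounts = {
    "alice": [
        {"key_handle": b"desktop-token-handle", "public_key": "pk-desktop"},
        {"key_handle": b"laptop-token-handle",  "public_key": "pk-laptop"},
    ],
}

def key_handles_for_login(username: str) -> list:
    """Offer every registered key handle; whichever device is present answers."""
    return [reg["key_handle"] for reg in accounts.get(username, [])]

print(key_handles_for_login("alice"))  # either of alice's devices is acceptable
```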


The Lack Of Commitment / Foresight

When I hear about companies creating authentication frameworks or solutions, I always examine what their security “spread” is.  What is that company/framework willing to sacrifice in order to please everyone whilst attempting to maintain a high level of security?  In U2F’s case, it seems they’re willing to do a little sacrificing in order to please token manufacturers producing low-cost hardware.  This is not necessarily a bad thing, but it might be a Bad Idea when setting a global standard for strong authentication.

With these considerations in mind, a relying party needs to be able to identify the type of device it is speaking to in a strong way so that it can check against a database to see if that device type has the certification characteristics that particular relying party cares about. So, for example, a financial services site may choose to only accept hardware-backed U2F devices, while some other site may allow U2F devices implemented in software.

In practice, we do not want to prevent other protocol compliant vendors, perhaps even those without any formal secure element, perhaps even completely software implementations. The problem with these non-secure-element based devices, of course, is that they could potentially be compromised and cloned.

Quite right… so why allow it?  Allowing less secure solutions to muddy up the specification really rubs me the wrong way.  Commit!  Don’t mess around and try to be all things to all people.  Cut out the weaker solutions that do not rely on an inherently secure platform, so that they don’t drag the entire framework down later on.  Mandating strong, albeit more expensive, hardware may not be the gentlest way to shape a market, but as the framework is popularized, volume would inevitably drive the price of those devices down.
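For completeness, here’s roughly what the per-site policy from the quoted passage might look like.  A U2F registration response carries an attestation certificate identifying the device make; the issuer names below are invented purely for illustration.

```python
TRUSTED_HARDWARE_ATTESTERS = {"ExampleVendor SE Root CA", "OtherVendor U2F CA"}

def registration_allowed(attestation_issuer: str, site_policy: str) -> bool:
    if site_policy == "hardware-only":  # e.g. a financial-services site
        return attestation_issuer in TRUSTED_HARDWARE_ATTESTERS
    return True  # a laxer site also accepts software-only tokens

assert registration_allowed("ExampleVendor SE Root CA", "hardware-only")
assert not registration_allowed("self-signed-software-token", "hardware-only")
```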
