(Originally posted here)
Some time ago, we came across TypingDNA. It takes a different approach to MFA (multi-factor authentication) and SCA (strong customer authentication), so we got interested in trying it out among ourselves to see how it works.
TypingDNA records typing information: how the user types. This information is then stored and used to learn the user’s typing pattern.
So, as hinted, this blogpost is slightly different from the others. It’s a joint experiment between me and two coworkers, Rafael and Davide, to test the TypingDNA API and find an answer to the big question: “Do we really have typing patterns distinct enough to use for authentication?”
There are two typing patterns that TypingDNA can collect.
- same text patterns: used to authenticate a user on exactly the same text that was typed at enrollment (e.g. emails, usernames, passwords)
- any text patterns: used to authenticate a user on text different from the one used at enrollment (e.g. emails, documents)
You can get more information on typing types here.
In the TypingDNA docs you can find several ways to test the API; for this blogpost we’ll be using Postman, with a request collection provided by TypingDNA that you can download and start using almost right out of the box.
After setting up an account, we can start experimenting. To get our typing patterns we can use the demo typing pattern viewer, where we type our sample texts and get back an array of flight and dwell duration combinations (the typing behaviour). Below is the same text pattern result for “Enter some text here”, typed by me.
Note that the preview tool’s result also includes a separate array of values for the any text pattern type; it was omitted here for simplicity.
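To make the idea of flight and dwell durations concrete, here is a minimal sketch of how such values could be computed from raw key events. TypingDNA’s actual pattern format is proprietary and more involved; the `dwell_and_flight` function and the event timestamps below are made up purely for illustration.

```python
# Illustrative sketch: computing dwell and flight times from raw key events.
# TypingDNA's real pattern format is proprietary; this only shows the concept.

def dwell_and_flight(events):
    """events: list of (key, keydown_ms, keyup_ms) tuples in typing order.
    Returns (dwell_times, flight_times) in milliseconds."""
    # Dwell time: how long each key is held down.
    dwells = [up - down for _, down, up in events]
    # Flight time: gap between releasing one key and pressing the next.
    flights = [events[i + 1][1] - events[i][2] for i in range(len(events) - 1)]
    return dwells, flights

# Example: typing "hi" with hypothetical timestamps.
events = [("h", 0, 95), ("i", 140, 230)]
dwells, flights = dwell_and_flight(events)
print(dwells)   # [95, 90]
print(flights)  # [45]
```

Arrays like these, collected over many keystrokes, are what make up a user’s typing behaviour.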
To start saving and verifying these patterns, there are docs for all the available requests, but we’ll be using the auto endpoint. It takes care of automatically enrolling a new user and saving new patterns, as well as verifying a new pattern against the existing ones.
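For readers who prefer code over Postman, here is a hedged sketch of what an auto call looks like based on the TypingDNA docs at the time of writing: a POST to `https://api.typingdna.com/auto/{userId}` with the typing pattern in a `tp` form field, authenticated with your API key and secret over HTTP Basic auth. The key, secret, user id, and pattern below are placeholders; double-check the current docs before relying on this.

```python
# Hedged sketch of a TypingDNA /auto request (values are placeholders).
import base64

API_KEY = "your-api-key"
API_SECRET = "your-api-secret"

def build_auto_request(user_id, typing_pattern):
    """Return the (url, headers, body) for an /auto call, without sending it."""
    url = f"https://api.typingdna.com/auto/{user_id}"
    # HTTP Basic auth: base64("apiKey:apiSecret").
    token = base64.b64encode(f"{API_KEY}:{API_SECRET}".encode()).decode()
    headers = {
        "Authorization": f"Basic {token}",
        "Content-Type": "application/x-www-form-urlencoded",
    }
    body = {"tp": typing_pattern}
    return url, headers, body

url, headers, body = build_auto_request("user1", "typing-pattern-string")
print(url)  # https://api.typingdna.com/auto/user1
# Send it with any HTTP client, e.g. requests.post(url, headers=headers, data=body)
```

The first few calls for a user enroll the pattern; later calls verify against what was enrolled, which is exactly the behaviour we use in the experiments below.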
So let’s put our hands to work and start saving our patterns.
Same text pattern⌗
First, we’ll test the same text patterns. Each one of us saves our own email 3 times, then types it a 4th time to check whether it matches (as a control for the experiment). Then each of us tries to verify the others’ emails, to see whether it matches or whether it supports the premise that we each have a distinct typing pattern.
Time to verify:
- true: a successful match
- false: an unsuccessful match
Looking at the results table, it’s clear that the control verification worked, and we weren’t able to impersonate the others with this pattern.
Any text pattern⌗
Now it’s time to check how things go with any text pattern type. In this test, each one of us is submitting the following 3 excerpts:
- To write good code is a worthy challenge and a source of civilized delight. – stolen and paraphrased from William Safire
- My favorite sandwich is peanut butter, baloney, cheddar cheese, lettuce, and mayonnaise on toasted bread with catsup on the side. – Senator Hubert Humphrey
- The sooner you fall behind, the more time you have to catch up.
After saving these 3 sentences, we then repeat the previous process, where we try to verify against our own submissions (control) and the others’. The new sentence will be:
- No amount of careful planning will ever replace dumb luck.
(Just a side note, these sentences were randomly selected using fortune command)
- true: a successful match
- false: an unsuccessful match
The results for the any text pattern were the same as for same text, with one exception: Davide also matched positive against Rafael. We believe the first two paragraphs of the next section explain why. The control verifications were all true here as well.
Please bear in mind that we only submitted the bare minimum of patterns required to verify. In a real-world situation with more enrollments, results would improve considerably. For example, every negative match (in both pattern types) came back with confidence 1 (high), while positives came back with confidence 0 (low). The Advanced API (we only used the standard one) also has a quality parameter that can be tuned for better accuracy or better UX.
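In practice, that confidence value is worth acting on rather than just reading the match flag. Here is a small sketch of the kind of check we did when interpreting responses; the `result` and `confidence` field names match what we saw in our responses, but the threshold logic is our own choice, not anything prescribed by TypingDNA.

```python
# Sketch of interpreting a verification response. Field names follow what we
# observed ("result", "confidence"); the low-confidence handling is our own.
def interpret(response):
    matched = bool(response.get("result"))
    confidence = response.get("confidence", 0)
    if matched and confidence < 1:
        return "match (low confidence - consider enrolling more patterns)"
    return "match" if matched else "no match"

print(interpret({"result": 1, "confidence": 0}))  # match (low confidence ...)
print(interpret({"result": 0, "confidence": 1}))  # no match
```

With more enrolled patterns per user, the low-confidence branch should fire less often.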
As seen above, the results were good given the conditions mentioned, so we don’t see this as a negative outcome. Davide may have matched true for Rafael on the any text pattern because of how few samples we submitted, so the model was still far from reliable. Chances are that with more participants and this few submissions, results would have been even more random.
Overall it was a nice experience with TypingDNA. The API is well documented and it’s very simple to start using and trying things.
Will TypingDNA replace 2FA over messages or any other conventional methods? We’re not quite sure yet, but we believe that it’s a nice alternative to have, if you wish to provide customers a different experience.
The dashboard page is also clean and well organized. During the time we used it for this small test, we noticed clear improvements being rolled out, which was nice to see. But some things are still missing that would be cool to have, e.g. an overall view of the enrolled users, instead of having to look each one up manually (everything works without this, but it would help to debug and check that things are OK without using the API).
Give the developer version a try and see for yourself. So, that’s it for today. Have a nice one and see you in the next blogpost 👋