Experts raise alarm over potential harvesting of behavioral biometrics by TikTok

July 23, 2020
As government officials threaten federal action, businesses are already taking note

The video-sharing social media app TikTok, which is owned by Beijing-based ByteDance, has recently come under fire from officials in the Trump administration over fears that the app could serve as a gateway for the Chinese government to steal the personal information of American citizens. Both Secretary of State Mike Pompeo and White House adviser Peter Navarro have hinted that the administration is likely to take some kind of action against the company – perhaps even banning the app outright – in the near future.

However, some believe that the dangers presented by TikTok extend well beyond information theft and that the app could even be used to harvest the physical and behavioral biometrics of users. Earlier this year, four teens in Illinois filed a lawsuit against TikTok and its parent company, alleging that the app violated that state’s biometric privacy laws by collecting data without asking permission. More recently, it has been alleged that the app is also collecting behavioral biometrics, which could have wide-reaching security implications for users and organizations alike.

Behavioral biometrics include things such as gait recognition, speech recognition and keystroke dynamics (the unique ways people type), which are just as unique to individuals as their face or iris. Anita Nikolich, a professor at the Illinois Institute of Technology, recently told the Chicago Tribune that the data being collected by TikTok, such as how users hold their phone, is “above and beyond what other social media platforms collect.”
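
To make the concept concrete, the sketch below shows the kind of timing features a keystroke-dynamics system typically derives. It is a simplified, hypothetical Kotlin example, not drawn from TikTok or any particular vendor: what a person types is irrelevant, and the signature lies in how long each key is held and the gaps between keystrokes.

```kotlin
// Hypothetical illustration of keystroke-dynamics features.
// The text typed does not matter; only the timing pattern is used.
data class KeyStroke(val key: Char, val downMs: Long, val upMs: Long)

fun keystrokeFeatures(strokes: List<KeyStroke>): List<Double> {
    // Dwell time: how long each key is held down.
    val dwell = strokes.map { (it.upMs - it.downMs).toDouble() }
    // Flight time: the gap between releasing one key and pressing the next.
    val flight = strokes.zipWithNext { a, b -> (b.downMs - a.upMs).toDouble() }
    return dwell + flight // a per-user timing "signature" that a model can learn
}
```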

Dawud Gordon, Ph.D., the CEO and co-founder of TWOSENSE.AI, which uses behavioral biometrics to streamline the authentication process, says that if an app like TikTok were indeed going to the effort of extracting behavioral biometric data on its users, he would be hard-pressed to believe it was for anything other than trying to penetrate some other piece of security that prevented access.

“A behavioral biometric is not something, in itself, that you can just collect on a phone. It’s not like location where you can just ask a sensor, ‘give me the location,’ and now you have the location,” he says.  “Collecting data on how a user holds their device and if they are right-handed or left-handed, that is much more complex than something as simple as location. That involves collecting basic behavioral information, like where they are tapping on the screen, the motion of the device and then a fairly complex mathematical system on top of that that observes that information over time and then essentially extracts the value as predicted. That’s a very sophisticated system that takes a significant amount of effort to build and it really only makes sense to build that to either profile a user and be able to identify them elsewhere or to be able to authenticate them, which are two different things.”   
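
Stripped down to their raw ingredients, the signals Gordon describes are available to any installed app through standard mobile APIs. The Android sketch below is purely illustrative; the class name and the crude handedness heuristic are hypothetical, not TikTok’s code, and, as Gordon notes, the real sophistication lies in the models layered on top of data like this.

```kotlin
import android.app.Activity
import android.hardware.Sensor
import android.hardware.SensorEvent
import android.hardware.SensorEventListener
import android.hardware.SensorManager
import android.os.Bundle
import android.view.MotionEvent
import kotlin.math.sqrt

// Illustrative only: shows the raw tap and motion signals an app can gather.
class BehaviorProbeActivity : Activity(), SensorEventListener {
    private val tapXs = mutableListOf<Float>()             // where the user taps on screen
    private val motionMagnitudes = mutableListOf<Float>()  // how the device moves in hand

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        // Accelerometer access requires no permission prompt on Android.
        val sm = getSystemService(SensorManager::class.java)
        sm.registerListener(this, sm.getDefaultSensor(Sensor.TYPE_ACCELEROMETER),
            SensorManager.SENSOR_DELAY_GAME)
    }

    override fun onTouchEvent(event: MotionEvent): Boolean {
        if (event.action == MotionEvent.ACTION_DOWN) tapXs.add(event.x)
        return super.onTouchEvent(event)
    }

    override fun onSensorChanged(event: SensorEvent) {
        val (x, y, z) = event.values
        motionMagnitudes.add(sqrt(x * x + y * y + z * z))
    }

    override fun onAccuracyChanged(sensor: Sensor?, accuracy: Int) = Unit

    // Toy heuristic: taps clustered on one side of the screen hint at handedness.
    // A real system, as Gordon describes, observes such signals over time with a
    // far more complex model.
    fun guessHandedness(screenWidthPx: Float): String =
        if (tapXs.average() > screenWidthPx / 2) "right-handed" else "left-handed"
}
```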

According to Gordon, these behavioral biometrics could theoretically be used to attack another system that is protected by such a biometric solution. However, Gordon says that to his knowledge, there has never been an example of an attack in which someone mimicked the behavior of the authorized user.

“As behavioral biometrics begin to become more prevalent, this could potentially start the behavioral biometrics arms race that I think one day will become part of any normal security attack force; where a system will essentially recognize the behavior of the authorized user and then be able to differentiate anybody else who is using those credentials from that user,” Gordon adds. “I think we are very quickly coming to a time where that is the case and somebody will need much more than just a credential in order to have access; they will need to be acting the way the authorized user does.” 
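
Conceptually, the check Gordon envisions boils down to comparing a stored behavioral profile against the behavior observed in the current session. The Kotlin sketch below is a deliberately naive illustration of that idea; real systems use far richer features and models, and the threshold shown is an arbitrary placeholder.

```kotlin
import kotlin.math.sqrt

// Naive sketch of behavior-based session verification (illustrative only).
// Even a valid credential is rejected if the live behavior drifts too far
// from the enrolled user's profile.
fun behavesLikeEnrolledUser(
    enrolledProfile: DoubleArray,  // averaged features from the real user's past sessions
    currentSession: DoubleArray,   // features from whoever holds the credentials right now
    threshold: Double = 1.5        // arbitrary placeholder cutoff
): Boolean {
    require(enrolledProfile.size == currentSession.size)
    // Euclidean distance between the stored profile and the live behavior.
    val distance = sqrt(
        enrolledProfile.indices.sumOf { i ->
            val d = enrolledProfile[i] - currentSession[i]
            d * d
        }
    )
    return distance < threshold   // close enough -> treat as the authorized user
}
```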

Gordon says that the collection of physical biometric characteristics of users should also be concerning, as a nefarious actor could combine both physical and behavioral biometrics to thwart various security safeguards.

“It would be almost child’s play with some of the generative algorithms that we have in machine learning now to take these streams of audio recordings and generate somebody saying a specific passphrase, something like ‘hey, Alexa, do this, that or the other,’ that sounds exactly like them that would fool those sort of weak voice biometrics that are protecting some of those systems,” he says.

Organizations Act

Gordon says one of the things that really got the attention of a number of businesses and government agencies about the potential dangers of TikTok was the revelation that the app was snooping on the clipboard of users’ phones, which would have given it access to any passwords or other sensitive information that users had copied and pasted.

“Them collecting that and sending it back to their server was something that concerned companies specifically because a lot of times, in one form or another, on mobile it can be unwieldy to deal with passwords,” Gordon explains. “Also, in the security space, there are a lot of challenges around password reuse, meaning that it is difficult to remember passwords so oftentimes people will reuse a single password in multiple places. So, even if it was someone copying a password for a personal sale on a retail website, that could potentially be the same password that user was using for a work account and that was a way for employee credentials to leak out, which is one thing many companies are sensitive about.” 
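
For context, reading the clipboard has historically required no special permission on the major mobile platforms. The generic Android sketch below, which is not TikTok’s code, shows how few lines it takes to pull back whatever a user last copied, whether that is a shopping list or a reused password.

```kotlin
import android.content.ClipboardManager
import android.content.Context

// Generic illustration of how simple clipboard access is on Android:
// whatever the user last copied comes back as plain text.
fun readClipboardText(context: Context): String? {
    val clipboard = context.getSystemService(ClipboardManager::class.java)
    return clipboard.primaryClip          // null if the clipboard is empty
        ?.getItemAt(0)
        ?.coerceToText(context)           // converts the copied item to readable text
        ?.toString()
}
```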

As a result of this and other concerns, many organizations, including Wells Fargo as well as the TSA and the U.S. military, have banned employees from having the app on their phones. Amazon also recently sent an email to its employees asking them to delete TikTok, but the company later backtracked and said the message was sent in error.

“I think the primary threat vector organizations are worried about when it comes to TikTok is that their employees are going to be spear-phished with very targeted messages, and that having an application on an employee’s device tells them a terrible amount about this user and their behavior,” Gordon says. “For example, it would be very easy to figure out where a user banks based on the location information. Therefore, it becomes easier to craft a message aimed at this user that would get them to believe that it is a legitimate message and they give up credentials and access.”

Joel Griffin is the Editor-in-Chief of SecurityInfoWatch.com and a veteran security journalist. You can reach him at [email protected].  

About the Author

Joel Griffin | Editor-in-Chief, SecurityInfoWatch.com

Joel Griffin is the Editor-in-Chief of SecurityInfoWatch.com, a business-to-business news website published by Endeavor Business Media that covers all aspects of the physical security industry. Joel has covered the security industry since May 2008 when he first joined the site as assistant editor. Prior to SecurityInfoWatch, Joel worked as a staff reporter for two years at the Newton Citizen, a daily newspaper located in the suburban Atlanta city of Covington, Ga.