How will age verification work in Australia's new social media ban?
Last-Minute Changes
Australia's Online Safety Amendment (Social Media Minimum Age) Bill passed the Senate today, starting a one-year clock for social media companies to perform age verification checks on their users or face fines of up to 50 million Australian dollars (about 33 million US dollars). The bill is the first of its kind in the social media space, but online age verification requirements for viewing pornography have proliferated rapidly in recent years: 19 US states have passed such laws over the last two years, while similar bills like S-210 and C-412 work their way through Canada's Parliament. These legislative shifts have made age verification an emerging industry, and companies in the space have been actively involved in lobbying for these laws around the world.
What makes Australia's ban novel isn't that it requires age verification, but that it heavily restricts the mechanisms by which verification can be performed. A common criticism of laws of this nature is that collecting ID from users carries significant privacy risks, along with the potential for misuse and government surveillance. This was a sticking point for the conservative Liberal and National party members in the Senate, and an amendment was added literally yesterday in order to get the bill to pass. In the words of Senator Kovacic of the Liberal Party:
The coalition has worked to ensure that this bill includes critical privacy protections to ensure that no platform can force users to provide sensitive personal information, such as digital IDs, drivers licenses, or passports. This is not about surveillance. It's about protecting our children in a world that is increasingly digital.
The relevant excerpt from the amendment she's referencing is:
63DB Use of certain identification material and services
(1) A provider of an age-restricted social media platform must not:
- (a) collect government-issued identification material; or
- (b) use an accredited service (within the meaning of the Digital ID Act 2024);
for the purpose of complying with section 63D, or for purposes that include the purpose of complying with section 63D.
Civil penalty: 30,000 penalty units.
(2) Subsection (1) does not apply if:
- (a) the provider provides alternative means (not involving the material and services mentioned in paragraphs (1)(a) and (b)) for an individual to assure the provider that the individual is not an age-restricted user; and
- (b) those means are reasonable in the circumstances.
Note: In proceedings for a civil penalty order against a person for a contravention of subsection (1), the person bears an evidential burden in relation to the matter in this subsection (see section 96 of the Regulatory Powers (Standard Provisions) Act 2014).
(3) This section does not limit section 63DA.
(4) In this section:
government-issued identification material includes:
- (a) identification documents issued by the Commonwealth, a State or a Territory, or by an authority or agency of the Commonwealth, a State or a Territory (including copies of such documents); and
- (b) a digital ID (within the meaning of the Digital ID Act 2024) issued by the Commonwealth, a State or a Territory, or by an authority or agency of the Commonwealth, a State or a Territory.
This has been widely interpreted as forbidding companies from collecting any form of government ID, including digital IDs through AGDIS, ruling out the mechanisms typically used to comply with pornography age-verification laws in the US. So how do you verify users' ages without checking their ID while preserving their privacy? That's the 50 million Australian dollar question.
Roadmap for Age Verification
Item 5 of the bill makes it clear that it will be the responsibility of the eSafety Commissioner "to formulate, in writing, guidelines for the taking of reasonable steps to prevent age-restricted users having accounts with age-restricted social media platforms." On November 15th, a consortium headed by the Age Check Certification Scheme (ACCS) was awarded the tender for the Australian Government's Age Assurance Technology Trial, which is now in progress. The trial will consist of a series of tests evaluating "the maturity, effectiveness, and readiness for use of available age assurance technologies that determine whether a user is 18 years of age or over," and the results will inform the guidelines laid out by the eSafety Commissioner. Results are expected in roughly six months, giving companies only another six months to implement age verification before penalties are levied.
The age assurance trial has its roots in the Roadmap for Age Verification report that eSafety submitted to the Australian Government in March 2023. In preparation for the report, eSafety commissioned Enex Testlab to carry out an independent assessment of age assurance technologies available on the market. The report outlines these technologies, and likely gives us some pretty strong hints of what is currently being evaluated in the technology trial. Interestingly, the Government response to the Roadmap for Age Verification asserted that it was essential that age verification technologies "work reliably without circumvention" and "balance privacy and security, without introducing risks to the personal information of adults." They went on to conclude:
Age assurance technologies cannot yet meet all these requirements. While industry is taking steps to further develop these technologies, the Roadmap finds that the age assurance market is, at this time, immature.
The Roadmap makes clear that a decision to mandate age assurance is not ready to be taken.
Apparently, they have since changed their minds.
The report primarily focuses on three different mechanisms for age verification:
- Government-Issued ID - Identity verification based on physical or digital IDs was found to be the most reliable mechanism, and the one in most widespread use. Concerns were expressed about accessibility and the impact on groups that are less likely to hold government-issued ID. The privacy and security risks were also mentioned, although with the telling conclusion that "the use of trusted and accredited third-party providers with strong privacy and security practices may mitigate these risks."
- Facial Biometrics - Machine-learning models that predict a person's age from video of their face were found to be "the most viable and privacy-preserving within the biometrics category." The primary concerns centered on inconsistent model performance for "some skin tones, genders, or those with physical differences." The privacy risks of collecting sensitive biometric data were also raised.
- Voice Biometrics - Models based on recordings of a person's voice were found to be "less mature" and the least accurate of the methods tested. There were again inclusion concerns, this time around inconsistent performance for different accents, low language fluency, or disability.
The report also outlines two mechanisms for addressing privacy concerns:
- Electronic Tokens - The euCONSENT pilot was used as the model here, praised for its "tokenised, interoperable, and double-blind approach to preserve user anonymity," which ensures that "age-restricted websites do not know the identity of a user, and the age assurance service provider does not record which sites a user visits." In this scheme, an age verification service issues a token attesting that someone meets a minimum age requirement. The token is then stored in a digital wallet and can be reused for some period of time. This has the major benefit of preventing sites that require age verification from connecting users to their real identities, but also the significant downside that sensitive personal information still needs to be shared with a third party.

  No matter how carefully a third-party company handles sensitive data and guards against misuse, there is always the risk of outside attackers gaining access to it. Countless high-profile breaches, from Equifax to Snowflake, have taught us that lesson.
- Zero-Knowledge Proofs - Zero-Knowledge Proofs (ZKPs) are a form of applied cryptography in which one party creates a proof of an arbitrary computation while keeping some of the computation's inputs secret. These proofs can then easily be shared and verified by others, with strong cryptographic guarantees that the original secret inputs cannot be recovered from the proof. The report primarily focuses on an open-source demonstration by the French data regulator, the Commission Nationale Informatique & Libertés (CNIL), but significant advances in this space have been made since the report was written. For example, the openpassport project allows users to scan the NFC chips in government-issued IDs and generate proofs of both their validity and arbitrary attestations about the data they contain, without revealing anything else. A user could generate a proof that only says they are older than sixteen, or one that only verifies their country of citizenship.

  Significant advances have also been made in the Zero-Knowledge Machine Learning (ZKML) space, which allows one to generate proofs that a given machine-learning model produced a certain output without revealing the inputs. Applied to facial biometric models, this could provide an age prediction without sharing the input photo. And as cameras supporting cryptographic signatures have become more common in the era of deepfakes, it's even possible to verify those signatures within the proof and confirm that a photo was recently taken.

  Another area that has seen a lot of recent activity is the use of ZKPs to verify TLS or DKIM signatures on websites and emails, respectively. These piggyback off the existing trust structure around domain certificates to generate attestations from trusted sources. For example, you could produce an age verification proof based on processing your logged-in DMV profile. The possibilities are nearly endless since arbitrary computations can be verified, and proofs can even be composed recursively, allowing proofs from different sources to be combined.
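Returning to the token-based approach, the issuance and verification flow can be made concrete with a short sketch. Everything here is hypothetical: the claim format and function names are invented, and an HMAC stands in for the asymmetric signature a real deployment would use (asymmetric signatures are what let platforms verify tokens offline without contacting the issuer, preserving the double-blind property).

```python
import hashlib
import hmac
import json
import secrets
import time

# Hypothetical sketch of a euCONSENT-style age token. The issuer attests only
# to a boolean age claim; no identity ever reaches the platform. In this toy
# version the verifier shares a key with the issuer, which a real scheme avoids.
ISSUER_KEY = secrets.token_bytes(32)  # held by the age assurance provider

def issue_token(over_minimum_age: bool, ttl_seconds: int = 86_400) -> dict:
    """Issue a reusable token attesting that the holder meets the age floor."""
    claims = {
        "over_minimum_age": over_minimum_age,
        "expires": int(time.time()) + ttl_seconds,
        "nonce": secrets.token_hex(16),  # makes the token unique, not identifying
    }
    payload = json.dumps(claims, sort_keys=True).encode()
    sig = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return {"claims": claims, "sig": sig}

def platform_accepts(token: dict) -> bool:
    """The platform learns only 'over the threshold, not expired' -- nothing else."""
    payload = json.dumps(token["claims"], sort_keys=True).encode()
    expected = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, token["sig"]):
        return False  # forged or tampered token
    claims = token["claims"]
    return bool(claims["over_minimum_age"]) and claims["expires"] > time.time()
```

Note that tampering with any claim invalidates the signature, so a minor can't flip the age bit on a token issued to them.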
The ZKP approach carries all of the privacy benefits of the token system, with the added benefits of transparency and not needing to trust a third party with your sensitive data. If you're interested in learning more about Zero-Knowledge Proofs, that's what we do here at Sindri. Feel free to check out our documentation or shoot us an email to learn more about our solutions.
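To give a feel for what a zero-knowledge proof looks like in miniature, here is a toy Schnorr proof of knowledge, made non-interactive with the Fiat-Shamir heuristic. The group parameters are deliberately tiny and insecure, and this proves only knowledge of a discrete logarithm rather than an age claim; production age-assurance circuits prove far richer statements (e.g. "the birth date inside this validly signed passport is more than sixteen years in the past"), but the core property is the same: the verifier learns that the statement holds and nothing else.

```python
import hashlib
import secrets

# Toy parameters: P is a small safe prime (P = 2*Q + 1) and G generates the
# subgroup of prime order Q. Illustration only -- real deployments use groups
# hundreds of bits wide.
P = 467
Q = 233
G = 4

def _challenge(y: int, t: int) -> int:
    # Fiat-Shamir: derive the challenge by hashing the public values, so no
    # interactive verifier is needed.
    digest = hashlib.sha256(f"{G}|{y}|{t}".encode()).hexdigest()
    return int(digest, 16) % Q

def prove(x: int) -> tuple[int, int, int]:
    """Produce (y, t, s): a proof of knowledge of x such that y = G^x mod P."""
    y = pow(G, x, P)
    r = secrets.randbelow(Q)  # one-time blinding nonce
    t = pow(G, r, P)          # commitment to the nonce
    c = _challenge(y, t)
    s = (r + c * x) % Q       # response: r masks x, so s reveals nothing about it
    return y, t, s

def verify(y: int, t: int, s: int) -> bool:
    """Check G^s == t * y^c (mod P) without ever seeing the secret x."""
    c = _challenge(y, t)
    return pow(G, s, P) == (t * pow(y, c, P)) % P
```

A valid proof passes because G^s = G^(r + c*x) = t * y^c, while changing the response by even one breaks the algebra, so forgeries are rejected.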
Hot Takes
While Zero-Knowledge Proofs offer the best security and privacy guarantees for users, eSafety is far more likely to lean towards existing out-of-the-box solutions from third-party providers such as Yoti as a matter of practicality. They'll likely set guidelines requiring the use of these third-party age verification services with a euCONSENT-style token-based approach to limit the amount of information shared with social media providers. Their report made it clear that accessibility and inclusiveness are two of their top concerns, which points towards requiring multiple age verification options, since they identified weaknesses in each individual method. The amendment restricting the use of government-issued IDs is also explicit that it does not apply when "the provider provides alternative means for an individual to assure the provider that the individual is not an age-restricted user," so offering a second verification alternative will still allow the collection of government IDs. The only two options that weren't deemed entirely ineffective were ID verification and facial biometrics, so that's likely what we'll get.