I’m asking for public policy ideas here. A lot of countries are enacting age verification now. But of course this is a privacy nightmare and is ripe for abuse. At the same time though, I also understand why people are concerned with how kids are using social media. These products are designed to be addictive and are known to cause body image issues and so forth. So what’s the middle ground? How can we protect kids from the harms of social media in a way that respects everyone’s privacy?
Parental controls have been an effective tool for decades. In combination with actually keeping an eye on your kids, of course.
yeah, but that would require, you know, parenting, which is something we can’t do.
Unfortunately a lot of parental controls aren’t that helpful, and they’re more of an afterthought
I agree with parenting in general though
The existing tools also extend poorly to adults with developmental disabilities who need a digital shepherd to make sure they’re using the web safely. There’s no substitute for being involved. Also, we should bring back the family computer. My parents kept the computer in a public area of the house from when I was in elementary school. Even in the age of laptops, we had a shared desktop.
ban social media metrics and information trading/markets. make it a truly anonymous service like it was in the early 2000s.
if protecting children was the point they would stop corporations from identifying all users and selling their identities/profiles online.
but, protecting the children is NOT the point. the point is control of freedom of speech, or rather who gets to have the freedom of speech.
Most people don’t want social media to be anonymous. They want to be themselves and connect with real people. How exactly is an anonymous tinder supposed to work?
Glory holes
the great thing about the early internet was that you had the choice to expose yourself.
now you don’t even need to be a member to be exposed.
And banning non-anonymous social media is supposed to bring that choice back?
ban social media metrics and information trading/markets.
try reading that again.
“make it a truly anonymous service”
Reading what you said?
Stop. Giving. Them. Phones.
Stop whining. No they don’t need one. NO THEY DON’T.
No.
No they’re not special.
No they’re not too busy. Neither are you.
No iPad either.
Stop. Shut up. No. Phones.
I agree, if you limit “phones” to “smartphones and portable computers”. There are reasons to give a kid a small, no-internet dumbphone. But yes, don’t give kids unrestricted access to the family PC, and DEFINITELY don’t give them their own.
And/or old-school phones that can call and text but not surf the internet, like the old, smaller flip phones. Many parents are worriers and will want to be able to reach their kids, but there’s no need for a smartphone for that.
That’s the tack I’m taking. My eldest goes to high school next year and most of his peers are automatically getting a smartphone at that point. He’ll be 13. He can forget it. A dumb phone at a push, for safety. That’s it.
By not allowing parents to outsource the responsibilities of being a parent.
I’ll reply to this random one with that statement. There’s no winning move as a parent.
Problem is being locked out. If your kid is the only one not on social media and all other kids are, your kid will be socially left out.
All the kids are on a chat platform you don’t support. What do you do? Disallow it and give them a social handicap that might scar them, or allow it and take the risk?
The same goes for allowing images on other platforms. Since GDPR, schools seem to care. Yet if it’s a recording that will be put on social media, you get to explain to your 4-year-old why they weren’t allowed to participate… It sucks.
I don’t know what the right way forward is. I don’t think this is it. Something is needed, though. We should at least signal what we find acceptable as a society. Bog-stupid rules that are trivial to circumvent might be good enough, or perhaps some ad campaigns like we ran against smoking (hehe, if it’s for something we support, then ads are good?).
Regardless, the current situation clearly doesn’t work. It would be great if we could find and promote the least invasive solutions.
I feel that communicating your concerns to other parents and to the school can help. Some forms of socialization can make sense once kids are in middle school or high school, but even then you’d want a pretty locked-down system, imo.
I feel that not every parent is going to let their kids use technology to talk to their friends, and especially not all the time. That’s not how I grew up, and I was fine, developmentally speaking. As a parent you can also seek out other local parents who live by a similar philosophy, so your kids can have them as friends.
You’d be surprised what parents let their kids do. My little anecdotal sample contains mostly highly educated people, but most of them don’t place any restrictions on their kids’ screen time. They claim they’ve talked to their kids, who have assured them they don’t look at anything they’re not supposed to, but that’s just not what happens in reality.
What really happens is that the kids with no restrictions engage with all the predatory bullshit on these platforms, nonstop. I can see it with my own eyes when my kid brings friends over.
Communication is key, but unfortunately the business model of these platforms is based on addiction. Children are not equipped to deal with that, so parental controls are an essential component.
I believe the parent post nicely sketches out what the “best” move looks like. I have seen no better approach myself. At the same time, I see what you see: the best approach isn’t all that great. If you’re lucky and find the right people, it could work. There’s a lot of luck involved.
That’s why I do think there should be some regulations indicating what is tolerated. It seems to me the parent poster may agree (and thus also with your take).
Since GDPR you can tell the school you don’t want pictures on platforms you disagree with. You may miss out on seeing the photos, and you might come across as crazy, but you can (and you should). We were given a choice, at the cost of extra paperwork and some limitations.
Even without the addiction problem of these platforms, we should nurture a good community around us. It’s a valid take to try to find like-minded people.
I don’t think that’s the end of it. Given the state we’re in, the network effect, and the fragile ego of developing kids, I suppose we need a stronger push.
AI-enforced age verification, or logins that let you be followed anywhere, is not the solution in my current opinion; that addresses a different problem. The real problem is the addictive, steering nature of the platforms, which seems hard to pin down in clear legal terms.
I wonder how “these platforms” should be defined and what minimum set of limitations would give us and the children the necessary breathing space.
the minimum would be transparency for the algorithm. If users can see exactly what a social media algorithm is doing with their content feed, they would always have a way to identify and escape dark patterns of addiction.
But this minimum itself would require powers to compel tech companies to give up what they would describe as intellectual property. Which would probably require a digital bill of rights?
The most practical option would be to just ask your kids directly about the kinds of content they’ve been consuming and why. Dinner-table conversations can probably reveal those dark patterns just as well.
Wholeheartedly agree that the problem is the addictive and predatory nature of these platforms. I don’t see how that would change under the current perpetual growth economy we all live under
I like to think I’m a tech-savvy parent, and the amount of teeth-gnashing it takes to set up and maintain child accounts is incredible. I’m convinced the foxes guarding the henhouse are using dark patterns to make parents give up.
Why can’t I just get a notification on my phone saying “Hey, kiddo wants to have screen time. Approve?”
Hell, I’d love a notification saying “Kiddo started watching Mr. Blah.” If I got the notification and I didn’t want them watching that, I could block the video, or creator with a click. WHY ARE WE NOT AT THIS LEVEL OF CONVENIENCE?
A LOT of these concerns would go away if phones/tablets/TVs had these simple controls. Move those privacy controls into the home and MAKE them so easy a neanderthal could operate them.
If I have to block *.newsocialbook.com in my router, you can bet your damn ass that “LiveLaughLoveMom<3” is going to keep demanding that someone else do it for her.
Capitalism. Everything you described costs money to create and maintain and it generates zero (or negative) profit. Most people aren’t going to want to pay for some sort of nanny toolkit.
Don’t get me wrong, I agree with you and it should be like that. Our current systems are not going to bring that about though.
The German passport allows services to verify age through your phone NFC-reading the passport, with validity confirmed through an intermediary state service. All the service sees is a confirmation that the age requirement is met. No name, no age, no address, no face.
Some other countries have similar systems. There’s already an EU directive for this to be implemented at a broader European level.
This sounds like a much better strategy than the Australian model of simply scanning your face and using AI to guess your age
How would that work online? How would they confirm it’s your passport, and that it’s a real passport that was really scanned (instead of a browser plugin)?
- https://en.wikipedia.org/wiki/EIDAS
- https://de.wikipedia.org/wiki/Personalausweis_(Deutschland)#eID-Funktion
- Register as a service, with a justification for why you need to read the fields or properties you request
- Upon acceptance, acquire a digital permission certificate
- Set up a server, that handles communication with the ID
- For a request, prove you own the permission cert through a challenge sent by the ID document
- The ID document proves to the server, through a challenge, that it is what it claims to be (a whole batch of ID documents shares the same private and public keys, so the keys are not personally identifiable / linkable to an individual)
- User enters PIN so that this process can proceed
- Open secured connection between server and ID document
- Server can request/challenge age verification, and the ID document answers with “is met”
The Wikipedia page, at least, isn’t detailed/technical about step 8, but if you were to attempt a man-in-the-middle attack, you couldn’t: you can’t fake being a valid ID document, which is ensured by the challenge and public/private-key cryptography.
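The flow above can be sketched as a toy model. To be clear about the assumptions: the real chips use asymmetric signatures and the protocols in BSI’s technical guidelines; here an HMAC over a shared batch key merely stands in for the chip proving batch membership, and all class and key names are made up for illustration.

```python
import hashlib
import hmac
import secrets
from datetime import date

# Hypothetical key shared by a whole batch of ID documents, so a response
# proves "genuine document" without identifying which one (illustration only;
# the real system uses asymmetric chip keys, not a shared HMAC secret).
BATCH_KEY = b"demo-batch-key"

class IdDocument:
    def __init__(self, birthdate: date, pin: str):
        self._birthdate = birthdate
        self._pin = pin

    def answer_challenge(self, challenge: bytes) -> bytes:
        # Prove membership in a valid batch without revealing identity.
        return hmac.new(BATCH_KEY, challenge, hashlib.sha256).digest()

    def age_requirement_met(self, pin: str, min_age: int, today: date) -> bool:
        if pin != self._pin:
            raise PermissionError("wrong PIN")
        years = today.year - self._birthdate.year - (
            (today.month, today.day) < (self._birthdate.month, self._birthdate.day)
        )
        # Only a yes/no ever leaves the document: no name, no birthdate.
        return years >= min_age

class VerifierServer:
    def verify(self, doc: IdDocument, pin: str, min_age: int) -> bool:
        challenge = secrets.token_bytes(32)
        expected = hmac.new(BATCH_KEY, challenge, hashlib.sha256).digest()
        if not hmac.compare_digest(doc.answer_challenge(challenge), expected):
            return False  # not a genuine document
        return doc.age_requirement_met(pin, min_age, date.today())
```

The point the sketch tries to capture is the last line: the server’s entire takeaway is one boolean.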
I’ll need to look into it a bit more, but I’m skeptical that this will work in practice:
How can they confirm that I’m the owner of the passport? How do you prevent them from selling the fields they requested, that have been uniquely linked to you? How do you prevent the government from keeping track of all the services you’re using?
The first factor is your physical passport; the second factor is your PIN.
I don’t see how any age-verification scheme could prevent the selling of a verified age. Once a service has acquired data, it could theoretically sell it, illegally, if it ignores the law.
The point is, you can share a small subset of fields without others. No need to share your face or passport number.
I’m not sure whether the authority knows about the request and response at all. I previously thought so, but this description doesn’t mention it, and it doesn’t seem technically required: if both sides can verify public key/cert validity independently, they can then communicate with each other directly.
Kill the engagement algorithm. Your feed should contain a chronological list of posts made by people you subscribe to. In one stroke you could end the doomscroll - not just for kids, but for everybody. Also, infinite scrolling should be banned.
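Worth noting how little machinery that actually takes. A chronological, subscriptions-only feed with a finite page is a few lines; field names here (`author`, `ts`) are made up for illustration:

```python
def chronological_feed(posts, subscriptions, page_size=20, before=None):
    """Posts from subscribed authors only, newest first, one finite page."""
    visible = [p for p in posts if p["author"] in subscriptions]
    if before is not None:
        # Explicit "load more" cursor instead of infinite scrolling.
        visible = [p for p in visible if p["ts"] < before]
    visible.sort(key=lambda p: p["ts"], reverse=True)
    return visible[:page_size]
```

No engagement model, no per-user ranking state; the page ends, and the user decides whether to ask for the next one.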
Same as always, better parenting.
My three boys don’t have filters on the internet, instead of blocking them from the world, I raised them in it.
Governments need to set up a digital ID using a trustless authenticator.
Government issues a one-time verified credential (tied to real identity verification, like a passport or SSN check). You get a cryptographic token on your device. When a platform needs to know “is this a real adult citizen?”, you present a zero-knowledge proof — yes/no, nothing else. No name, no IP, no persistent identifier the platform can track. The government isn’t contacted. The platform learns nothing except the answer to their question.
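For a flavor of the cryptography underneath a "prove it without revealing it" scheme, here's a toy Schnorr identification round: the holder proves knowledge of a government-issued secret without disclosing it. The parameters are tiny demo numbers; real deployments use standardized curves, non-interactive proofs, and far larger groups, so treat this purely as illustration:

```python
import secrets

# Toy group: p = 2q + 1 with q prime, and g generating the order-q subgroup.
P, Q, G = 1019, 509, 4

def issue_credential():
    """Issuer picks a secret x; y = g^x is the public credential."""
    x = secrets.randbelow(Q - 1) + 1
    return x, pow(G, x, P)

def commit():
    """Prover's commitment: fresh random r, t = g^r."""
    r = secrets.randbelow(Q)
    return r, pow(G, r, P)

def respond(x, r, c):
    """Prover's response to the verifier's challenge c."""
    return (r + c * x) % Q

def verify(y, t, c, s):
    """Verifier checks g^s == t * y^c without ever learning x."""
    return pow(G, s, P) == (t * pow(y, c, P)) % P
```

Each round leaks nothing about `x` beyond the fact that the prover knows it, which is the shape of guarantee an age check would want: the platform learns the answer, not the identity.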
Just normalize talking IRL about online abuse/exploitation, instead of yelling at kids or grounding them. And stop the victim blaming; even some of the professionals do that.
Maybe we should normalize talking about other stuff too: body image issues (including “problematic” ones), atypical attraction types, and quietly self-diagnosed neurodivergence, like realizing you might be plural or have a very specific kind of OCD.
I’ve seen many online abusers target kids through exactly that stuff. With so little help available, so much stigma around mental health, and so many victim-blaming therapists, those kids end up going online where predators are watching, who then prey on those vulnerabilities and shift the blame onto the kids.
I’ve seen kids as young as 12 in some high-risk mental-health communities. You can tell nobody wanted them, but predators definitely do. Basically: don’t have kids if you can’t accept them as they are, and if your kid turns dangerous over time, I think you bear some responsibility for that while they’re underage.
I think we should reframe the question.
How can we protect adults from the harms of not being able to post meaningless bullshit anonymously to online anonymous strangers we never agree with without sacrificing everyones children’s mental stability?
Maybe put children’s rights before adults’ rights. Adults had fun and got along fine without social media before the 2000s. I refuse to believe we’re no longer capable of that. Especially if it means kids get to go back to using the internet as a resource for homework, playing outside, and using their own imaginations. Adults too.
Better parent supervision is the main way to combat these issues.
Companies should also either ban minors completely or let parents set up child accounts, linked to their own, with expansive parental controls that can then be migrated to a full adult account once the child reaches legal age.
I don’t think either will happen, because there are so many stupid and lazy parents in America who don’t care what their kids do as long as it’s not bothering them.
Internet has replaced parenting. Kids are just another achievement after spouse and house and two cars.
Agreed 100%. Enable parents, even not tech savvy parents, to parent. Ultimately, if the parent wants their kid to do whatever, they’ll just create an adult account for their kid. Do we really want the government parenting our kids? Sure, it may be an improvement for some, but it’s a slippery slope and could lead to a Brave New World.
The book The Anxious Generation by Jonathan Haidt had a really clever idea. Create a regulation requiring operating systems’ parental controls to include an option that labels a device as belonging to a kid. When that option is toggled, requests would include some sort of header labeling them as originating from a kid. Then place the onus (probably through legislation) on web platforms to restrict what content is shown to kids.
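On the platform side, honoring such a flag would be almost trivial. A sketch, with the caveat that the header name and post fields are invented here; no such standard exists today:

```python
# Hypothetical header an OS could attach to requests when the
# parental-controls toggle marks the device as a minor's.
KID_DEVICE_HEADER = "Sec-Minor-Device"

def filter_for_request(headers: dict, posts: list) -> list:
    """Drop age-restricted posts when the request is flagged as a kid's.

    Adults' requests (no flag) pass through untouched, so the scheme
    needs no universal age check, only the device-level mark.
    """
    if headers.get(KID_DEVICE_HEADER) != "1":
        return posts
    return [p for p in posts if not p.get("age_restricted", False)]
```

The hard part, as the book notes, isn't the mechanism; it's legislating that platforms must respect the mark.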
consider though - politicians nowadays don’t think. they think so little, in fact, that the last time i checked websites for self harm/sexual assault support or reporting were considered “too adult” for kids to have access to in the UK
if it was about kids’ safety, this wouldn’t have been omitted
Yeah, there’s no doubt in my mind that this tide of “think of the kids” is just a fascist dogwhistle (and one with a double-entendre at that).
You should read the Anxious Generation. It goes into a lot of detail on research showing the damage social media has done to an entire generation. It’s pretty undeniable that something needs to be done to stop/control social media’s influence on children and teens in their crucial developmental years. There are some people who are definitely using it as a cover for control, but there are plenty of well-educated people who see a real problem and are trying to do the best they can to find a solution.
I mentioned it in my original comment! I thoroughly enjoyed it. As an older member of Gen Z, a lot of what’s written there jibes with my lived experience and the intuitions I’ve developed around social media. And as a relatively young father, I’m also invested in figuring out how to give my kids the healthiest possible relationship with the online world.
I’m also a strong proponent of digital freedom and privacy. A lot of the age verification technology being rolled out is tied to companies like Palantir or organizations like DHS, which seem rather unambiguously uninterested in the freedom, the privacy, or really the general wellbeing of the populace.
I’m of the opinion that any system that could enable or facilitate mass surveillance is not an acceptable solution to the problem of protecting kids online.
The idea I laid out in my original comment was inspired by the idea Jonathan Haidt presents in Chapter 10 (What Governments and Tech Companies Can Do Now), Section 3 (Facilitate Age Verification), 6th paragraph:
There is not, at present, any perfect method of implementing a universal age check. There is no method that could be applied to everyone who comes to a site in a way that is perfectly reliable and raises no privacy or civil liberties objections.[26] But if we drop the need for a universal solution and restrict our focus to helping parents who want the internet to have age gates that apply to their children, then a third approach becomes possible: Parents should have a way of marking their child’s phones, tablets, and laptops as devices belonging to a minor. That mark, which could be written either into the hardware or the software, would act like a sign that tells companies with age restrictions, “This person is underage; do not admit without parental consent.”
You should listen/read Steve Gibson’s podcast episode from Security Now that goes over Zero Knowledge Proofs: https://www.grc.com/sn/sn-1034.htm
It seems like the ideal solution that can be implemented if we take the time to do it right.
Thanks for the read, I learned something today! I worry, though, that even if someone could devise a ZKP for age verification, can end users actually trust that platforms are using it? Say, for example, Meta provides a biometric-based ZKP for age. Can we trust that they’re not harvesting our biometric data? In the podcast’s examples, it’s easy for Peggy and Victor to understand that they are using a ZKP system. However, the age verification problem most often arises in arrangements where the prover is using a client app into whose inner workings they have no insight (because it’s closed source, or they’re not technologically literate enough, or who has the time to scrutinize the source code of every program they use?) and which is most likely developed by the verifier. So the problem kind of moves upstream: how can you trust that a ZKP is actually being used?
That’s why zero trust itself is so important. The only way it can be guaranteed is with an open standard that is zero-trust, so nobody is able to abuse it and the layperson doesn’t have to trust anyone. Not to mention that if it’s implemented correctly, there’s no data to trust them with at all, given there was zero knowledge of the end user. It would require a governing body competent enough to implement it, but I like to dream big.
Some of it can be accomplished by just setting universal demands for how social media works for all users:
- ban targeted advertising
- make it mandatory for companies to ensure algorithms don’t prioritize posts that make users angry, scared, or depressed
Stuff like that. These kinds of regulations don’t involve ID checks, and could take care of a big chunk of the problem.
This doesn’t solve the problem at the core of social media. The inevitable comparison with fake lives has been shown to cause depression, anxiety, and suicidal ideation in impressionable children and teens. Nothing on the algorithm or advertising side would stop that from happening.
So you’re suggesting just outright banning social media?
If you do the 1st one, then most companies likely wouldn’t bother with such algorithms anymore.
I dunno, they will still want people to stick around on their site, so they can see their ads.
I figure a ban of targeted advertisement would look like “The ads are only allowed to change once a day, and everybody during said day sees the same ads”. Whereas currently, each time you load a website, there’s an impromptu auction to sell the ad spots. (Advertisers don’t actually have to pay until you click their ad). So there would be less incentive to keep the user constantly engaged, as it would be enough if the user just visits regularly.
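Under that reading, ad serving collapses from a per-request auction into a daily lookup. A sketch of the idea, assuming a simple rotation scheme I've invented for illustration:

```python
import hashlib
from datetime import date

def ad_of_the_day(ads: list, day: date) -> str:
    """Pick one ad per calendar day, identical for every user.

    Deriving the index from the date alone means nothing user-specific
    enters the choice: no targeting, no real-time auction, and no
    incentive to maximize per-user engagement on each page load.
    """
    digest = hashlib.sha256(day.isoformat().encode()).hexdigest()
    return ads[int(digest, 16) % len(ads)]
```

Rotation by hash is arbitrary; a round-robin schedule sold in advance would serve the same goal.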
That’s interesting, and maybe better than what I had in mind.
What did you have in mind?
Oh, just a ban on the targeting. The companies would still be allowed to show as many ads, and as many different ads, as they’d want.
A ban on targeted advertising would definitely be a more realistic solution than banning advertisements in general (which some people here are advocating for). I’m really not a fan of ads and would love for them to be banned, but I understand that it’s not politically realistic, given how large a role they play in our economy.
It is apparently a movement, but it gets way too little attention: https://www.politico.eu/article/targeted-advertising-tech-privacy/