Meta Launches Proactive Parental Alerts When Teens Search for Suicidal Quotes and Self-Harm Content
Meta is taking a significant step toward protecting vulnerable teenagers by introducing a new alert system for parents. Starting in March across the UK, US, Australia, and Canada, the social media giant will notify parents when their teens repeatedly search for suicidal quotes, self-harm guidance, or related harmful content on Instagram. This represents a major shift from Meta’s previous approach of simply blocking or hiding such material—now the company is actively warning guardians about their children’s search patterns.
How Instagram’s New Parental Notification System Works
The alerts will reach parents through multiple channels: email, text messages, WhatsApp, or in-app notifications. Parents who have enrolled in Instagram's Teen Accounts supervision tools will be the first to receive these warnings. Meta's system is designed to catch two distinct scenarios: when a teen performs multiple searches for suicide-related content within a short period, or when the platform's algorithm detects an abrupt change in search behavior pointing toward harmful topics.
The company emphasizes it will “err on the side of caution,” which means some notifications may trigger even when there’s no immediate red flag. To help parents navigate these delicate situations, Meta is including expert-backed resources with each alert, equipping families with conversation strategies and mental health support options.
Suicide Prevention Advocates Raise Serious Concerns
Despite Meta’s good intentions, suicide prevention organizations have sharply criticized this initiative. They worry that the alert system could backfire by leaving parents “panicked and ill-prepared,” potentially doing more harm than good. Critics argue that notifying parents about suicidal quotes and self-harm searches without proper context or support infrastructure could trigger harmful overreactions. Many advocates believe Meta’s efforts would be better spent preventing harmful content from appearing in users’ feeds in the first place, rather than documenting search activity after the fact.
Meta Defends Its Safety Approach
In response to the backlash, Meta has defended its position by pointing to its existing protections. The platform blocks searches for harmful terms, deprioritizes suicide and self-harm content in recommendations, and automatically redirects users to mental health support resources. Meta also disputes claims that it deliberately promotes suicidal quotes and harmful material to vulnerable teens, asserting instead that these posts are actively suppressed. The company reiterates its commitment to developing more robust safety tools and addressing the root causes of harmful content spread.
The Broader Picture: Global Regulatory Pressure on Teen Social Media
This announcement arrives amid intense global scrutiny of how social media platforms affect young users. Australia has implemented an outright ban on social media access for users under 16, while the UK, France, and Spain are all evaluating stricter regulations to protect adolescents. In the United States, courts are examining whether Meta deliberately targeted younger demographics. These regulatory moves suggest Meta's new alert system may be partly a proactive response to potential legal and legislative challenges.
What Success Will Depend On
The real test for Meta’s new notification system will be its implementation quality. Success hinges on three critical factors: how accurately the algorithm detects genuine risks versus false alarms, whether parents receive adequate guidance on how to respond responsibly, and whether follow-up mental health resources are readily accessible. If executed poorly, the alerts could become just another notification that parents ignore. If done well, they might genuinely help families recognize and address concerning behavioral patterns before they escalate. The coming months will reveal whether this approach lives up to its promise or confirms the critics’ fears.