PP TUNAS Takes Effect, Child TikTok And YouTube Accounts Face Deletion

Indonesia will begin enforcing a new child-protection rule in digital spaces on March 28, 2026, and the change is expected to affect how children use major platforms such as TikTok and YouTube. Under the PP TUNAS framework, accounts belonging to users under 16 on high-risk apps can be deactivated if providers do not meet the new age-verification requirements.

The policy marks one of the strongest steps yet by the Indonesian government to reduce children’s exposure to harmful content, privacy risks, and online exploitation. The Ministry of Communication and Digital Affairs, known as Komdigi, said the rule is part of a broader push to make the country’s digital ecosystem safer for minors.

What PP TUNAS means for child accounts

PP TUNAS is the government’s child-protection policy for online platforms, and its technical implementation will start through Ministerial Regulation No. 9 of 2026. That regulation follows Government Regulation No. 17 of 2025, which serves as the main legal basis for the new digital child safety framework.

Komdigi said the policy targets users under 16 years old on applications considered high risk. In practice, this means platforms must tighten age checks and review accounts that may belong to children or early teens.

The government’s main message is clear: digital platforms must do more to protect children before they gain full access to risky services. This shift places more responsibility on platform operators, not just parents and schools.

Platforms directly affected

Eight major apps are in the first wave of enforcement. These services include some of the most widely used social media and entertainment platforms in Indonesia.

  1. YouTube
  2. TikTok
  3. Facebook
  4. Threads
  5. Instagram
  6. X, formerly Twitter
  7. Bigo Live
  8. Roblox

These platforms were selected because they are seen as having a higher potential impact on children’s behavior, privacy, and exposure to unsafe content. The government has not framed the move as a ban on children’s use of the internet, but as a stricter control system for platforms that carry greater risk.

Why the government is acting now

Komdigi describes child protection in the digital world as a national priority. Officials say online spaces have become more complex, and children now face risks that include manipulative content, unwanted contact, and exposure to material that may affect mental health.

Meutya Hafid, the Minister of Communication and Digital Affairs, has stressed that the state must be present to protect children online. Her office sees the new rule as a long-term investment in the country’s future, especially as children spend more time on mobile apps, social video platforms, and gaming environments.

The ministry also wants online identity protection to become part of the standard operating model for digital services. That includes stronger verification, better account monitoring, and faster action when a user is found to be below the minimum age.

How the account deletion and deactivation process may work

Under the new policy, platforms and electronic system operators, or PSEs, must verify user age more strictly. If an account is identified as belonging to a child under 16 on a high-risk platform, it may be disabled or removed depending on the platform’s enforcement process and the applicable safety rules.

The government’s immediate goal is to reduce children’s access to environments where harmful content and unsafe interactions can spread quickly. This also helps limit the collection of personal data from minors, which has become a growing concern across the global tech industry.

In simple terms, the rule pushes platforms to know who their young users are. If they cannot verify age properly, they may need to restrict access or shut down accounts that do not meet the minimum age standard.

What parents and guardians should watch

Parents should not wait until a child suddenly loses access to an account. The new policy may change login access, account recovery, and verification requests across major apps.

Here are the main steps families should prepare for:

  1. Check the age listed on each child’s account.
  2. Review the privacy settings on social media and gaming apps.
  3. Save important photos, chats, or school-related files before the rule takes effect.
  4. Monitor any verification notices from the platform.
  5. Talk to children about why the rule exists and what it means for online safety.

This process may feel disruptive at first, especially for families that use these apps for communication, school communities, or entertainment. Still, Komdigi argues the inconvenience is worth it if it reduces the long-term risks for children.

Why TikTok and YouTube are central to the debate

TikTok and YouTube sit at the center of the discussion because they are among the most used video platforms by children and teenagers. Their recommendation systems can show fast-moving, highly engaging content that is not always age-appropriate.

That makes them a major focus for regulators who want to reduce algorithm-driven exposure to harmful trends, misleading content, or addictive screen behavior. Although both platforms already have community rules and youth safety tools, Indonesia’s new policy adds a government-backed compliance layer.

The same logic applies to Meta-owned services such as Facebook, Instagram, and Threads, where children can be exposed to direct messaging, public interaction, and personalized recommendation feeds. Roblox and Bigo Live also fall under scrutiny because they combine social features, live interaction, and user-generated content.

Indonesia’s position in global digital regulation

Komdigi says Indonesia is becoming one of the first non-Western countries to adopt such a strict approach to child protection in digital platforms. That matters because many governments still rely on broad guidance rather than direct enforcement on age-based access.

The step may influence how other countries in the region think about child safety online. It also signals that digital policy in Southeast Asia is moving beyond content moderation and into platform accountability.

For tech companies, the message is also commercial and operational. Age verification, account suspension, and compliance reporting can now carry legal and reputational consequences in one of the world’s largest internet markets.

What happens next for platforms and users

The first challenge is technical readiness. Platforms must update their systems to identify younger users more accurately while avoiding mistakes that could lock out legitimate adult users.

The second challenge is trust. Families will want clear explanations, while companies will need to show that their processes are fair, transparent, and consistent with privacy protection principles.

The third challenge is enforcement. If a platform fails to implement the rule properly, it could face pressure from regulators as the government begins to test how seriously global apps will comply with Indonesia’s new child-safety standard.

For now, the key date is March 28, 2026. From that point forward, children under 16 on high-risk apps such as TikTok and YouTube may face account removal or deactivation if the platforms cannot confirm their age under the new PP TUNAS requirements.
