iOS 18.2 has a child safety feature that can blur nude content and report it to Apple


In iOS 18.2, Apple adds a new feature that revives some of the intentions behind its discontinued CSAM scanning plans, this time without breaking end-to-end encryption or adding government backdoors. Launching first in Australia, the company's Communication Safety expansion uses on-device machine learning to detect and blur nude content, adding warnings and requiring confirmation before users can proceed. If the child is under 13, they cannot continue without entering the device's Screen Time passcode.

If the device’s on-board machine learning detects nude content, the feature automatically blurs the photo or video, displays a warning that the content may be sensitive, and suggests ways to get help. Options include leaving a chat or group thread, blocking a person, and accessing online safety resources.

The feature also displays a message reassuring the child that it is okay not to view the content or to leave the conversation. There is also an option to message a parent or guardian. If the child is 13 or older, they can still confirm that they want to proceed after receiving these warnings, with repeated reminders that it's okay to opt out and that extra help is available. According to The Guardian, it also includes an option to report images and videos to Apple.

Two screens showing the new iPhone child safety feature.

Apple

This feature analyzes photos and videos in Messages, AirDrop, Contact Posters (in the Phone or Contacts app), and FaceTime video messages on iPhone and iPad. In addition, it scans content in "some third-party apps" when a child chooses a photo or video to share through them.

Supported apps differ slightly on other devices. On the Mac, it scans Messages and some third-party apps when users choose content to share through them. On Apple Watch, it covers Messages, Contact Posters, and FaceTime video messages. Finally, on Vision Pro, the feature scans Messages, AirDrop, and some third-party apps (under the same conditions mentioned above).
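The article doesn't say how third-party apps hook into this, but Apple's public SensitiveContentAnalysis framework (iOS 17 and later) exposes the same on-device nudity detection to developers, so it's a reasonable assumption that participating apps use it. The sketch below shows that framework, not Apple's internal implementation; SCSensitivityAnalyzer, analysisPolicy, analyzeImage(at:), and isSensitive are real API names, while shouldBlur(imageAt:) is a hypothetical helper, and the framework additionally requires a Sensitive Content Analysis entitlement from Apple.

```swift
import Foundation
import SensitiveContentAnalysis

/// Minimal sketch, assuming a third-party app adopts Apple's public
/// SensitiveContentAnalysis framework to check an image before display.
func shouldBlur(imageAt url: URL) async -> Bool {
    let analyzer = SCSensitivityAnalyzer()

    // The policy stays .disabled unless the user (or a parent, via
    // Screen Time) has turned on Communication Safety or Sensitive
    // Content Warnings on the device.
    guard analyzer.analysisPolicy != .disabled else { return false }

    do {
        // Analysis runs entirely on device; nothing is uploaded.
        let analysis = try await analyzer.analyzeImage(at: url)
        return analysis.isSensitive
    } catch {
        // Hypothetical policy choice: treat analysis failures as
        // non-sensitive and surface the error for debugging.
        print("Sensitivity analysis failed: \(error)")
        return false
    }
}
```

A caller would then blur the image and show the warning UI whenever this returns true, mirroring the system behavior described above.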

The feature requires iOS 18, iPadOS 18, macOS Sequoia, or visionOS 2.

The Guardian reports that Apple plans to expand the feature globally after the Australian rollout. The company likely chose Australia for a reason: the country is gearing up for new regulations that require Big Tech to police child abuse and terrorist content. Following pushback, Australia agreed to amend the rules so that companies are not required to break end-to-end encryption or weaken security, adding a clause that scanning is mandated only "where technically feasible." Companies must comply by the end of the year.

User privacy and security were at the heart of the controversy surrounding Apple's infamous attempt to police CSAM. In 2021, the company prepared to deploy a system that would scan for child sexual abuse imagery online, with flagged content then sent to human reviewers. (This came as a shock given Apple's history of standing up to the FBI over its attempt to unlock a terrorist's iPhone.) Privacy and security experts argued that the feature would open a backdoor for authoritarian regimes to spy on their own citizens, even where no abuse material exists. Apple scrapped the feature the following year, (indirectly) leading to the more balanced child safety feature announced today.

Once it rolls out globally, you can enable the feature under Settings > Screen Time > Communication Safety. That setting has been turned on by default since iOS 17.


