
Roblox, Nevada settle over child-safety standards

Sophia D’Eramo plays on the online game platform Roblox in 2020 in Franklin, Mass. The state of Nevada and Roblox reached a settlement to better protect young gamers, the Nevada attorney general said Wednesday. File Photo by Emily Flynn/EPA

April 15 (UPI) — Nevada and the online gaming platform Roblox have reached a unique settlement that will help protect young online gamers and pour money into the state’s youth programs, the state’s attorney general said Wednesday.

“This settlement will create a safer environment for our children online,” Attorney General Aaron Ford told reporters during a press conference. “I hope that it will serve as a bellwether for how online interactive platforms allow our state’s youth to use the products.”

Nevada opened an investigation into children’s safety on the popular online game creation platform in 2024. Lawsuits in that state and others have alleged that Roblox failed to protect young gamers from online predators and other harms.

As part of the settlement, Roblox will spend about $10 million on non-digital youth programs in the state, plus contribute toward an online safety awareness program.

In addition, the company will start using stricter age-verification measures, which will restrict what children under certain ages can see and with whom they can communicate. These measures will include facial age-estimation technology, robust parental controls, expanded parental oversight and dedicated law enforcement support.

Roblox has also committed to using government-issued ID for age assurance as well as behavioral monitoring to identify users who may have been assigned the wrong age, Ford said during the press conference.

Roblox will also introduce tighter parental controls and a ban on encrypted messaging involving minors. If a parent account isn’t linked to a child account, the latter will be limited to a restricted child mode. Adults must earn a “trusted friend” label, which requires parental consent, before they can chat with users under the age of 13. The changes will also include limits on notifications during nighttime hours.

Roblox told UPI in a statement that, while it disputes the claims in the complaint, it is “pleased” to have reached a settlement with Ford, saying the agreement reflects the company’s “continued commitment to fostering online health and safety for kids.”

“Roblox is proud to have worked alongside Attorney General Ford to reach this landmark agreement, which builds on our work to establish a new standard for digital safety,” Roblox Chief Safety Officer Matt Kaufman said.

“This resolution creates a blueprint for how industry and regulators can work together to protect the next generation of digital citizens.”

Roblox told UPI that the agreement helped shape several safety measures, including two new age-based account types announced Monday: Roblox Kids for users ages 5 to 8 and Roblox Select for users ages 9 to 15.

Beginning in June, the accounts will “more closely align content access, communication settings and parental controls with a user’s age,” Roblox said Monday in a statement.


KISA launches project to develop security standards for physical AI

South Korea Minister of Economy and Finance Koo Yun-cheol (L) listens to an explanation about robots during a visit to an innovative AI company in Seoul, South Korea, on Feb. 27, 2026. Photo by YONHAP/EPA

April 7 (Asia Today) — South Korea’s internet security agency has launched a project to develop security standards and industry-specific protection models for physical artificial intelligence, as concerns grow over cyberattacks that could cause real-world damage in industrial settings.

The Korea Internet & Security Agency, or KISA, said Monday it will accept bids through April 21 for the project, titled Development of Physical AI Security Standards and Industry Expansion Security Models.

The initiative comes as South Korea steps up investment in physical AI, a field that applies artificial intelligence to machines and systems operating in real-world environments. Unlike conventional cyberattacks that mainly target data, attacks on physical AI systems could trigger equipment malfunctions, production shutdowns and other physical damage, officials said.

A KISA official said the agency is seeking to create practical security guidelines that companies can use during product development and operation to help prevent such risks.

The project is scheduled to run through mid-December after a contractor is selected. Its goal is to establish common security standards for what officials described as a Korean model for physical AI and to build a foundation for responding to emerging threats.

KISA said it plans to review domestic and international legal and regulatory trends related to AI security and convene a working group of experts from industry, academia and research institutes to identify technical and policy requirements.

The agency also plans to develop integrated security models that address both advanced AI threats and potential physical harm.

In addition to common standards, KISA said it will create customized security models for major industries including manufacturing, healthcare and mobility, based on field surveys and interviews with experts.

The agency said it plans to produce shared security guidelines along with five industry-specific standards and practical manuals that companies can apply across the planning, design and operation stages of physical AI products and services.

KISA said it expects the effort to strengthen the global competitiveness of South Korean companies while helping ensure that physical AI-based services are safer and more reliable for the public.

— Reported by Asia Today; translated by UPI

© Asia Today. Unauthorized reproduction or redistribution prohibited.

Original Korean report: https://www.asiatoday.co.kr/kn/view.php?key=20260407010002121
