A nonprofit parents' coalition is asking multiple congressional committees to open an investigation into Meta for prioritizing engagement metrics that put children's safety at risk.
The call is part of a three-pronged campaign by the American Parents Coalition (APC), launched Thursday. It includes a letter to lawmakers calling for investigations, a new parental notification system to help parents stay informed about issues affecting their children at Meta and beyond, and mobile billboards outside Meta's Washington, D.C., and California headquarters calling out the company for failing families.
The APC campaign follows an April Wall Street Journal report that investigated how the company's engagement-driven approach has led to potential harm to children.
"This isn't the first time Meta has been caught putting technology in front of children that exposes them to inappropriate content," said APC executive director Alleigh Marré. "Parents across the United States should be extremely wary of their children's online activity, especially when it involves emerging technology like AI digital companions. This pattern of bad behavior from Meta shows the company can't be trusted and should be held responsible for failing to prioritize child safety."

The photo shows artwork from the mobile billboards displayed outside Meta's offices in Menlo Park, California, and Washington, D.C., as part of the campaign the American Parents Coalition launched against the tech company on Thursday. (American Parents Coalition)
The April Wall Street Journal investigation not only reported on internal concerns that Meta was crossing ethical lines to make its AI chatbot system more engaging, but also described how the report's authors tested the system themselves.
The reporters' test conversations found that Meta's AI chatbots engaged in, and sometimes escalated, sexual discussions even when the chatbot knew the user was a minor. The investigation also found that the AI chatbot could be prompted to take on the persona of a minor while engaging the user in a sexually explicit conversation.
In some cases, the test conversations were able to get the Meta chatbot to discuss romantic encounters in the voices of Disney movie characters.
In some cases, test conversations were able to get the Meta chatbot to discuss romantic encounters in the voices of Disney movie characters, according to a recent report. (Getty Images/Meta)
"The report referenced in this letter does not reflect how people experience these AIs, which for teens include valuable uses like helping with homework and learning new skills," a Meta spokesperson told Fox News Digital in response to the campaign. "We recognize parents' concerns about these new technologies, which is why we have put age-appropriate safeguards in place that allow parents to see if their teens have been chatting with AIs and to manage the time they spend on our apps."
According to the Journal's reporting, which Meta disputes, the company made multiple internal decisions to loosen guardrails around its chatbots to make them as engaging as possible. The reporting says Meta carved out an exemption allowing "explicit" content within its chatbots as long as it was in the context of romantic role-play.
At the same time, Meta has taken steps to improve the safety of its products for underage users, such as the introduction of Instagram "teen accounts" with built-in safety protections, which rolled out in 2024 amid increased scrutiny of the company.
In April, Meta announced the expansion of these accounts to Facebook and Messenger. Under these accounts, minors are barred from sexually explicit conversations with chatbots.
Meta also has parental supervision tools for its AI chatbot system that are supposed to show parents who their children are talking to on a regular basis, including chatbots, and it has tools to shut down accounts that engage in sexual exploitation.
Coinciding with APC's campaign against Meta, the group launched a new website, DangersofMeta.com, with links to APC's letter to members of Congress, images of the mobile billboards it is deploying, a link to the group's new "Lookout" parental notification system, and information on Meta's recent safety issues.