Kids and AI: Former White House AI chief on preventing harm




When Bruce Reed served in the Biden administration as the president’s deputy chief of staff, he led the effort to work with leading AI companies like Anthropic and OpenAI on voluntary commitments to ensure the safety of their products.

Reed has since left the White House, but he’s not finished with AI, a technology he described to Mashable as “exciting, amazing, sometimes terrifying.”

He is continuing that work at Common Sense Media, a nonprofit organization that supports children and parents as they navigate media and technology. Popularly known for its media ratings of children’s content, including video games, TV shows, and movies, the nonprofit also conducts research and advocacy.


Reed, a veteran of three Democratic presidential administrations, will lead Common Sense AI, which advocates for more comprehensive AI legislation in California. Common Sense AI has already backed two state bills that would, respectively, establish a transparency system for measuring the risk of AI products to young users and protect AI whistleblowers from retaliation when they report a “critical risk.”

Reed argues that we’re in a critical window to implement AI safeguards, particularly for minors, before certain business practices become entrenched and harder to regulate.

“When social media companies rushed to move fast and break things, and ignored kids’ privacy and safety, we ended up with a youth mental health crisis,” Reed says. “Nobody wants to see that happen again.”

Parents’ concerns about AI chatbot harms

While some experts dispute that social media drove an increase in mental health conditions among youth, parents are already stepping forward with grave concerns about how their children are engaging with AI chatbots.

Last fall, bereaved mother Megan Garcia filed a lawsuit against Character.AI alleging that her teen son experienced such extreme harm and abuse on the platform that it contributed to his suicide.

Soon after, two mothers in Texas filed another lawsuit against Character.AI alleging that the company knowingly exposed their children to harmful and sexualized content. One plaintiff’s teen son was allegedly encouraged by a chatbot to kill his parents.

Common Sense issued its own parental guidelines on AI companions last fall, and Character.AI has since added new safety and parental control features.

California, where Common Sense Media is headquartered, is an ideal place to pass legislation that addresses some of the emerging risks of AI, Reed says. He was instrumental in drafting the state’s consumer privacy law in 2018. In the absence of a federal bill, the state legislation effectively became the national standard because so many tech companies are based in California.


The politics of AI safety

Reed also doesn’t seem intimidated by the shifting political calculus now that Donald Trump is back in the White House and has given AI companies the impression that they have carte blanche to pursue “dominance.”

One of Trump’s executive orders rescinded AI safety testing rules that Biden himself put into effect. Meanwhile, the companies that might have once voluntarily worked with the Biden administration on safety commitments are now appealing to Trump for less regulation.

Despite the rhetoric and lobbying, Reed is convinced that it’s in AI companies’ long-term best interest to test their products and ensure their safety before putting them on the market.

After all, lawsuits that force companies to reveal their inner workings and adopt safety measures tend to create bad headlines, reduce investor confidence, and sow public distrust.

Reed is also aware of the narrative that the Biden administration intended to stifle AI innovation.

Critics in Silicon Valley, including venture capitalist Marc Andreessen, have alleged that the Biden administration wanted to take control of, or “kill,” AI. (Andreessen described a meeting with Biden officials on the topic of AI as “absolutely horrifying,” and said the alleged exchange helped convince him to endorse and financially support Trump.)

Reed participated in numerous meetings with major tech stakeholders, including Andreessen. He politely disagrees that anything of the sort described by critics occurred in those conversations.

“The thrum in Silicon Valley has tried to suggest that the Biden administration somehow overreached on AI, which isn’t true,” he says. “We didn’t have the regulatory authority to overreach, even if we wanted to.”

Reed instead believes the main objection from Silicon Valley tech investors like Andreessen to the Biden administration’s policies had to do with the Securities and Exchange Commission’s attempts to crack down on cryptocurrency companies. Andreessen backed some of these companies, and the Trump administration has dropped a number of the SEC’s lawsuits in recent weeks.

Pro-innovation, pro-safety

Regardless of the characterization of Biden officials as anti-AI, Reed says he supports innovation — and wants to make sure that the companies get it “right” from the beginning.

“It’s important for America to win the AI race, not China, but it’s also important for America to set the standard for AI trust, security, and safety, because China’s not going to do that.”

Reed says that an area of possible bipartisan and industry cooperation, for example, could be tackling explicit deepfakes, a technology that has ensnared teens and adolescents with devastating consequences.

The Biden White House laid out its own strategy for curbing the non-consensual imagery, and First Lady Melania Trump has backed a bill giving victims stronger protections.

Clearly, Reed says, this is not an area where American companies need to pursue dominance at all costs: “We don’t need to lead the world in deepfakes — we want to lead the world in stopping deepfakes.”

Reed says there’s no time to waste on any front, particularly as it relates to ensuring that AI products are designed with children’s privacy and safety in mind.

“We can achieve the most powerful AI and still make sure that privacy is protected and that companies are transparent about what they’re doing to make their products safe,” he says.


