A bill to remove legal protections for AI projects is being introduced in the US

US Senators Richard Blumenthal (Democrat) and Josh Hawley (Republican) introduced a bill in the US Senate last week that would strip AI companies of the special legal protections US law grants to online platforms, Cointelegraph reported.

Title 47, Section 230 of the Communications Decency Act of 1996 shields online service providers, including social networks, from liability for content posted by their users. They generally cannot be sued over illegal user content, provided they make a good-faith effort to remove such content once it is discovered.

Social media escapes responsibility

Critics of the provision say it allows social networks and other online services to avoid liability for content posted on their sites. The US Supreme Court recently ruled on the provision in a case in which the plaintiffs alleged that a social media company was liable for damages for hosting and promoting terrorism-related content.

The Supreme Court held that a social media platform cannot be held liable for the content recommendations its algorithms make, just as a telecommunications company or email provider cannot be held responsible for content sent using its services.

Does the law even apply to artificial intelligence?

At this point, it is unclear whether Section 230 applies to the AI projects of Google, OpenAI, and other companies at all. OpenAI CEO Sam Altman said at a recent hearing in the US Senate that, in his view, Section 230 does not apply to his company.

Future debates about Section 230’s relevance to AI-generated content may hinge on the definition of an “online service,” which the statutory text does not expressly address. GPT’s API, for example, has made it possible to launch a myriad of AI services, including ones in blockchain and cryptocurrency. If the law does apply to AI, it may be difficult to hold individuals or companies liable for harm caused by ill-advised or misleading information generated by AI.
