A bipartisan bill is looking to end Section 230 protections for tech companies

Lawmakers from opposite sides of the aisle are seeking to sunset Section 230 of the Communications Decency Act because it has “outlived its usefulness.” House Energy and Commerce Committee Chair Cathy McMorris Rodgers and ranking member Frank Pallone, Jr. have introduced a bipartisan bill that would repeal the provision after December 31, 2025. In an op-ed for The Wall Street Journal, they acknowledged that Section 230 “helped transform the Internet from the days of ‘you’ve got mail’ to the global communications and commerce nexus it is today.” However, they said, big tech companies are now using the same law “to shield them from any responsibility or accountability” for the harm their platforms cause to Americans, especially children.

They added that lawmakers who previously tried to address Section 230’s problems have been unsuccessful because tech companies refused to cooperate in any meaningful way. Their bill would force tech companies to work with lawmakers for 18 months to craft and pass new legislation replacing the current version of Section 230. The new law would still allow free speech and innovation while encouraging companies “to be good stewards of their platforms.” Rodgers and Pallone said their bill would give companies a choice: help make the Internet a “safe, healthy place” or lose Section 230 protections entirely.

Section 230 protects online publishers from liability for content posted by their users. Companies like Meta and Google have used it many times to defeat lawsuits, but lawmakers have pushed for stricter oversight in recent years. Last year, a bipartisan group of senators introduced a bill amending the section to require large platforms to take down content within four days of a court deeming it illegal. Another bipartisan group also introduced the “No Section 230 Immunity for AI Act,” which seeks to hold companies like OpenAI accountable for harmful content, such as deepfake images or audio created to damage someone’s reputation.
