Philadelphia Sheriff's Campaign Removes AI-Generated Stories Amid Misinformation Concerns

The campaign team supporting Philadelphia Sheriff Rochelle Bilal acknowledged on Monday that numerous positive "news" stories on its website had been generated by ChatGPT, a generative AI chatbot.

After a Philadelphia Inquirer report revealed that the stories could not be found in local news archives, Bilal's campaign took down more than 30 articles produced by a consultant using the AI technology.

While the campaign maintained that the stories were based on real events, it acknowledged providing talking points to an outside consultant, who fed them to the AI service. The service then produced what the campaign itself described as "fake news articles" to promote those initiatives.

Large language models like OpenAI's ChatGPT can complete prompts quickly, but because they work by predicting likely text rather than retrieving facts, they are prone to errors known as hallucinations.

The use of such AI tools in writing work emails and other documents has become increasingly common but can pose risks if accuracy and fact-checking are not prioritized.

Mike Nellis, founder of the AI campaign tool Quiller, criticized the consultant's use of AI as "completely irresponsible" and "unethical," emphasizing the need for responsible use of such technology.

OpenAI's policies prohibit sharing output from its products to deceive others, and they also forbid using its systems for political campaigning or lobbying.

Despite bipartisan discussions in Congress about regulating AI tools in politics, no federal law has been enacted yet.

The controversy surrounding the AI-generated stories underscores concerns about misinformation potentially influencing voters and undermining democracy, as highlighted by critics like Brett Mandel, a former finance chief in Bilal's office.

Mandel, who raised concerns about office finances and filed a whistleblower suit against the office, expressed worries about the erosion of trust in institutions and truth in both local and national contexts.

The list of fabricated news stories, attributed to reputable outlets like the Inquirer and local broadcast stations, raises further questions about the dissemination of misinformation in the public sphere.

In response to the scrutiny, the Bilal campaign added a disclaimer to its website stating that it "makes no representations or warranties of any kind" regarding the accuracy of the information provided.

Concerns persist about the potential impact of such misinformation on public perception and democratic processes, underscoring the need for transparency and accountability in political communications.

This story has been updated to clarify OpenAI's policy regarding the use of ChatGPT for misleading purposes.