Judge refuses to release a man charged with planting pipe bombs on the eve of the Capitol riot

A federal judge has refused to order the pretrial release of a man charged with placing two pipe bombs near the national headquarters of the Democratic and Republican parties on the eve of a mob’s Jan. 6, 2021, attack on the U.S. Capitol.

U.S. District Judge Amir Ali ruled on Thursday that Brian J. Cole Jr. must remain in jail while awaiting trial. Ali upheld a decision by U.S. Magistrate Judge Matthew Sharbaugh, who ruled on Jan. 2 that no conditions of release can reasonably protect the public from the danger that Cole allegedly poses.

Cole, 30, pleaded not guilty to making and planting two pipe bombs outside the Republican National Committee and the Democratic National Committee headquarters in Washington, D.C., on the night of Jan. 5, 2021.

Cole, who lived with his parents in Woodbridge, Virginia, has been diagnosed with autism and obsessive-compulsive disorder. His attorneys say he has no criminal record.

Cole has remained jailed since his Dec. 4 arrest. Authorities said they used phone records and other evidence to identify him as a suspect in a crime that confounded the FBI for over four years.

Prosecutors said Cole confessed to trying to carry out “an extraordinary act of political violence.” Cole told investigators that he was unhappy with how leaders of both political parties responded to “questions” about the 2020 presidential election — and said “something just snapped,” according to prosecutors.

“While the defendant may have reached a psychological breaking point, his crimes were anything but impulsive,” they wrote. “Indeed, the defendant’s pipe bombs — and the fear and terror they instilled in the general public — were the product of weeks of premeditation and planning.”

Defense attorneys asked for Cole to be freed from jail and placed on home detention with electronic monitoring. They say a defense expert concluded that the devices found near the RNC and DNC headquarters were not viable explosive devices.

“In fact, there was no possibility of death, injury or destruction as the devices were harmless,” they wrote.

If convicted of both charges against him, Cole faces up to 10 years in prison on one count and up to 20 years on a second count, which also carries a five-year mandatory minimum sentence.

Kunzelman writes for the Associated Press.

South Korea adds 11 public institutions, delays watchdog designation

Koo Yun-cheol, South Korean finance minister and deputy prime minister for economic affairs, speaks during a meeting of economic ministers at the government complex in Seoul, South Korea, 28 January 2026. Photo by YONHAP / EPA

Jan. 29 (Asia Today) — South Korea’s Ministry of Economy and Finance on Wednesday designated 11 new public institutions, bringing the total to 342, while postponing a decision on whether to classify the Financial Supervisory Service as a public institution until next year.

The decision was made at a meeting of the Public Institution Management Committee chaired by Deputy Prime Minister and Finance Minister Koo Yun-cheol at the Government Complex Seoul.

The newly designated institutions met statutory criteria, including receiving government support exceeding 50% of total revenue, the ministry said.

They include the Korea Customs Information Service, Gadeokdo New Airport Construction Corporation, Child Support Enforcement Agency, National Incheon Maritime Museum, Korea Sports & Leisure, Korea Statistics Promotion Agency, Spatial Information Industry Promotion Agency, Korea Water Technology Certification Agency, National Agricultural Museum, Central Social Service Agency and the National Disaster Relief Association.

The ministry said designation of the Financial Supervisory Service was deferred to prioritize substantive operational reforms over formal classification. Officials cited concerns that adding public institution oversight could overlap with existing supervisory structures and undermine the watchdog’s autonomy and expertise.

As conditions for reconsideration, the government ordered the Financial Supervisory Service to strengthen democratic oversight by its supervising ministry, including mandatory consultation on personnel and organizational changes, expanded management disclosure through ALIO, and full implementation of the Financial Consumer Protection Improvement Roadmap announced last year.

The Public Institution Management Committee plans to review progress on those measures and reassess the watchdog’s designation status in 2027.

Koo said that while public institution designation could enhance transparency and public accountability, it could also create inefficiencies if layered on top of the existing supervision system.

“There is concern that overlapping management structures could weaken autonomy and professional expertise,” Koo said.

Separately, the ministry said it will disclose, for the first time since enactment of the Public Institutions Act in 2007, a list of entities that met designation criteria but were not classified as public institutions, along with the reasons.

The committee also approved changes to designation categories for the Korea Broadcasting Advertising Corporation and the Korea Legal Protection and Welfare Foundation.

Koo said the expanded disclosures are intended to make public institution management more transparent and easier for citizens to understand, while strengthening trust in the public sector.

— Reported by Asia Today; translated by UPI

© Asia Today. Unauthorized reproduction or redistribution prohibited.

Original Korean report: https://www.asiatoday.co.kr/kn/view.php?key=20260129010013790

Trump’s use of AI images pushes new boundaries, further eroding public trust, experts say

The Trump administration has not shied away from sharing AI-generated imagery online, embracing cartoonlike visuals and memes and promoting them on official White House channels.

But an edited — and realistic — image of civil rights attorney Nekima Levy Armstrong in tears after being arrested is raising new alarms about how the administration is blurring the lines between what is real and what is fake.

Department of Homeland Security Secretary Kristi Noem’s account posted the original image from Levy Armstrong’s arrest before the official White House account posted an altered image that showed her crying. The doctored picture is part of a deluge of AI-edited imagery that has been shared across the political spectrum since the fatal shootings of U.S. citizens Renee Good and Alex Pretti by U.S. Border Patrol officers in Minneapolis.

However, the White House’s use of artificial intelligence has troubled misinformation experts who fear the spreading of AI-generated or AI-edited images erodes public perception of the truth and sows distrust.

In response to criticism of the edited image of Levy Armstrong, White House officials doubled down on the post, with Deputy Communications Director Kaelan Dorr writing on X that the “memes will continue.” White House Deputy Press Secretary Abigail Jackson also shared a post mocking the criticism.

David Rand, a professor of information science at Cornell University, says calling the altered image a meme “certainly seems like an attempt to cast it as a joke or humorous post, like their prior cartoons. This presumably aims to shield them from criticism for posting manipulated media.” He said the purpose of sharing the altered arrest image seems “much more ambiguous” than the cartoonish images the administration has shared in the past.

Memes have always carried layered messages that are funny or informative to people who understand them, but indecipherable to outsiders. AI-enhanced or AI-edited imagery is just the latest tool the White House uses to engage the segment of Trump’s base that spends a lot of time online, said Zach Henry, a Republican communications consultant who founded Total Virality, an influencer marketing firm.

“People who are terminally online will see it and instantly recognize it as a meme,” he said. “Your grandparents may see it and not understand the meme, but because it looks real, it leads them to ask their kids or grandkids about it.”

All the better if it prompts a fierce reaction, which helps it go viral, said Henry, who generally praised the work of the White House’s social media team.

The creation and dissemination of altered images, especially when they are shared by credible sources, “crystallizes an idea of what’s happening, instead of showing what is actually happening,” said Michael A. Spikes, a professor at Northwestern University and news media literacy researcher.

“The government should be a place where you can trust the information, where you can say it’s accurate, because they have a responsibility to do so,” he said. “By sharing this kind of content, and creating this kind of content … it is eroding the trust — even though I’m always kind of skeptical of the term trust — but the trust we should have in our federal government to give us accurate, verified information. It’s a real loss, and it really worries me a lot.”

Spikes said he already sees the “institutional crises” around distrust in news organizations and higher education, and feels this behavior from official channels inflames those issues.

Ramesh Srinivasan, a professor at UCLA and the host of the “Utopias” podcast, said many people are now questioning where they can turn to for “trustable information.” “AI systems are only going to exacerbate, amplify and accelerate these problems of an absence of trust, an absence of even understanding what might be considered reality or truth or evidence,” he said.

Srinivasan said he feels the White House and other officials sharing AI-generated content not only invites everyday people to continue to post similar content but also grants permission to others who are in positions of credibility and power, such as policymakers, to share unlabeled synthetic content. He added that given that social media platforms tend to “algorithmically privilege” extreme and conspiratorial content — which AI generation tools can create with ease — “we’ve got a big, big set of challenges on our hands.”

An influx of AI-generated videos related to Immigration and Customs Enforcement actions, protests and interactions with citizens has already been proliferating on social media. After Good was shot by an ICE officer while she was in her car, several AI-generated videos began circulating of women driving away from ICE officers who told them to stop. There are also many fabricated videos circulating of immigration raids and of people confronting ICE officers, often yelling at them or throwing food in their faces.

Jeremy Carrasco, a content creator who specializes in media literacy and debunking viral AI videos, said the bulk of these videos are likely coming from accounts that are “engagement farming,” or looking to capitalize on clicks by generating content with popular keywords and search terms such as ICE. But he also said the videos are getting views from people who oppose ICE and DHS and could be watching them as “fan fiction,” or engaging in “wishful thinking,” hoping that they’re seeing real pushback against the organizations and their officers.

Still, Carrasco also believes that most viewers can’t tell if what they’re watching is fake, and questions whether they would know “what’s real or not when it actually matters, like when the stakes are a lot higher.”

Even when there are blatant signs of AI generation, like street signs with gibberish on them or other obvious errors, only in the “best-case scenario” would a viewer be savvy enough or be paying enough attention to register the use of AI.

This issue is, of course, not limited to news surrounding immigration enforcement and protests. Fabricated and misrepresented images following the capture of deposed Venezuelan leader Nicolás Maduro exploded online earlier this month. Experts, including Carrasco, think the spread of AI-generated political content will only become more commonplace.

Carrasco believes that the widespread implementation of a watermarking system that embeds information about the origin of a piece of media into its metadata layer could be a step toward a solution. The Coalition for Content Provenance and Authenticity has developed such a system, but Carrasco doesn’t think that will become extensively adopted for at least another year.

“It’s going to be an issue forever now,” he said. “I don’t think people understand how bad this is.”

Huamani writes for the Associated Press. AP writers Jonathan J. Cooper in Phoenix and Barbara Ortutay in San Francisco contributed to this report.


