Pragmatic idealist. Worked on Ubuntu Phone. Inkscape co-founder. Probably human.

This Dallas Airline Was Just Named the Best in the Country


For those who want the experience of flying private but at a more affordable price, Dallas-based JSX is increasingly becoming the way to do so. The industry—and the flying public—are taking notice. Travel + Leisure released its annual industry rankings last week, and JSX was named the top domestic airline in the nation—with a reader … Continued

The post This Dallas Airline Was Just Named the Best in the Country appeared first on D Magazine.




Surprising no one, new research says AI Overviews cause massive drop in search clicks


Google's search results have undergone a seismic shift over the past year as AI fever has continued to escalate among the tech giants. Nowhere is this change more apparent than right at the top of Google's storied results page, which is now home to AI Overviews. Google contends these Gemini-based answers don't take traffic away from websites, but a new analysis from the Pew Research Center says otherwise. Its analysis shows that searches with AI summaries reduce clicks, and their prevalence is increasing.

Google began testing AI Overviews as the "search generative experience" in May 2023, and just a year later, they were an official part of the search engine results page (SERP). Many sites (including this one) have noticed changes to their traffic in the wake of this move, but Google has brushed off concerns about how this could affect the sites from which it collects all that data.

SEO experts have disagreed with Google's stance on how AI affects web traffic, and the newly released Pew study backs them up. The Pew Research Center analyzed data from 900 users of the Ipsos KnowledgePanel collected in March 2025. The analysis shows that among the test group, users were much less likely to click on search results when the page included an AI Overview.

Chart: Pew AI Overviews stats. Credit: Pew Research Center

Pew reports that searches without an AI answer resulted in a click rate of 15 percent. On SERPs with AI Overviews, the rate of clicks to other sites drops by almost half, to 8 percent. Google has also, on several occasions, claimed that people click on the links cited in AI Overviews, but Pew found that just 1 percent of AI Overviews produced a click on a source. These sources are most frequently Wikipedia, YouTube, and Reddit, which collectively account for 15 percent of all AI sources.

And perhaps more troubling, Google users are more likely to end their browsing session after seeing an AI Overview. That suggests that many people are seeing information generated by a robot, and their investigation stops there. Unfortunately for these people, all forms of generative AI are prone to "hallucinations" that cause them to provide incorrect information. So more people could be walking away from a search with the wrong information.

Image: An AI Overview on a phone. AI Overviews are integrated with Google's results, and they are appearing on more searches all the time. Credit: Google

This problem is unlikely to improve over time. Since launching AI Overviews, Google has repeatedly expanded the number of searches that get robot summaries. The Pew Research Center says that about 1 in 5 searches now have AI Overviews. Generally, the more words in a search, the more likely it is to trigger an AI Overview, and that's especially true for searches phrased as questions. The research shows that 60 percent of questions and 36 percent of full-sentence searches are answered by the AI.

This research provides more evidence that Google's use of AI is changing the way people gather information and interact with search results. The trends are bad for web publishing, but Google's profits have never been higher. Funny how that works.


The Word


Tesla skepticism continues to grow, robotaxi demo fails to impress Austin


Tesla’s eroding popularity with Americans shows little sign of abating. Each month, the Electric Vehicle Intelligence Report surveys thousands of consumers to gauge attitudes on EV adoption, autonomous driving, and the automakers that are developing those technologies. Toyota, which only recently started selling enough EVs to be included in the survey, currently has the highest net-positive score and the highest “view intensity score”—the percentage of consumers who have a very positive view of a brand minus the ones who have a very negative view—despite selling just a fairly lackluster EV to date. Meanwhile, the brand that actually popularized the EV, moving it from compliance car and milk float to something desirable, has fallen even further into negative territory in July.

Just 26 percent of survey participants still have a somewhat or very positive view of Tesla. But 39 percent have a somewhat or very negative view of the company, with just 14 percent being unfamiliar or having no opinion. That’s a net positive view of -13, but Tesla’s view intensity score is -16, meaning a lot more people really don’t like the company compared to the ones who really do. The problem is also growing over time: In April, Tesla still had a net positive view of -7.
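To make the scoring concrete, here is a minimal sketch (in Python, not EVIR's actual methodology) of how those two measures fall out of survey shares. Only the 26 and 39 percent figures come from the survey; the split of "very" positive and negative responses below is hypothetical, chosen so the result matches the reported -16:

    def net_positive_view(positive_pct, negative_pct):
        # Share with a somewhat or very positive view minus the share
        # with a somewhat or very negative view.
        return positive_pct - negative_pct

    def view_intensity(very_positive_pct, very_negative_pct):
        # Share with a *very* positive view minus the share with a
        # *very* negative view.
        return very_positive_pct - very_negative_pct

    # Tesla, July survey: 26% somewhat/very positive, 39% somewhat/very negative.
    print(net_positive_view(26, 39))  # -13
    # Hypothetical 10%/26% split of the "very" responses, for illustration only.
    print(view_intensity(10, 26))     # -16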

Tesla remained at the bottom of the charts when EVIR looked more closely at the demographic data. Tesla was the least-positively viewed car company regardless of income (the effect was most pronounced among those earning less than $75,000), geography (suburbanites held it in the most disdain), and age (respondents over 65 were the most negative).

Vinfast is the only other automaker with a negative net-positive view and view intensity score, but 92 percent of survey respondents were unfamiliar with the Vietnamese automaker or had no opinion about it.

When asked which brands they trusted, the survey data mostly mirrored the positive versus negative brand perception. Only Tesla and Vinfast have negative net trust scores, and Tesla also has the lowest “trust integrity score” (those who say they trust a brand “a lot” minus those who distrust that brand “a lot”), at -19.

At least Elon Musk’s car company can avoid the ignominy of coming last in perceptions of each EV brand’s safety. Musk regularly touted misleading statistics to create an image of a car company with unparalleled safety. But after dozens of fatal crashes involving Teslas, frequently with the involvement of the company’s often-investigated driver assistance systems, it seems much of the public has been paying attention: just 52 percent think of Teslas as safe, the second-worst score (ahead of only Vinfast).

Robotaxi rollout was a bust

EVIR also investigated attitudes toward AVs and robotaxis. Only 1 percent of the more than 8,000 people surveyed had ridden in a robotaxi and would do it again, the same percentage that would not care to repeat the experience. More than twice as many people (46 percent) say they would never consider riding in a robotaxi than say they would consider it (21 percent). And more than half somewhat (22 percent) or strongly (31 percent) believe the technology should not be legal.

Robotaxis have been important to Tesla for some time now. Forget making money by selling cars—instead for several years now Musk has told us the future of Tesla is humanoid robots doing manual labor and a giant fleet of robotaxis driving the streets, making every other car brand irrelevant. Austin, Texas, was to be ground zero for the Tesla robotaxi, thanks to a highly liberal attitude by conservative lawmakers toward private companies experimenting on public roads there.

But just under two-thirds (65 percent) of those surveyed by EVIR were unaware that Tesla had started demoing its autonomous vehicles in the city in late June. Only 3 percent considered themselves well-informed.

That lucrative autonomous future started to look a little less likely once those survey respondents were given an excerpt of an article from The Wall Street Journal about the robotaxi rollout. The article included information that will be familiar to readers of Ars Technica, including the fact that Tesla relies on a camera-only system that can be blinded by sunlight, and after reading it, half of those surveyed said they were somewhat or much less interested in using a Tesla robotaxi. Even more—53 percent—were somewhat or much less convinced that Tesla’s robotaxis are safe.

And there’s little worse for an automaker than being perceived as unsafe.


It’s “frighteningly likely” many US courts will overlook AI errors, expert says


Order in the court! Order in the court! Judges are facing outcry over a suspected AI-generated order in a court.

Fueling nightmares that AI may soon decide legal battles, a Georgia court of appeals judge, Jeff Watkins, explained why a three-judge panel vacated an order last month that appears to be the first known ruling in which a judge sided with someone seemingly relying on fake AI-generated case citations to win a legal fight.

Now, experts are warning that judges overlooking AI hallucinations in court filings could easily become commonplace, especially in the typically overwhelmed lower courts. And so far, only two states have moved to force judges to sharpen their tech competencies and adapt so they can spot AI red flags and theoretically stop disruptions to the justice system at all levels.

The recently vacated order came in a Georgia divorce dispute, where Watkins explained that the order itself was drafted by the husband's lawyer, Diana Lynch. That's a common practice in many courts, where overburdened judges historically rely on lawyers to draft orders. But that protocol today faces heightened scrutiny as lawyers and non-lawyers increasingly rely on AI to compose and research legal filings, and judges risk rubberstamping fake opinions by not carefully scrutinizing AI-generated citations.

The errant order partly relied on "two fictitious cases" to deny the wife's petition—which Watkins suggested were "possibly 'hallucinations' made up by generative-artificial intelligence"—as well as two cases that had "nothing to do" with the wife's petition.

Lynch was hit with $2,500 in sanctions after the wife appealed, and the husband's response—which also appeared to be prepared by Lynch—cited 11 additional cases that were "either hallucinated" or irrelevant. Watkins was further peeved that Lynch supported a request for attorney's fees for the appeal by citing "one of the new hallucinated cases," writing it added "insult to injury."

Worryingly, the judge could not confirm whether the fake cases were generated by AI or even determine if Lynch inserted the bogus cases into the court filings, indicating how hard it can be for courts to hold lawyers accountable for suspected AI hallucinations. Lynch did not respond to Ars' request to comment, and her website appeared to be taken down following media attention to the case.

But Watkins noted that "the irregularities in these filings suggest that they were drafted using generative AI" while warning that many "harms flow from the submission of fake opinions." Exposing deceptions can waste time and money, and AI misuse can deprive people of raising their best arguments. Fake orders can also soil judges' and courts' reputations and promote "cynicism" in the justice system. If left unchecked, Watkins warned, these harms could pave the way to a future where a "litigant may be tempted to defy a judicial ruling by disingenuously claiming doubt about its authenticity."

"We have no information regarding why Appellee’s Brief repeatedly cites to nonexistent cases and can only speculate that the Brief may have been prepared by AI," Watkins wrote.

Ultimately, Watkins remanded the case, partly because the fake cases made it impossible for the appeals court to adequately review the wife's petition to void the prior order. But no matter the outcome of the Georgia case, the initial order will likely forever be remembered as a cautionary tale for judges increasingly scrutinized for failures to catch AI misuses in court.

“Frighteningly likely” judge’s AI misstep will be repeated

John Browning, a retired justice on Texas' Fifth Court of Appeals and now a full-time law professor at Faulkner University, last year published a law article Watkins cited that warned of the ethical risks of lawyers using AI. In the article, Browning emphasized that the biggest concern at that point was that lawyers "will use generative AI to produce work product they treat as a final draft, without confirming the accuracy of the information contained therein or without applying their own independent professional judgment."

Today, judges are increasingly drawing the same scrutiny, and Browning told Ars he thinks it's "frighteningly likely that we will see more cases" like the Georgia divorce dispute, in which "a trial court unwittingly incorporates bogus case citations that an attorney includes in a proposed order" or even potentially in "proposed findings of fact and conclusions of law."

"I can envision such a scenario in any number of situations in which a trial judge maintains a heavy docket and looks to counsel to work cooperatively in submitting proposed orders, including not just family law cases but other civil and even criminal matters," Browning told Ars.

According to reporting from the National Center for State Courts, a nonprofit representing court leaders and professionals who are advocating for better judicial resources, AI tools like ChatGPT have made it easier for high-volume filers and unrepresented litigants who can't afford attorneys to file more cases, potentially further bogging down courts.

Peter Henderson, a researcher who runs the Princeton Language+Law, Artificial Intelligence, & Society (POLARIS) Lab, told Ars that he expects cases like the Georgia divorce dispute aren't happening every day just yet.

It's likely that a "few hallucinated citations go overlooked" because generally, fake cases are flagged through "the adversarial nature of the US legal system," he suggested. Browning further noted that trial judges are generally "very diligent in spotting when a lawyer is citing questionable authority or misleading the court about what a real case actually said or stood for."

Henderson agreed with Browning that "in courts with much higher case loads and less adversarial process, this may happen more often." But Henderson noted that the appeals court catching the fake cases is an example of the adversarial process working.

While that's true in this case, it seems likely that someone exhausted by a divorce, for example, may not pursue an appeal if they lack the energy or resources to discover and overturn an errant order.

Judges’ AI competency increasingly questioned

While recent history confirms that lawyers risk being sanctioned, fired from their firms, or suspended from practicing law for citing fake AI-generated cases, judges will likely only risk embarrassment for failing to catch lawyers' errors or even for using AI to research their own opinions.

Not every judge is prepared to embrace AI without proper vetting, though. To shield the legal system, some judges have banned AI. Others have required disclosures—with some even demanding to know which specific AI tool was used—but that solution has not caught on everywhere.

Even if all courts required disclosures, Browning pointed out that disclosures still aren't a perfect solution since "it may be difficult for lawyers to even discern whether they have used generative AI," as AI features become increasingly embedded in popular legal tools. One day, it "may eventually become unreasonable to expect" lawyers "to verify every generative AI output," Browning suggested.

Most likely—as a judicial ethics panel from Michigan has concluded—judges will determine "the best course of action for their courts with the ever-expanding use of AI," Browning's article noted. And the former justice told Ars that's why education will be key, for both lawyers and judges, as AI advances and becomes more mainstream in court systems.

In an upcoming summer 2025 article in The Journal of Appellate Practice & Process, "The Dawn of the AI Judge," Browning attempts to soothe readers by saying that AI isn't yet fueling a legal dystopia. And humans are unlikely to face "robot judges" spouting AI-generated opinions any time soon, the former justice suggested.

Standing in the way of that, at least two states—Michigan and West Virginia—"have already issued judicial ethics opinions requiring judges to be 'tech competent' when it comes to AI," Browning told Ars. And "other state supreme courts have adopted official policies regarding AI," he noted, further pressuring judges to bone up on AI.

Meanwhile, several states have set up task forces to monitor their regional court systems and issue AI guidance, while states like Virginia and Montana have passed laws requiring human oversight for any AI systems used in criminal justice decisions.

Judges must prepare to spot obvious AI red flags

Until courts figure out how to navigate AI—a process that may look different from court to court—Browning advocates for more education and ethical guidance to steer judges' use of and attitudes toward AI. That could help judges avoid both ignorance of AI's many pitfalls and overconfidence in AI outputs, potentially keeping hallucinations, biases, and evidentiary problems from sneaking past human-review safeguards and scrambling the court system.

An overlooked part of educating judges could be exposing AI's influence so far in courts across the US. Henderson's team is planning research that tracks which models attorneys are using most in courts. That could reveal "the potential legal arguments that these models are pushing" to sway courts—and which judicial interventions might be needed, Henderson told Ars.

"Over the next few years, researchers—like those in our group, the POLARIS Lab—will need to develop new ways to track the massive influence that AI will have and understand ways to intervene," Henderson told Ars. "For example, is any model pushing a particular perspective on legal doctrine across many different cases? Was it explicitly trained or instructed to do so?"

Henderson also advocates for "an open, free centralized repository of case law," which would make it easier for everyone to check for fake AI citations. "With such a repository, it is easier for groups like ours to build tools that can quickly and accurately verify citations," Henderson said. That could be a significant improvement to the current decentralized court reporting system that often obscures case information behind various paywalls.
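As a rough illustration of why a centralized repository would help, here is a minimal sketch (hypothetical data; no real repository or API is implied) of the kind of lookup such a tool could perform: extract the citations from a filing and surface any that do not resolve against the repository for human review.

    # Hypothetical local snapshot of a case-law repository: a set of
    # normalized, known-good citations (illustrative entries only).
    KNOWN_CITATIONS = {
        "347 U.S. 483",
        "550 U.S. 544",
    }

    def flag_unverified(citations):
        # Return the citations from a filing that are absent from the
        # repository and therefore need a human to verify them.
        return [c for c in citations if c.strip() not in KNOWN_CITATIONS]

    # The second citation does not resolve, so it gets flagged for review.
    print(flag_unverified(["347 U.S. 483", "999 F.3d 123456"]))
    # ['999 F.3d 123456']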

Dazza Greenwood, who co-chairs MIT's Task Force on Responsible Use of Generative AI for Law, did not have time to send comments but pointed Ars to a LinkedIn thread where he suggested that a structural response may be needed to ensure that all fake AI citations are caught every time.

He recommended that courts create "a bounty system whereby counter-parties or other officers of the court receive sanctions payouts for fabricated cases cited in judicial filings that they reported first." That way, lawyers will know that their work will "always" be checked and thus may shift their behavior if they've been automatically filing AI-drafted documents. In turn, that could alleviate pressure on judges to serve as watchdogs. It also wouldn't cost much: the sanctions that offending lawyers already pay would simply be redistributed to the spotters who reported the fabricated cases.

Novel solutions like this may be necessary, Greenwood suggested. Responding to a question asking if "shame and sanctions" are enough to stop AI hallucinations in court, Greenwood said that eliminating AI errors is imperative because the problem "gives both otherwise generally good lawyers and otherwise generally good technology a bad name." Continuing to rely on AI bans or lawyer suspensions as the preferred remedy risks draining court resources just as caseloads are likely to spike, rather than confronting the problem head-on.

Of course, there's no guarantee that the bounty system would work. But, Greenwood asked, "would the fact of such definite confidence that your cites will be individually checked and fabricated cites reported be enough to finally... convince lawyers who cut these corners that they should not cut these corners?"

In the absence of a fake-case detector like the one Henderson wants to build, experts told Ars that there are some obvious red flags judges can watch for to catch AI-hallucinated filings.

Any case number with "123456" in it probably warrants review, Henderson told Ars. And Browning noted that AI tends to mix up locations for cases, too. "For example, a cite to a purported Texas case that has a 'S.E. 2d' reporter wouldn't make sense, since Texas cases would be found in the Southwest Reporter," Browning said, noting that some appellate judges have already relied on this red flag to catch AI misuses.
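Those two red flags are simple enough to check mechanically. The sketch below is hypothetical code, and its reporter table is a tiny illustrative sample rather than an authoritative one; it flags placeholder-style case numbers and a reporter that does not match the cited jurisdiction:

    import re

    # Illustrative subset of which regional reporters a state's cases appear in.
    STATE_REPORTERS = {
        "Tex.": {"S.W.", "S.W.2d", "S.W.3d"},  # Texas: Southwestern Reporter
        "Ga.": {"S.E.", "S.E.2d"},             # Georgia: Southeastern Reporter
    }

    def red_flags(citation, state):
        # Return human-readable warnings for a single citation string.
        flags = []
        if "123456" in citation:
            flags.append("placeholder-looking case number")
        reporter = re.search(r"[A-Z]\.[A-Za-z.0-9]*", citation)
        if reporter and reporter.group(0) not in STATE_REPORTERS.get(state, set()):
            flags.append("reporter " + reporter.group(0) + " is unexpected for " + state)
        return flags

    # A purported Texas case printed in the Southeastern Reporter stands out.
    print(red_flags("123 S.E.2d 456", "Tex."))
    # ['reporter S.E.2d is unexpected for Tex.']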

Those red flags would perhaps be easier to check with the open source tool that Henderson's lab wants to make, but Browning said there are other tell-tale signs of AI usage that anyone who has ever used a chatbot is likely familiar with.

"Sometimes a red flag is the language cited from the hallucinated case; if it has some of the stilted language that can sometimes betray AI use, it might be a hallucination," Browning said.

Judges already issuing AI-assisted opinions

Several states have assembled task forces like Greenwood's to assess the risks and benefits of using AI in courts. In Georgia, the Judicial Council of Georgia Ad Hoc Committee on Artificial Intelligence and the Courts released a report in early July providing "recommendations to help maintain public trust and confidence in the judicial system as the use of AI increases" in that state.

Adopting the committee's recommendations could establish "long-term leadership and governance"; a repository of approved AI tools, education, and training for judicial professionals; and more transparency on AI used in Georgia courts. But the committee expects it will take three years to implement those recommendations while AI use continues to grow.

Possibly complicating things further as judges start to explore using AI assistants to help draft their filings, the committee concluded that it's still too early to tell if the judges' code of conduct should be changed to prevent "unintentional use of biased algorithms, improper delegation to automated tools, or misuse of AI-generated data in judicial decision-making." That means, at least for now, that there will be no code-of-conduct changes in Georgia, where the only case in which AI hallucinations are believed to have swayed a judge has been found.

Notably, the committee's report also confirmed that there are no role models for courts to follow, as "there are no well-established regulatory environments with respect to the adoption of AI technologies by judicial systems." Browning, who chaired a now-defunct Texas AI task force, told Ars that judges lacking guidance will need to stay on their toes to avoid trampling legal rights. (A spokesperson for the State Bar of Texas told Ars the task force's work "concluded" and "resulted in the creation of the new standing committee on Emerging Technology," which offers general tips and guidance for judges in a recently launched AI Toolkit.)

"While I definitely think lawyers have their own duties regarding AI use, I believe that judges have a similar responsibility to be vigilant when it comes to AI use as well," Browning said.

Judges will continue sorting through AI-fueled submissions not just from pro se litigants representing themselves but also from up-and-coming young lawyers who may be more inclined to use AI, and even seasoned lawyers who have been sanctioned up to $5,000 for failing to check AI drafts, Browning suggested.

In his upcoming "AI Judge" article, Browning points to at least one judge, 11th Circuit Court of Appeals Judge Kevin Newsom, who has used AI as a "mini experiment" in preparing opinions for both a civil case involving an insurance coverage issue and a criminal matter focused on sentencing guidelines. Browning seems to appeal to judges' egos to get them to study up so they can use AI to enhance their decision-making and possibly expand public trust in courts, not undermine it.

"Regardless of the technological advances that can support a judge’s decision-making, the ultimate responsibility will always remain with the flesh-and-blood judge and his application of very human qualities—legal reasoning, empathy, strong regard for fairness, and unwavering commitment to ethics," Browning wrote. "These qualities can never be replicated by an AI tool."


Cuts to public media will smash budgets of some local radio stations

Abby DuFour, WCHG news reporter and afternoon host of the show DuFour Du Jour, cues up the next song in her broadcast at the station in Hot Springs, Va.

Congress voted to claw back federal funding to public media. Some of those hit hardest include community radio stations in areas that voted for the president.

(Image credit: Kristian Thacker for NPR)
