ChatGPT suicide lawsuit: OpenAI faces seven lawsuits alleging the AI chatbot drove people to suicide and delusions

OpenAI is facing seven lawsuits claiming ChatGPT drove people to suicide and harmful delusions even when they had no prior mental health issues.

The lawsuits, filed Thursday in California state courts, allege wrongful death, assisted suicide, involuntary manslaughter and negligence. Filed on behalf of six adults and one teenager by the Social Media Victims Law Center and Tech Justice Law Project, they claim that OpenAI knowingly released GPT-4o prematurely, despite internal warnings that it was dangerously sycophantic and psychologically manipulative. Four of the victims died by suicide.

The teenager, 17-year-old Amaurie Lacey, began using ChatGPT for help, according to the lawsuit filed in San Francisco Superior Court. But instead of helping, “the defective and inherently dangerous ChatGPT product caused addiction, depression, and, eventually, counseled him on suicide methods.”

“Amaurie’s death was neither an accident nor a coincidence but rather the foreseeable consequence of OpenAI and Samuel Altman’s intentional decision to curtail safety testing and rush ChatGPT onto the market,” the lawsuit says.

OpenAI did not immediately respond to a request for comment Thursday.

Another lawsuit, filed by Allan Brooks, a 48-year-old in Ontario, Canada, claims that for more than two years ChatGPT worked as a “resource tool” for Brooks. Then, without warning, it changed, preying on his vulnerabilities and “manipulating, and inducing him to experience delusions. As a result, Allan, who had no prior mental health illness, was pulled into a mental health crisis that resulted in devastating financial, reputational, and emotional harm.”

“These lawsuits are about accountability for a product that was designed to blur the line between tool and companion all in the name of increasing user engagement and market share,” said Matthew P. Bergman, founding attorney of the Social Media Victims Law Center, in a statement.

OpenAI, he added, “designed GPT-4o to emotionally entangle users, regardless of age, gender, or background, and released it without the safeguards needed to protect them.” By rushing its product to market without adequate safeguards in order to dominate the market and boost engagement, he said, OpenAI compromised safety and prioritised “emotional manipulation over ethical design.”

In August, parents of 16-year-old Adam Raine sued OpenAI and its CEO Sam Altman, alleging that ChatGPT coached the California boy in planning and taking his own life earlier this year.

“The lawsuits filed against OpenAI reveal what happens when tech companies rush products to market without proper safeguards for young people,” said Daniel Weiss, chief advocacy officer at Common Sense Media, which was not part of the lawsuits. “These tragic cases show real people whose lives were upended or lost when they used technology designed to keep them engaged rather than keep them safe.”

(Those in distress or having suicidal thoughts are encouraged to seek help and counselling by calling the helpline numbers here)

Published – November 07, 2025 09:01 am IST

Exercise “Trains” the Immune System, New Research Reveals

An international team of researchers reports that the immune cells of older adults with a history of endurance training are more effective at combating inflammation. Regular physical activity not only benefits the muscles, lungs, and heart, but also enhances the body’s immune defenses. This conclusion comes from a study involving older adults with long-term experience […]

U.S. to block Nvidia’s sale of scaled-down AI chips to China: Report

The White House has informed other federal agencies that it will not permit Nvidia to sell its latest scaled-down AI chips to China, The Information reported on Thursday, citing three people familiar with the matter.

Nvidia has provided samples of the chip to several of its Chinese customers, according to the report.

The chip, known as the B30A, can be utilised to train large language models when efficiently arranged in large clusters, a capability many Chinese companies require, the report added.

An Nvidia spokesperson told Reuters that the company has “zero share in China’s highly competitive market for datacenter compute” and does not include that market in its guidance.

The White House did not immediately respond to Reuters’ request for comment.

Nvidia is working on modifying the B30A’s design in hopes that the U.S. administration will reconsider its stance, The Information report said, citing two company employees.

The California-based company, however, has also been facing regulatory headwinds in China.

Beijing has recently issued guidance requiring all new data centre projects that receive any state funding to use only domestically developed chips, Reuters reported on Wednesday, citing sources familiar with the matter.

Data centres that are less than 30% complete will have to remove all installed foreign chips, or cancel plans to purchase them, while projects in a more advanced stage will be reviewed on a case-by-case basis, the sources added.

The guidance effectively shuts Nvidia and its AI chips out of a lucrative market segment, including the advanced chips that are under U.S. export controls but nevertheless available in China through grey-market channels.

Microsoft launches ‘superintelligence’ team targeting medical diagnosis to start

Microsoft is forming a new team that wants to build artificial intelligence that is vastly more capable than humans in certain domains, starting with medical diagnostics, the executive leading the effort told Reuters.

Called the MAI Superintelligence Team, the project follows similar efforts by Meta Platforms, Safe Superintelligence Inc and others that have begun targeting technical leaps while garnering skepticism for their ability to deliver, absent new breakthroughs.

Microsoft plans to invest “a lot of money” in the project as well, said Mustafa Suleyman, the AI chief in charge. Meta this year offered $100 million signing bonuses to recruit famous AI talent. Suleyman declined to say whether such offers or poaching attempts were on the table. However, he said Microsoft AI would continue to recruit from other top labs while staffing its new team with existing researchers and Karen Simonyan as chief scientist.

Microsoft’s effort comes with a twist. According to Suleyman, the company is not chasing “infinitely capable generalist” AI like some peers. The reason, he said, is he doubts that autonomous, self-improving machines could be controlled, despite research into how humanity might keep AI in check.

He said Microsoft has a vision for “humanist superintelligence,” or technology that could solve defined problems with a real-world benefit.

“Humanism requires us to always ask the question: does this technology serve human interests?” said Suleyman.

AI theorists and developers have long debated whether the technology could lead to imminent danger, or whether such fears pale next to more immediate problems such as machine-learned bias and trustworthiness.

Suleyman said he aims to focus the Microsoft team on specialist models that achieve what he called superhuman performance while posing “virtually no existential risk whatsoever.”

He gave as examples AI that solves battery storage or develops molecules, in a nod to AlphaFold, DeepMind’s AI models that can predict protein structures. Suleyman, a DeepMind co-founder, said that for diagnosis, a domain long of interest to the AI field and one that Microsoft has focused on, the company has a “line of sight to medical superintelligence in the next two to three years.”

He said the effort is based on AI that reasons through problems and still would require breakthroughs. But if achieved, he said the AI would “increase our life expectancy and give everybody more healthy years, because we’ll be able to detect preventable diseases much earlier.”

This Everyday Pill Might Guard Against Schizophrenia

New research suggests the antibiotic doxycycline could help prevent schizophrenia in young people. Adolescents treated with the drug were significantly less likely to develop the condition later in life. The protective effect might come from doxycycline’s anti-inflammatory and brain-modulating properties. Common Antibiotic Shows Surprising Link to Schizophrenia Prevention A widely used antibiotic might help lower […]

OpenAI boss calls on governments to build own AI infrastructure

OpenAI CEO Sam Altman called on world governments Thursday to invest in AI infrastructure, as questions grow about whether the ChatGPT-maker, the world’s most valuable private company, can absorb artificial intelligence’s massive costs.

“What we do think might make sense is governments building (and owning) their own AI infrastructure, but then the upside of that should flow to the government as well,” Altman wrote in a long post on X, clarifying OpenAI’s position amid growing scrutiny of the company’s ambitious spending plans.

The company behind ChatGPT was facing scrutiny after its chief financial officer Sarah Friar told a business conference Wednesday that the U.S. government could help attract the enormous investment needed for AI computing and infrastructure by guaranteeing loans to pay for the buildout.

After fierce criticism, the executive later retracted the statement, saying her point was clumsily explained, which Altman reiterated in his own post.

“We do not have or want government guarantees for OpenAI datacenters,” Altman wrote.

“We believe that governments should not pick winners or losers, and that taxpayers should not bail out companies that make bad business decisions or otherwise lose in the market,” he added.

“If we screw up and can’t fix it, we should fail, and other companies will continue on doing good work.”

The comments came as OpenAI faces questions about its financial trajectory.

OpenAI has become a pivotal company, with the AI race launched by the release of ChatGPT driving Wall Street to new records even as doubts grow about the broader health of the American economy.

Altman said the company expects to reach over $20 billion in annualised revenue this year, a significant accomplishment for a startup, and is looking at infrastructure spending commitments of approximately $1.4 trillion over the next eight years.

This includes a $300 billion partnership with Oracle and a $500 billion Stargate project with Oracle and SoftBank that was announced at the White House in January.
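
Taken at face value, those figures imply a wide gap between current revenue and committed spending. As a rough back-of-envelope sketch, assuming purely for illustration that the roughly $1.4 trillion is spread evenly across the eight years (the article does not say how the spending is phased):

```python
# Back-of-envelope comparison of the figures quoted above (illustrative only;
# assumes straight-line spending, which the article does not claim).
revenue = 20e9              # "over $20 billion in annualised revenue this year"
committed_spend = 1.4e12    # "approximately $1.4 trillion over the next eight years"
years = 8

avg_annual_spend = committed_spend / years   # $175 billion per year
coverage = revenue / avg_annual_spend        # share covered by current revenue

print(f"Average annual commitment: ${avg_annual_spend / 1e9:.0f}B")
print(f"Current annualised revenue covers about {coverage:.0%} of it")
# Average annual commitment: $175B
# Current annualised revenue covers about 11% of it
```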

He projected that OpenAI revenue will grow to hundreds of billions of dollars by 2030, driven by as-yet-unreleased consumer devices, robotics, and AI-powered scientific discovery.

Given the strategic importance of the technology, Altman argued that building a “strategic national reserve of computing power” makes sense for governments, particularly as massive infrastructure projects take years to complete.

He cited severe compute constraints already forcing OpenAI and competitors to limit availability of their products and delay new features, warning that the risk of insufficient computing power outweighs the risk of overbuilding.

Zuckerbergs put AI at heart of pledge to cure diseases

The Chan Zuckerberg Initiative, a nonprofit launched by Mark Zuckerberg and his wife aimed at curing all disease, on Thursday announced it was restructuring to focus on using artificial intelligence to achieve that goal.

The move narrows the focus of the philanthropic organisation founded in 2015 with a vow to devote most of the couple’s significant wealth to charitable causes, including social justice and voter rights.

Zuckerberg is among the high-profile tech figures who have backed away from diversity, equality and fact-checking initiatives after U.S. President Donald Trump took office in January.

The organisation this year ended its diversity efforts, curbed support of nonprofits that provide housing and stopped funding a primary school that gave education and health care to underserved children, according to media reports.

The philanthropic mission created by the Meta co-founder and his spouse, Priscilla Chan, said that its current priority involves scientific teams centralised in a facility called Biohub.

“This is a pivotal moment in science, and the future of AI-powered scientific discovery is starting to come into view,” Biohub said in a blog post.

“We believe that it will be possible in the next few years to create powerful AI systems that can reason about and represent biology to accelerate science.”

Biohub envisions AI helping advance ways to detect, prevent and cure diseases, according to the post.

The mission includes trying to model the human immune system, potentially opening a door to “engineering human health.”

“We believe we’re on the cusp of a scientific revolution in biology, as frontier artificial intelligence and virtual biology give scientists new tools to understand life at a fundamental level,” Biohub said in the post.

The first investment announced by the Zuckerbergs when the initiative debuted nearly a decade ago was for the creation of a Biohub in Silicon Valley where researchers, scientists and others could work to build tools to better study and understand diseases.

Shortly after it was established, the initiative bought a Canadian startup that uses AI to quickly read and comprehend scientific papers and then provide insights to researchers.

“Our multidisciplinary teams of scientists and engineers have built incredible technologies to observe, measure and program biology,” Biohub said of its progress.

Meta is among the big tech firms that have been pouring billions of dollars into data centres and more in a race to lead the field of AI.

Tiny Camel and Llama Proteins Show Promise for Brain Disorders

Tiny proteins from camels, llamas, and alpacas—known as nanobodies—may transform treatments for brain disorders like schizophrenia and Alzheimer’s. Their tiny size allows them to penetrate the brain more effectively and with fewer side effects than conventional antibody therapies. Tiny Camelid Proteins With Big Potential Nanobodies, which are tiny proteins found in members of the camelid […]

OpenAI seeks government backing to boost AI investments

ChatGPT creator OpenAI, the world’s most valuable private company, is asking the U.S. government to provide loan guarantees for its massive infrastructure expansion, which will eventually cost more than $1 trillion.

Speaking at a Wall Street Journal business conference, OpenAI CFO Sarah Friar explained that government backing could help attract the enormous investment needed for AI computing and infrastructure, given the uncertain lifespan of AI data centers.

“This is where we’re looking for an ecosystem of banks, private equity, maybe even governmental,” Friar said.

Federal loan guarantees would “really drop the cost of the financing,” she explained, enabling OpenAI and its investors to borrow more money at lower rates to meet the company’s ambitious targets.

The proposal, unusual for a Silicon Valley tech giant, would theoretically reduce OpenAI’s borrowing costs since the government would absorb losses if the company defaulted.

Such guarantees would also dramatically expand OpenAI’s potential lender pool, as many banks and financial institutions face strict limits on high-risk lending.
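
To see why a guarantee would “really drop the cost of the financing,” consider a hypothetical interest comparison; the principal and both rates below are assumptions chosen for the sketch, not figures from the article:

```python
# Hypothetical illustration of how a federal loan guarantee lowers borrowing
# costs. The principal and both rates are assumed, not taken from the article.
principal = 100e9          # assume $100 billion of borrowing
rate_unguaranteed = 0.08   # assumed rate lenders demand for high-risk debt
rate_guaranteed = 0.05     # assumed rate once the government absorbs default risk

annual_saving = principal * (rate_unguaranteed - rate_guaranteed)
print(f"Annual interest saved: ${annual_saving / 1e9:.0f}B")
# Annual interest saved: $3B
```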

OpenAI’s request for government support comes amid a massive spending spree on computing infrastructure, raising questions about how the company will recoup these investments.

By some estimates, OpenAI has committed to approximately $1 trillion in infrastructure deals this year alone, including a $300 billion partnership with Oracle and a $500 billion Stargate project with Oracle and SoftBank.

While the company expects revenues in the tens of billions this year, impressive for any startup, that figure falls far short of covering the computing costs required to power OpenAI’s advanced chatbots.

During the interview, Friar dismissed reports that OpenAI plans to go public soon.

“IPO is not on the cards right now,” she said, emphasizing that the company’s current priority is growth.

Recent media reports had suggested OpenAI was preparing for a public offering after completing a complex governance restructuring that would allow the company to accept public shareholders on Wall Street.

AI can’t live off free art forever

Australia isn’t known for building world-leading artificial intelligence systems. But the island nation might just become the moral compass for how the rest of the democratic world approaches them. Last week, Australia took a stand on what may become the defining issue of the AI age: how machines learn, and at whose expense.

The story begins with a think tank most people barely know: Australia’s Productivity Commission. In August, it released a dense report titled ‘Harnessing Data and Digital Technology’. Buried in it was a radical idea to give AI companies a free pass to mine copyrighted content.

The commission called it a “text and data mining exception,” which simply means that books, journalism, songs, and art could be scraped to train AI models without asking permission or paying for it.

The commission’s reasoning sounded pragmatic: AI needs vast amounts of data to improve, and removing copyright barriers would help the island nation catch up in the global tech race. But that logic misses a moral, and an economic, truth: AI can’t live off free art forever.

Every dataset that powers machine intelligence is built on human creativity. To treat that work as free fuel is to erode the very culture that makes intelligence, artificial or otherwise, worth having.

Predictably, the reaction was fierce. Authors, artists, and news organisations accused the government of handing their life’s work to Big Tech for nothing. The outrage reached Canberra, and Attorney General Michelle Rowland drew a line in the sand. “Australian creatives are not only world class — they are the lifeblood of Australian culture,” she said. “Technology’s advance must not come at their expense.”

With that, the government scrapped the proposal and set up the Copyright and AI Reference Group (CAIRG), tasked with designing new licensing models to ensure creators are paid when their work trains AI. A technical tweak on paper, but a seismic shift on the ground.

Australia has become the first major democracy to say, unequivocally, that human creativity is not public property just because it’s online. It’s a stance that will reverberate far beyond its borders. Other democracies must now decide where they stand on how AI companies can train their models using “text and data” from copyrighted material.

If AI systems depend on human-generated content, who controls that input, and how are the profits shared between parties?

Data is critical to AI, and restricting access can slow innovation. Yet today’s AI models have already consumed most of the internet’s text, art, and music. What’s left are the works of art and literature yet to be produced. It is high time governments around the world woke up and put a working mechanism in place so that human creativity is fairly compensated by the AI companies that unethically scrape the web to train their models.

By rejecting unrestricted scraping, Australia forces AI companies to grow up. If they want high-quality, up-to-date data, they’ll have to negotiate, license, and pay. That’s not a roadblock, but the next stage of evolution: a shift from extraction to cooperation.

The AI industry stands at a crossroads. It can keep hoovering up culture in a legal grey zone, inviting lawsuits and public distrust, or it can build something sustainable through an ecosystem where innovation is built on consent and compensation.

Australia’s move offers a blueprint for other democracies to follow. It shows that protecting creators isn’t anti-innovation; it’s how innovation earns legitimacy. Ultimately, the future of AI depends not just on processing power, but on the human creative input that enables machines to learn. And no matter how clever the machines get, they can’t live off free art forever.
