
Mayo Clinic Discovers Genetic Twist That Rewrites Rules on Common Liver Disease


Researchers at Mayo Clinic’s Center for Individualized Medicine have identified a rare MET gene mutation that can single-handedly cause metabolic dysfunction-associated steatotic liver disease (formerly known as nonalcoholic fatty liver disease). Previously, scientists believed this condition arose from a […]


Scientists Discover a Diet That Burns Fat Like Cold Exposure, Leading to Significant Weight Loss


By tweaking just two amino acids in the diet, researchers found a way to mimic the fat-burning effects of cold exposure. Shivering in the cold is hardly enjoyable, yet for some people, it carries an appealing side effect—the body uses more energy to stay warm than it does in comfortable temperatures. Multiple studies have shown […]


Blocking One Fat Molecule Could Save Your Kidneys


Ceramides were identified as the molecular culprits behind acute kidney injury, damaging mitochondria and leading to organ failure. Blocking ceramide metabolism completely protected kidneys in mice, offering hope for treating AKI and related diseases. Acute kidney injury (AKI) is a sudden, short-term loss of kidney function that […]


Why AI models struggle to discover new drugs


In November 2020, as the world battled the COVID-19 pandemic, a different kind of breakthrough captured global attention. Google DeepMind announced that its AlphaFold model had solved the protein-folding problem, one of biology’s most stubborn puzzles. The announcement was hailed as the scientific equivalent of a moon landing. Newsrooms called it a revolution that could bring new medicines to market faster than ever before.

But half a decade later, the flood of new cures has not materialised. Despite billions of dollars being invested in artificial intelligence (AI), drug discovery remains a slow and expensive process. This paradox lies at the heart of what analysts Jack Scannell, Alex Blanckley, Helen Boldon, and Brian Warrington called Eroom’s Law in a 2012 paper.

Quantity-quality mismatch

When Gordon Moore predicted in 1965 that computing power would double every two years while costs halved, he captured the astonishing pace of progress in electronics — a rule that came to be called Moore’s Law. But in medicine, the opposite has happened. Eroom’s Law (‘Moore’ spelt backwards) observes that the number of new drugs discovered per billion dollars spent has been falling steadily for decades.

Today, it costs several times more to bring a drug to market than it did in the 1970s, despite the availability of vastly superior computers, labs, and algorithms. In short, the chips have raced ahead but the pills have slowed down.
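Eroom’s Law is, in effect, exponential decay: the 2012 paper found that the inflation-adjusted cost of bringing a new drug to market doubled roughly every nine years after 1950, which is to say drugs discovered per billion dollars halved on that timescale. A minimal sketch of the trend (the nine-year halving time is from Scannell et al.; the 1950 baseline figure is illustrative, not a measured value):

```python
# Eroom's Law: new drugs approved per billion (inflation-adjusted) R&D dollars
# has roughly halved every nine years since 1950 (Scannell et al., 2012).
# The 1950 baseline below is illustrative, not a measured figure.

HALVING_TIME_YEARS = 9.0

def drugs_per_billion(year, baseline=30.0, start=1950):
    """Approved drugs per $1B of R&D, under a constant nine-year halving."""
    return baseline * 0.5 ** ((year - start) / HALVING_TIME_YEARS)

# Six decades of halving compound to roughly a hundredfold productivity drop:
ratio = drugs_per_billion(1950) / drugs_per_billion(2010)
print(round(ratio, 1))
```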

In drug discovery, every new treatment begins with a hypothesis, an educated idea or guess about how a molecule might influence disease. For decades, the real constraint has never been the quantity of hypotheses but the quality. Even before the advent of AI, researchers generated millions of plausible ideas, most of which led nowhere. With today’s AI systems, that number has grown to billions, yet the quality of hypotheses has not improved. Algorithms can exponentially increase the quantity of hypotheses but cannot enhance their quality by infusing them with intuition or imagination. The leap from quantity to quality remains a distinctly human privilege.
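The quantity-quality mismatch can be made concrete with a toy calculation (every number below is hypothetical): if the fraction of genuinely good hypotheses stays fixed and wet-lab validation capacity is the bottleneck, a thousandfold jump in generated hypotheses adds no validated hits; only a better hit rate does.

```python
# Toy model of the quantity-quality mismatch (all numbers are hypothetical).
# If the hit rate (quality) is fixed and lab capacity caps how many candidates
# can actually be tested, generating more hypotheses does not add hits.

def expected_validated_hits(n_hypotheses, hit_rate, lab_capacity):
    """Expected true hits among the candidates a lab can afford to test,
    assuming candidates are tested in no particularly informed order."""
    tested = min(n_hypotheses, lab_capacity)
    return tested * hit_rate

HIT_RATE = 1e-6        # 1 in a million hypotheses is genuinely good
LAB_CAPACITY = 10_000  # wet-lab assays per year

pre_ai = expected_validated_hits(10**6, HIT_RATE, LAB_CAPACITY)
with_ai = expected_validated_hits(10**9, HIT_RATE, LAB_CAPACITY)   # unchanged
better_quality = expected_validated_hits(10**9, HIT_RATE * 100, LAB_CAPACITY)
```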

Creativity and chaos

AI systems built on deep-learning techniques, such as AlphaFold, thrive on problems where clear, well-defined relationships are hidden within data. The protein-folding problem suited this perfectly. By 2015, scientists had already mapped over 1.5 lakh protein structures through five decades of human effort using X-ray crystallography, fluorescence spectroscopy, and protein nuclear magnetic resonance spectroscopy.

There was a known question, a vast dataset, and an idea of what a correct answer — all conceptualised by humans — should look like.

AlphaFold’s success was thus akin to a brilliant student topping a national entrance exam, such as the NEET or UPSC. The questions were difficult but predictable; the syllabus was vast but well known; and years of human groundwork had built the coaching material. With enough computational practice, the student could achieve top ranks.

Drug discovery, however, is not an examination; it is an act of exploration. It resembles a cricket talent scout trying to spot a future Virat Kohli on a dusty village ground for his IPL team, or a political analyst attempting to predict who might become India’s next prime minister. There is no fixed pattern, no set syllabus, and no reliable coaching manual. Instead, randomness dominates the wilderness in which drug discovery operates.

Accidents v. AlphaFold

Penicillin was discovered because Alexander Fleming forgot to cover a petri dish. Insulin was discovered through a series of messy experiments on dogs, conducted by Frederick Banting and Charles Best, who were simply trying to isolate pancreatic extracts. Paracetamol originated from a 19th-century misidentification in a laboratory notebook, and metformin was studied as a treatment for influenza before its role in diabetes was understood.

Today’s world is also far more ethical and careful, and rightly so. Every molecule is required to pass through stringent preclinical tests and multi-phase clinical trials before reaching patients. This caution, while essential, has also slowed the journey of discovery. Scientists of earlier eras could test wild ideas with relative freedom; today’s researchers navigate mountains of paperwork and risk assessments. So even when AI proposes a promising molecule, the path to a prescription bottle remains a long and arduous marathon.

AlphaFold could succeed in cracking a computational challenge because it was solving a bounded problem: one where rules existed and human scientists had already mapped the territory. AI excels when guided by questions that humans already know how to ask and verify, which is what makes its answers accurate and reliable. More broadly, AI can reproduce knowledge at a faster pace but cannot imagine or create it. So while it will continue to reshape various aspects of medicine, including screening, clinical trial design, and drug repurposing, expecting it to create or develop new cures single-handedly would be folly.

As history shows, every great leap in medicine, from insulin to paracetamol, began with a human mind willing to wander beyond the data.

(Note: AI’s capabilities described here are as of November 2025.)

Dr. C. Aravinda is an academic and public health physician. The views expressed are personal.

Published – November 12, 2025 04:36 pm IST


Google introduces secure platform Private AI Compute for processing AI tasks



Google has announced a new cloud platform that allows users to securely access its most advanced Gemini models from their devices. Called Private AI Compute, the platform works similarly to Apple’s Private Cloud Compute, acting like “one seamless Google stack” powered by Tensor Processing Units (TPUs).

The platform handles the same kind of sensitive data users would expect to stay on-device. However, since advanced AI tools need more compute than a device can supply, that processing cannot happen locally.

The TPUs rely on an AMD-based Trusted Execution Environment (TEE), which encrypts the memory and isolates it even from Google.
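The privacy claim rests on a standard confidential-computing pattern: data is encrypted on the user’s device and decryptable only inside the attested enclave, so the cloud operator never sees plaintext. A much-simplified conceptual sketch of that flow (all class and method names here are hypothetical, and the XOR “cipher” is a teaching stand-in, not real cryptography; actual TEEs use hardware memory encryption and remote attestation, not application code):

```python
# Conceptual sketch of a Trusted Execution Environment data flow.
# Names are hypothetical; the XOR "cipher" is a toy stand-in for the
# hardware memory encryption a real TEE provides -- not secure crypto.
import hashlib, itertools, os

def toy_stream_cipher(key: bytes, data: bytes) -> bytes:
    """XOR data with a hash-derived keystream. Illustration only."""
    stream = itertools.chain.from_iterable(
        hashlib.sha256(key + i.to_bytes(8, "big")).digest()
        for i in itertools.count()
    )
    return bytes(b ^ k for b, k in zip(data, stream))

class Enclave:
    """Isolated compute: holds the only copy of the session key."""
    def __init__(self):
        self._key = os.urandom(32)          # never leaves the enclave

    def public_wrap(self, plaintext: bytes) -> bytes:
        # Stands in for attestation + key exchange on the user's device.
        return toy_stream_cipher(self._key, plaintext)

    def run_inference(self, ciphertext: bytes) -> str:
        prompt = toy_stream_cipher(self._key, ciphertext)  # decrypt inside
        return f"summary of {len(prompt)} bytes"           # model runs here

enclave = Enclave()
ciphertext = enclave.public_wrap(b"sensitive voice transcript")
# The host operator only ever observes `ciphertext`; plaintext exists
# solely inside the enclave boundary.
print(enclave.run_inference(ciphertext))
```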

In a blog post, Google also said that Private AI Compute extends capabilities such as Magic Cue, a tool that offers timely suggestions on the latest Pixel 10 phones.

Additionally, the Recorder app on the Pixel will also be able to summarise transcriptions across multiple languages.


AMD expects profit to triple by 2030, data center chip market to grow to $1 trillion



Advanced Micro Devices said on Tuesday that it expected annual data center chip revenue of $100 billion within the next five years, and its earnings to more than triple.

The Santa Clara, California-based chip designer’s shares were up 4% in choppy post-market trading, after closing down 2.7% at $237.52. The stock has risen 16% since October 6, when the company signed a lucrative multiyear deal with OpenAI that would bring in tens of billions of dollars in annual revenue.

While the deal is unlikely to dent Nvidia’s dominance in AI chipmaking, it is seen as a big vote of confidence in AMD’s chips, and the company’s bullish financial projections on Tuesday should help assuage investor concern over AMD’s ability to claw away business.

AMD expects the market for the company’s data center chips to grow to $1 trillion by 2030, CEO Lisa Su said at its analyst day – its first such event in three years – in New York. Artificial intelligence will drive much of the growth to the trillion-dollar figure. That market includes AMD’s plain processor and networking chips, along with its specialized AI chips, Su said.

“It’s an exciting market,” Su said. “There’s no question, data center is the largest growth opportunity out there, and one that AMD is very, very well positioned for.”

In the next three to five years, AMD expects 35% growth across its entire business each year and 60% in its data center business, finance chief Jean Hu said at the analyst day.

The company also expects earnings to rise to $20 a share in the same three-to-five-year period. LSEG estimates peg AMD’s 2025 profit at $2.68 per share.
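Taken together, those two figures imply a steep compound growth rate; a quick arithmetic check (the five-year horizon is the top of AMD’s stated three-to-five-year range):

```python
# Implied earnings growth from AMD's targets: $2.68 EPS (2025, LSEG estimate)
# rising to $20 EPS within three to five years.
eps_2025 = 2.68
eps_target = 20.0

multiple = eps_target / eps_2025                   # ~7.5x the 2025 estimate
cagr_5yr = (eps_target / eps_2025) ** (1 / 5) - 1  # ~49% a year over 5 years
cagr_3yr = (eps_target / eps_2025) ** (1 / 3) - 1  # ~95% a year over 3 years

print(f"{multiple:.1f}x, {cagr_5yr:.0%}/yr (5y), {cagr_3yr:.0%}/yr (3y)")
```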

Jensen Huang, CEO of AMD archrival Nvidia, has said the broader AI infrastructure market will grow to $3 trillion to $4 trillion by 2030.

MORE SMALL M&A EXPECTED

AMD’s next-generation MI400 series of AI chips is set to launch in 2026 and will include several variants designed for scientific applications and for generative AI. Along with the MI400 chips, AMD is also planning to launch a complete server rack, similar to a product Nvidia sells called the GB200 NVL72.

In her opening remarks, Su highlighted the company’s recent AI-related acquisitions, including the server builder ZT Systems and a slew of smaller software companies. AMD has built “an M&A machine,” Su said.

In recent months, AMD has acquired a batch of startups that focus on building software needed to run AI applications. On Monday, AMD said it bought MK1. The plan is to ensure AMD has access to the appropriate software and the people it needs to build its AI capabilities, Chief Strategy Officer Mat Hein told Reuters in an interview.

“We’ll continue to do AI software tuck-ins,” said Hein.

The chip designer forecast fourth-quarter revenue that topped Wall Street estimates. Demand for AI chips gave AMD executives a reason for optimism about the remainder of the year. The company’s data center CPU business has also benefitted from the surge in AI-related spending.


Google says it will invest around $6.4 billion in cloud infrastructure in Germany



Alphabet’s Google said on Tuesday that it will invest 5.5 billion euros ($6.41 billion) in Germany in the coming years in a push to expand its infrastructure and data centre capacity in Europe’s largest economy.

The plans include a new data centre in Dietzenbach, close to Frankfurt, Google said, according to information it provided ahead of a press conference in Berlin.


OpenAI used song lyrics in violation of copyright laws, German court says



OpenAI’s chatbot ChatGPT violated German copyright laws by reproducing lyrics from songs by best-selling musician Herbert Groenemeyer and others, a court ruled on Tuesday, in a closely watched case against the U.S. firm over its use of lyrics to train its language models.

The regional court in Munich found that the company trained its AI on protected content from nine German songs, including Groenemeyer’s hits “Maenner” and “Bochum”.

The case was brought by German music rights society GEMA, whose members include composers, lyricists and publishers, in another sign of artists around the world fighting back against data scraping by AI.

Presiding judge Elke Schwager ordered OpenAI to pay damages for the use of copyrighted material, without disclosing a figure.

GEMA legal advisor Kai Welp said GEMA hoped discussions could now take place with OpenAI on how copyright holders can be remunerated.

COPYRIGHT INFRINGED

OpenAI had argued that its language models did not store or copy specific training data but, rather, reflected what they had learned based on the entire training data set.

Since the output would only be generated as a result of user inputs known as prompts, it was not the defendants, but the respective user who would be liable for it, OpenAI had argued.

However, the court found that both the memorisation in the language models and the reproduction of the song lyrics in the chatbot’s outputs constitute infringements of copyright exploitation rights, according to a statement on the ruling.

POTENTIAL PRECEDENT

The outcome of the case could set a precedent in Europe for how AI companies use copyrighted materials.

“The internet is not a self-service store, and human creative achievements are not free templates,” said GEMA CEO Tobias Holzmueller. “Today, we have set a precedent that protects and clarifies the rights of authors: even operators of AI tools such as ChatGPT must comply with copyright law.”

The decision can be appealed.

“We disagree with the ruling and are considering next steps,” a spokesperson for OpenAI said. “The decision is for a limited set of lyrics and does not impact the millions of people, businesses and developers in Germany that use our technology every day.”

Earlier this year, leading Bollywood music labels asked a New Delhi court to join a copyright lawsuit against OpenAI over alleged unauthorised use of sound recordings to train AI models, underscoring global concerns about AI and music rights.


Google Cloud to expand support for AI infrastructure in India, partners with IIT Madras’ AI4Bharat



Google Cloud and Google DeepMind have announced a partnership with IIT Madras to support the launch of Indic Arena. The platform, meant to benchmark and rank AI models on tasks around Indian languages, is run by AI4Bharat, a research lab within IIT Madras.

Google Cloud will be providing cloud credits to help power the platform. 

“At AI4Bharat, our mission is to build AI for India’s specific needs. A critical part of this is having a neutral, standardized benchmark to understand how models are performing across our many languages,” said Mitesh Khapra, associate professor, IIT Madras.

The move is part of Google Cloud’s larger plan to expand local hardware capacity for customers in India. Powered by Google’s AI Hypercomputer architecture with the latest Trillium TPUs, the expansion is meant to help more Indian businesses and public sector organisations train and serve Gemini’s most advanced AI models locally.

Google Cloud also said it plans to roll out its most powerful Gemini AI models in India with full data residency support. This will enable batch processing for Gemini 2.5 Flash for high-volume, non-real-time AI tasks at lower cost, as well as Document AI, currently in preview, which automates document processing at scale.


New Vitamin D Strategy Cuts Second Heart Attack Risk in Half


A new study reveals that a personalized, monitored approach to vitamin D3 supplementation after a heart attack can dramatically cut the risk of a second heart attack. Heart specialists at Intermountain Health in Salt Lake City report that this personalized method can greatly lower the chances of […]

Source link