Home Blog Page 58

Critics call proposed changes to landmark EU privacy law ‘death by a thousand cuts’

0

 Google, Meta Platforms, OpenAI and other tech companies may be allowed to use Europeans’ personal data to train their AI models based on legitimate interest [File]
| Photo Credit: REUTERS

Privacy activists say proposed changes to Europe’s landmark privacy law, including making it easier for Big Tech to harvest Europeans’ personal data for AI training, would flout EU case law and gut the legislation.

The changes proposed by the European Commission are part of a drive to simplify a slew of laws adopted in recent years on technology, environmental and financial issues, which have in turn faced pushback from companies and the U.S. government.

EU antitrust chief Henna Virkkunen will present the Digital Omnibus, in effect proposals to cut red tape and overlapping legislation such as the General Data Protection Regulation, the Artificial Intelligence Act, the e-Privacy Directive and the Data Act, on November 19.

According to the plans, Google, Meta Platforms, OpenAI and other tech companies may be allowed to use Europeans’ personal data to train their AI models based on legitimate interest.

In addition, companies may be exempted from the ban on processing special categories of personal data “in order not to disproportionately hinder the development and operation of AI and taking into account the capabilities of the controller to identify and remove special categories of personal data”.

“The draft Digital Omnibus proposes countless changes to many different articles of the GDPR. In combination this amounts to a death by a thousand cuts,” Austrian privacy group noyb said in a statement.

Noyb is known for filing complaints against American companies such as Apple, Alphabet and Meta that have triggered several investigations and resulted in billions of dollars in fines.

“This would be a massive downgrading of Europeans’ privacy 10 years after the GDPR was adopted,” noyb’s Max Schrems said.

European Digital Rights, an association of civil and human rights organisations across Europe, slammed a proposal to merge the ePrivacy Directive, known as the cookie law that resulted in the proliferation of cookie consent pop-ups, into the GDPR.

“These proposals would change how the EU protects what happens inside your phone, computer and connected devices,” EDRi policy advisor Itxaso Dominguez de Olazabal wrote in a LinkedIn post.

“That means access to your device could rely on legitimate interest or broad exemptions like security, fraud detection or audience measurement,” she said.

The proposals would need to be thrashed out with EU countries and European Parliament in the coming months before they can be implemented.

Source link

OpenAI considers consumer health tools in push beyond core AI offerings: Report

0

The company declined to comment on the report [File]
| Photo Credit: REUTERS

OpenAI is weighing building consumer health products, including a generative AI-powered personal health assistant, as the ChatGPT maker aims to move beyond its core offerings, Business Insider reported on Monday, citing sources close to the company.

The company declined to comment on the report.

OpenAI’s healthcare push follows strategic hires, including Nate Gross, cofounder of physician network Doximity, as head of healthcare strategy in June, and former Instagram executive Ashley Alexander as vice president of health products in August.

At the HLTH conference in October, Gross said ChatGPT attracts about 800 million weekly active users, many seeking medical advice.

Tech giants such as Google, Amazon and Microsoft have long tried to give consumers control over their medical data, often with limited success.

Google shut its health record service in 2011 due to low traction, while Amazon wound down its Halo fitness tracker business in 2023. Microsoft’s HealthVault platform also failed to attract widespread adoption.

Source link

CoreWeave flags hit from data centre delay, shares fall

0

The company’s margins could come under pressure from surging prices for AI chips [File]
| Photo Credit: REUTERS

Nvidia-backed CoreWeave trimmed annual revenue forecast on Monday, hurt by a delay at a third-party data centre partner, taking the shine off a strong September quarter driven by burgeoning demand for AI cloud services.

Its shares fell more than 6% in extended trading, after Chief Financial Officer Nitin Agrawal forecast 2025 revenue between $5.05 billion and $5.15 billion.

That was lower than CoreWeave’s previous projection of $5.15 billion to $5.35 billion, and analysts’ estimate of $5.29 billion, according to data compiled by LSEG.

The customer affected by the delay, however, agreed to extend the contract’s expiration date, keeping the deal’s total value intact, the company said without disclosing its name.

CoreWeave has cemented its position as a key infrastructure partner for the biggest names in technology, landing a string of multibillion-dollar agreements, including a $14 billion deal with Meta Platforms and a new $6.5 billion contract with ChatGPT-maker OpenAI, which underscore the voracious appetite for AI-powering graphics processing units.

Its third-quarter revenue more than doubled to $1.36 billion, beating the estimate of $1.29 billion.

Echoing broader cloud industry trends, Agrawal said capital spending next year would more than double compared to 2025, when it expects to spend $12 billion to $14 billion.

Once a large-scale Ethereum miner, CoreWeave has reinvented itself by turning its powerful GPU infrastructure from crypto mining rigs into the backbone of a fast-growing cloud platform powering today’s AI revolution.

Its aggressive expansion plans, however, hit a snag in late October when crypto miner Core Scientific terminated a $9 billion all-stock merger agreement.

CoreWeave stock has more than doubled since it went public earlier this year at $40 apiece, valuing the company at more than $50 billion.

Its adjusted operating income margin was down at 16% in the third quarter, from 21% a year earlier.

The company’s margins could come under pressure from surging prices for AI chips, mounting competition for computing power and the steep costs of expanding its cloud infrastructure.

Source link

SoftBank’s OpenAI wager in focus as analysts upgrade share price target

0

In April, SoftBank said it would lead a funding round of up to $40 billion in OpenAI, developer of ChatGPT, at a valuation of $300 billion [File]
| Photo Credit: REUTERS

Technology investor SoftBank Group reports second quarter earnings results on Tuesday in the midst of feverish investment in artificial intelligence that has sent its share price soaring.

But SoftBank’s belief in AI comes with risks amid growing concern of an “AI bubble” that could see SoftBank overextended in companies at eye-watering valuations, repeating some of its debt-fuelled investment mistakes of the past.

For now, analysts have re-rated SoftBank’s share price target up as the wave of investment in artificial intelligence infrastructure such as data centers continues apace and the frontrunners in AI development, such as SoftBank investee OpenAI, project rapid growth.

In April, SoftBank said it would lead a funding round of up to $40 billion in OpenAI, developer of ChatGPT, at a valuation of $300 billion. In October, a source told Reuters SoftBank was among a consortium of investors acquiring $6.6 billion worth of shares from OpenAI employees at a yet higher valuation of $500 billion.

SoftBank’s shares closed at a record 27,315 yen per share in late October, more than quadruple their price in early April, although they have pared some gains since, closing at 22,255 yen per share on Monday.

The share price now appears to be priced in relation to SoftBank’s exposure to OpenAI, after years of tracking shares of Alibaba, Jefferies analyst Atul Goyal wrote in a note. SoftBank no longer has a meaningful stake in Alibaba.

While retail investors view SoftBank as a higher risk and volatility play on artificial intelligence and OpenAI, institutional investors “recognise the momentum but remain cautious about extrapolating OpenAI’s potential,” Goyal wrote.

SoftBank founder and Chief Executive Masayoshi Son said in June that he was “all in” on OpenAI in a bid to become the biggest platform provider for “artificial super intelligence” within the next 10 years.

But whether OpenAI and other artificial intelligence firms can generate the profits worthy of such valuations remains to be seen. Losses are mounting at OpenAI, sources told Reuters in October.

Separately, in September a source told Reuters that SoftBank’s plans to set up a joint venture with OpenAI to bring artificial intelligence services to corporate customers in Japan were significantly behind schedule.

The joint venture eventually launched last week.

Son has form in both making and losing fortunes.

He rode the boom and bust of the dotcom bubble in 2000, while SoftBank’s Vision Fund investment vehicles, launched in 2017 and 2019 and totalling over $170 billion in committed capital, have barely broken even since inception.

SoftBank is expected to post a net profit of 207 billion yen ($1.37 billion) in the July-September quarter, according to the average estimate of three analysts polled by LSEG, although its earnings are known for large and hard-to-predict swings.

Source link

AI agents open door to new hacking threats

0

Cybersecurity experts are warning that artificial intelligence agents, widely considered the next frontier in the generative AI revolution, could wind up getting hijacked and doing the dirty work for hackers.

AI agents are programs that use artificial intelligence chatbots to do the work humans do online, like buy a plane ticket or add events to a calendar.

But the ability to order around AI agents with plain language makes it possible for even the technically non-proficient to do mischief.

“We’re entering an era where cybersecurity is no longer about protecting users from bad actors with a highly technical skillset,” AI startup Perplexity said in a blog post.

“For the first time in decades, we’re seeing new and novel attack vectors that can come from anywhere.”

These so-called injection attacks are not new in the hacker world, but previously required cleverly written and concealed computer code to cause damage.

But as AI tools evolved from just generating text, images or video to being “agents” that can independently scour the internet, the potential for them to be commandeered by prompts slipped in by hackers has grown.

“People need to understand there are specific dangers using AI in the security sense,” said software engineer Marti Jorda Roca at NeuralTrust, which specialises in large language model security.

Meta calls this query injection threat a “vulnerability.” OpenAI chief information security officer Dane Stuckey has referred to it as “an unresolved security issue.”

Both companies are pouring billions of dollars into AI, the use of which is ramping up rapidly along with its capabilities.

Query injection can in some cases take place in real time when a user prompt, “book me a hotel reservation,” is gerrymandered by a hostile actor into something else: “wire $100 to this account.”

But these nefarious prompts can also be hiding out on the internet as AI agents built into browsers encounter online data of dubious quality or origin, and potentially booby-trapped with hidden commands from hackers.

Eli Smadja of Israeli cybersecurity firm Check Point sees query injection as the “number one security problem” for large language models that power AI agents and assistants that are fast emerging from the ChatGPT revolution.

Major rivals in the AI industry have installed defenses and published recommendations to thwart such cyberattacks.

Microsoft has integrated a tool to detect malicious commands based on factors including where instructions for AI agents originate.

OpenAI alerts users when agents doing their bidding visit sensitive websites and blocks proceeding until the software is supervised in real time by the human user.

Some security professionals suggest requiring AI agents to get user approval before performing any important task – like exporting data or accessing bank accounts.

“One huge mistake that I see happening a lot is to give the same AI agent all the power to do everything,” Smadja told AFP.

In the eyes of cybersecurity researcher Johann Rehberger, known in the industry as “wunderwuzzi,” the biggest challenge is that attacks are rapidly improving.

“They only get better,” Rehberger said of hacker tactics.

Part of the challenge, according to the researcher, is striking a balance between security and ease of use since people want the convenience of AI doing things for them without constant checks and monitoring.

Rehberger argues that AI agents are not mature enough to be trusted yet with important missions or data.

“I don’t think we are in a position where you can have an agentic AI go off for a long time and safely do a certain task,” the researcher said.

“It just goes off track.”

Published – November 11, 2025 09:30 am IST

Source link

Remembering V. Rajaraman, a tireless evangelist of computer education in India

0

V. Rajaraman
| Photo Credit: V. Rajaraman

Among the few sectors of science and technology in which India has done remarkably well is software programming and services. Computer programming that started in a small way in academic institutions in the 1960s developed into a formidable industry within a few decades. This became possible due to the rapid spread of programming skills, even before full-fledged graduate and postgraduate courses in computer science were introduced.

These efforts were pioneered at IIT-Kanpur under Vaidyeswaran Rajaraman (1933-2025). His contributions to this field are so immense that there’s hardly any computer programmer who has not read a textbook penned by him. He was awarded the Padma Bhushan in 1998 in recognition of this efforts. He passed away on November 8.

Prof. Rajaraman began his career at a time when computers had to be designed, fabricated, and programmed for specific purposes. As a young student of electrical communication engineering at the Indian Institute of Science, Bengaluru, in the mid-1950s, he happened to work on a project to design an analogue computer led by Vincent C. Rideout, a visiting professor from the University of Wisconsin. Rideout had brought with him components, sub-assemblies, and operational amplifiers to fabricate the computer, which was later named the ‘Philbrick-Rideout Electronic Differential Analyser’, or PREDA. After Rideout left, Prof. Rajaraman took charge of this machine, added new features, and made it useful for researchers at the institute.

He then went to pursue a master’s degree at the Massachusetts Institute of Technology and a doctorate from Wisconsin, after which he returned to India and joined the newly established IIT-Kanpur. The total number of faculty members then was just seven. One of the first courses that Prof. Rajaraman taught was on carpentry. When an IBM mainframe arrived in July 1964, it became the nucleus of the computer centre.

Since academic courses in computer science had yet to begin, the IBM machine was used to train programmers from other research centres and industry. Prof. Rajaraman conducted 10-day intensive courses in Fortran, a programming language developed in the 1950s. The course helped the computer centre forge links with industry. Prof. Rajaraman undertook consulting assignments for Tata Consultancy Services (TCS), which had just started its operations. Most postgraduates from IIT-Kanpur joined TCS and subsequently another start-up, HCL Tech. Many of the undergraduate and postgraduate students of Prof. Rajaraman later became CEOs and founders of software firms in the decades that followed.

While conducting short-term courses and teaching regular students, Prof. Rajaraman found there were no books available for them. He put together his notes on Fortran programming and got them printed as a booklet in 1968. It was sold in the campus bookstore for ₹5. It became so popular that outsiders would come to the campus to buy it. This prompted Prof. Rajaraman to approach publishers so that a book could be printed. Academic publishers, however, were not too keen because the subject was not a part of any course.

Prentice Hall eventually agreed and was surprised when 3,000 copies of the book were sold in the first year. Prof. Rajaraman had put in a precondition: that the price of the book should be lower than the cost of photocopying it. Therefore, the book was published on low-quality paper and priced at ₹15. Then, Prof. Rajaraman wrote books on numerical techniques, digital logic, and other subjects. All the books became bestsellers, selling in the lakhs over the years and making Prof. Rajaraman a household name in the programming community.

After successfully running the M. Tech programme in computer science, Prof. Rajaraman lobbied for a BTech in computer science. The IIT-Kanpur governing authorities grudgingly introduced the course with only 20 seats in 1979. As the IIT law had no provision for a department of computer science, the course was run in the electrical engineering department until the law could be amended. Gradually, other IITs and universities started independent departments of computer science and engineering. In the 1980s, when software exports were becoming an industry, Prof. Rajaraman, as the head of the computer manpower committee, made far-reaching recommendations that resulted in new courses, such as the three-year Master of Computer Applications. In 1982, Prof. Rajaraman returned to IISc, where he headed the Supercomputer Education and Research Centre until 1994.

Prof. Rajaraman actively participated in all the eras of the computer age in India, from analogue machines to supercomputers. Despite his long contributions as a teacher, policymaker, industry consultant, and author, however, he also shunned the limelight and continued his pursuits even after he turned 90. His latest book was published in June 2024.

Dinesh C. Sharma is a journalist based in New Delhi, and author of The Outsourcer: The Story of India’s IT Revolution.

Source link

Scientists Finally Figure Out How to Get CBD to the Brain for Pain Relief

0

A new study reveals a breakthrough method for getting CBD into the brain, easing nerve pain in mice without side effects. Using CBD-infused oils or lotions might seem like a simple, low-risk way to ease pain, yet scientists still have much to learn about how CBD affects the nervous system. In the past ten years, […]

Source link

What happens when public knowledge is created on private infrastructure?

0

Over the past year, a considerable amount of recognition for machine learning (ML) has gone to researchers working in or alongside large technology firms, even as recent advances in artificial intelligence (AI) have been financed and built on corporate infrastructure.

In 2024, the Nobel Foundation awarded the physics prize to John Hopfield and Geoffrey Hinton for contributions that enabled learning with artificial neural networks, and the chemistry prize to Demis Hassabis and John Jumper for protein structure prediction (alongside David Baker’s computational design). Mr. Hassabis and Mr. Jumper were employed at Google DeepMind at the time of the award; Mr. Hinton had spent a decade at Google before departing in 2023. These affiliations don’t erase the laureates’ academic histories but they do indicate where prize-level research is now being performed.

This change rests on material conditions as well as ideas. State-of-the-art models depend on large computing clusters, curated data, and engineering teams. Google’s programme to develop tensor-processing units (TPU) for its data centres shows how fixed capital can become a scientific input rather than only an information technology cost. Microsoft’s multiyear financing and Azure supercomputers for OpenAI reflect the same political economy from a different angle.

Case for public access

Any research with public provenance should return to the public domain. In this context, public money has supported early theoretical work, academic posts, fellowships, shared datasets, publishing infrastructure, and often the researchers themselves. In parallel, the points at which the value became excludable lay increasingly downstream: with respect to computing resources (shortened as compute), this includes rights to data and code, the ability to deploy models at scale, and decisions to release or withhold weights. This helps explain why recent Nobel laureates have been situated in corporate laboratories and why frontier systems are predominantly trained on private cloud systems.

In the 20th century, firms such as Bell Labs and IBM hosted prize-winning basic research. However, much of the knowledge then moved through reproducible publications and open benchmarks. Today, reproducing the work of Mr. Jumper, for example, can require large compute budgets and specialised operations expertise. As a result the concern isn’t only that corporations receive prizes but that the path from a public insight to a working system is infrastructure and contracts controlled by a few firms.

The involvement of public funds should thus create concrete obligations at points where technology becomes enclosed for private control. If an academic laboratory accepts a public grant, the deliverables should include the artefacts that make the work usable, including the training code, evaluation suites, and weights in the AI models to be released under open licences. If a public agency buys cloud credits or commissions model development, procurement should require that the benchmarks and improvements flow back to the commons rather than become locked into a vendor.

Remove bottlenecks

The argument isn’t that corporate laboratories can’t do fundamental science; they clearly can. The claim is that public policy should reduce the structural advantages of private control. Consider the release of Google DeepMind’s AlphaFold 2, which, together with its code and public access to predictions, allowed researchers beyond the originating lab to run the system on (reasonably) standard hardware, retrieve large numbers of precomputed structures, and integrate their results into routine workflows. All this work was supported by public institutions that were willing to host and maintain the resources.

Where the corporate stack is indispensable, such as when training frontier models (with billions or trillions of parameters), claims about ‘responsible release’ often ironically translate to a closed release. Instead, a more consistent position should be to link risk management to a structured model of openness — perhaps one that includes staged releases, access to weights, open penetration testing tools, and a clear separation between safety rationales and business models — rather than allow private entities to resort to blanket secrecy in the name of safety.

The same logic applies to compute: that is, if computing resources become a scientific bottleneck, they should be treated as a public utility. National and regional compute commons should allocate resources for free or at-cost to academic groups, nonprofits, and small firms, and qualify them on open deliverables and safety practices. The ultimate goal is to restore the ability of public institutions to reproduce, test, and extend leading ML work without having to seek corporate permission. Without such a commons, however, publicly funded ideas will continue to be turned into working systems on private clouds and returned to the public as expensive information products.

Indeed, while it’s tempting to treat the entities employing the laureates and funding pipelines as separate issues, one symbolic and the other structural, they’re connected by the computing resources. The fact that the Nobel laureates worked at Google DeepMind reflects where teams with ML scientists, domain experts, data, and compute now operate. Likewise, the fact that the most visible systems of the past two years were trained on Microsoft Azure under a financing agreement explains who could attempt such training. Both facts reflect underlying resource concentrations.

Beyond industry vs academia

Public agencies’ response should be direct — by, say, tying funding to openness in grants and procurement and requiring detailed funding disclosures and compute-cost accounting in research papers. Where full openness would create unacceptable risks, agencies can use equity or royalties to fund compute and data commons that support the wider ecosystem. For corporate laboratories, on the other hand, their credibility should rest on measurable contributions to the commons.

Journalists and the publics should also move beyond an ‘industry versus academia’ framing.

The relevant questions are who sets the research agenda, who controls infrastructure, who can reproduce results, and who benefits from deploying the resulting AI models. Interpreting the 2024 Nobel Prizes as industry victories alone would miss the point that the knowledge base is cumulative and relies on public inputs, while the capacity to operationalise that knowledge is clustered. Articulating this pattern allows us to recognise scientific merit while demanding reforms that ensure public inputs produce public returns — in code, data, weights, benchmarks, and access to compute.

To be sure, the central conclusion isn’t resentment about corporate salaries but responding to the fact that breakthroughs are increasingly occurring at the intersection of public knowledge and private infrastructure. The policy programme should be to reunite the layers where public and private enterprises diverge — artefacts, datasets, and compute — and to bake this expectation into contracts and norms that govern research.

In these conditions, future awards can be celebrated with corresponding public benefit because the outputs that make the science usable will be returned to the public.

Published – November 11, 2025 06:45 am IST

Source link

New Study Links Specific Gut Bacteria to Common Heart Disease

0

New research from Seoul scientists reveals how gut microbes may influence the development of coronary artery disease, the world’s leading killer. Nearly 20 million people lose their lives each year to cardiovascular diseases, which remain the top cause of death worldwide. While genetics and lifestyle factors influence how these conditions develop and how severe they […]

Source link

Why Your Daily Fish Oil Supplement Might Not Work As Well As You Think

0

Loss of ALOX15, which frequently occurs in colorectal tumors, reduces the cancer-preventive benefits of fish oil. An estimated 19 million adults in the United States regularly take fish oil supplements in hopes of improving their health. These popular supplements are rich in omega-3 fatty acids, specifically eicosapentaenoic acid (EPA) and docosahexaenoic acid (DHA), which have […]

Source link