Meta will allow European users of Facebook and Instagram to share less data and see fewer personalised ads after it was fined for breaking EU digital rules, Brussels said Monday.
The European Commission said the U.S. tech giant undertook to make the option available from January to settle a legal dispute over its “pay or consent” system that saw it hit with a 200-million-euro ($233 million) fine.
“Meta will give users the effective choice between: consenting to share all their data and seeing fully personalised advertising, and opting to share less personal data for an experience with more limited personalised advertising,” the commission said.
It was the “first time” that such a choice was offered on Meta’s social networks, the body that acts as the 27-nation bloc’s digital and antitrust regulator said.
The move followed talks with the company, which was found in breach of digital competition rules over its “pay for privacy” system earlier this year.
Under the system, which has been vehemently criticised by rights groups, users have to pay to avoid data collection, or agree to share their data with Facebook and Instagram to keep using the platforms for free.
A commission probe concluded in April that Meta did not provide users with a less personalised but equivalent version of the platforms.
Meta was fined and warned it could face daily penalties under the landmark Digital Markets Act (DMA) unless it complied with the law.
The company had already begun offering European users the option of seeing less personalised ads in November last year. But this did not spare it the fine.
A commission spokesman declined to detail how the new offering improved on that, but said that while the firm’s undertaking did not automatically close the case against it, it represented a “very good step forward” and “positive news” for EU consumers.
Brussels would now monitor its “effective implementation” and “seek feedback and evidence from Meta and other relevant stakeholders on the impact and uptake of this new ad model”.
Acknowledging the commission’s statement, Meta said: “Personalized ads are vital for Europe’s economy: last year, Meta’s ads were linked to €213 billion in economic activity and supported 1.44 million jobs across the EU.”
Japan’s SoftBank Group and Nvidia are in talks to invest in Skild AI, in a more than $1 billion funding round that could value the maker of foundation models for robots at around $14 billion, according to sources and a term sheet seen by Reuters.
If successful, the funding round would value Skild at nearly triple the $4.7 billion it commanded in a $500 million Series B round earlier this year, which saw participation from Nvidia, LG’s venture capital arm and Samsung, among others, according to PitchBook data.
Founded in 2023 by former Meta AI researchers and backed by Amazon.com and Lightspeed Venture Partners, Skild is developing universal software designed to serve as the brain for robots. Its aim is to overcome a key hurdle that has slowed the broader deployment of general-purpose machines in factories and homes.
The company focuses on AI models for robots of all form factors rather than building any hardware of its own, and has said its technology uses vast data to teach robots perception and decision-making skills similar to those of humans.
The talks underscore surging investor interest in humanoid robotics firms as advances in artificial intelligence make such robots increasingly capable of performing complex tasks.
Still, experts caution that truly general-purpose robotic applications remain technically challenging and could still be years away from widespread adoption.
Skild AI and SoftBank did not immediately respond to a request for comment, while Nvidia declined to comment. The talks remain fluid and some details could change, a source said, adding that the deal is expected to close before Christmas.
SoftBank was impressed by Skild’s technology in pilot projects, a person familiar with the matter said, requesting anonymity as the matter was private.
Robotics is a key part of CEO Masayoshi Son’s plan for SoftBank. The company scooped up the robotics business of Swiss engineering group ABB in a $5.4 billion deal in October.
Commerce Secretary Howard Lutnick is pushing to accelerate robotics development through meetings with industry CEOs, as the Trump administration weighs an executive order on robotics next year, Politico reported last week.
Skild AI unveiled its first general-purpose AI model in July, saying the system can adapt to a wide range of environments and tasks from warehouse logistics to household chores.
The company raised $300 million at a $1.5 billion valuation as part of its Series A round last year, which saw investments from Jeff Bezos, SoftBank Group and Khosla Ventures among others.
Poco C85 5G has a 6.9 inch HD+ display with a 120 Hz adaptive refresh rate. It is IP64 rated for dust and water resistance.
Poco C85 5G packs a 6,000 mAh battery, with a 33W charger included in the box. It also supports 10W wired reverse charging.
Poco C85 5G runs on MediaTek Dimensity 6300 with up to 8 GB RAM and 128 GB storage. It also offers up to 16 GB virtual RAM. The budget segment phone operates on HyperOS 2.2 based on Android 15 out of the box. It will get 2 Android upgrades and 4 years of security updates.
Poco C85 5G sports a 50 MP rear camera and an 8 MP front camera.
Poco C85 will sell exclusively on Flipkart starting December 16, at an introductory price of ₹10,999 for the 4 GB/128 GB variant, ₹11,999 for the 6 GB/128 GB variant, and ₹13,499 for the 8 GB/128 GB variant.
It comes in Mystic Purple, Spring Green, and Power Black.
Bekki Holzkamm has been trying to hire a lab technician at a hospital in rural North Dakota since late summer.
Not one U.S. citizen has applied.
West River Health Services in Hettinger, a town of about 1,000 residents in the southwestern part of the state, has four options, and none is good.
The hospital could fork over $100,000 for the Trump administration’s new H-1B visa fee and hire one of the more than 30 applicants from the Philippines or Nigeria. The fee is the equivalent of what some rural hospitals would pay two lab techs in a year, said Holzkamm, who is West River’s lab manager.
West River could ask the Department of Homeland Security to waive the fee. But it’s unclear how long the waiver process would take and if the government would grant one. The hospital could continue trying to recruit someone inside the U.S. for the job. Or, Holzkamm said, it could leave the position unfilled, adding to the workload of the current “skeleton crew.”
The U.S. health care system depends on foreign-born professionals to fill its ranks of doctors, nurses, technicians, and other health providers, particularly in chronically understaffed facilities in rural America.
But a new presidential proclamation aimed at the tech industry’s use of H-1B visas is making it harder for West River and other rural providers to hire those staffers.
“The health care industry wasn’t even considered. They’re going to be collateral damage, and to such an extreme degree that it was clearly not thought about at all,” said Eram Alam, a Harvard associate professor whose new book examines the history of foreign doctors in the U.S.
Elissa Taub, a Memphis, Tennessee-based attorney who assists hospitals with the H-1B application process, has been hearing concerns from her clients.
“It’s not like there’s a surplus of American physicians or nurses waiting in the wings to fill in those positions,” she said.
Until recently, West River and other employers paid up to $5,000 each time they applied to sponsor an H-1B worker. The visas are reserved for highly skilled foreign workers.
The new $100,000 fee — part of a September proclamation by President Donald Trump — applies to workers living outside the U.S. but not those who were already in the U.S. on a visa.
West River lab tech Kathrine Abelita is one of nine employees — six technicians and three nurses — at the hospital who are current or former H-1B visa holders. Abelita is from the Philippines and has worked at West River since 2018. She’s now a permanent U.S. resident.
“It’s going to be a big problem for rural health care,” she said of the new fee. She said most younger American workers want to live in urban areas.
Sixteen percent of registered nurses, 14% of physician assistants, and 14% of nurse practitioners and midwives who work in U.S. hospitals are immigrants, according to a 2023 government survey. Nearly a quarter of physicians in the U.S. went to medical school outside the U.S. or Canada, according to 2024 licensing data.
“A blanket exception for healthcare providers is the simplest path forward,” the National Rural Health Association and National Association of Rural Health Clinics wrote in a joint letter.
The proclamation allows fee exemptions for individuals, workers at specific companies, and those in entire industries when “in the national interest.” New guidance says the fee will be waived only in an “extraordinarily rare circumstance.” That includes showing that there is “no American worker” available for the position and that requiring a company to spend $100,000 would “significantly undermine” U.S. interests.
Taub called those standards “exceptionally high.”
Representatives of the NRHA and the American Medical Association, which organized a letter from the medical societies, said they’ve received no response after sending requests to Homeland Security Secretary Kristi Noem in late September and early October. The AHA declined to say whether it had heard back.
Homeland Security officials directed KFF Health News’ inquiries to the White House, which did not answer questions about individual waiver timelines or the possibility of a categorical exemption for the health care industry.
Instead, White House spokesperson Taylor Rogers sent a statement defending the new fee, saying it will “put American workers first.” Her comments echo Trump’s proclamation, which focuses on accusations that the tech industry is abusing the H-1B program by replacing American workers with lower-paid foreign ones. But the order applies to all trades.
Alam, the Harvard professor, said the U.S. reliance on international providers does raise legitimate concerns, such as how it draws professionals away from lower-income countries facing even greater health challenges and staffing shortages than the U.S.
This decades-long dependency, she said, stems from population booms, medical schools’ historical exclusion of nonwhite men, and the “much, much cheaper” cost of importing providers trained abroad than expanding health education in the U.S.
Internationally trained doctors tend to work in rural and urban areas that are poor and underserved, according to a survey and research review.
Nearly 1,000 H-1B providers were employed in rural areas this year, the two rural health organizations wrote in their letter to the Trump administration.
J-1 visas, the most common type held by foreign doctors during their residencies and other postgraduate training in the U.S., require them to return to their home country for two years before applying for an H-1B.
But a government program called the Conrad 30 Waiver Program allows up to 1,500 J-1 holders a year to remain in the U.S. and apply for an H-1B in exchange for working for three years in a provider shortage area, which includes many rural communities.
Trump’s proclamation says employers that sponsor H-1B workers already inside the U.S., such as doctors with these waivers, won’t have to pay the six-figure fee, a nuance clarified in guidance released about a month later.
But employers will have to pay the new fee when hiring doctors and others who apply while living outside the U.S.
Alyson Kornele, CEO of West River Health Services, said most of the foreign nurses and lab techs it hires are outside the U.S. when they apply.
Ivan Mitchell, CEO of Great Plains Health in North Platte, Nebraska, said most of his hospital’s H-1B physicians were inside the U.S. on other visas when they applied. But he said physical therapists, nurses, and lab techs typically apply from abroad.
Holzkamm said it took five to eight months to hire H-1B applicants at her lab before the new fee was introduced.
Bobby Mukkamala, a surgeon and the president of the American Medical Association, said Republican and Democratic lawmakers are concerned about the ramifications for rural health care.
They include Senate Majority Leader John Thune, who said he planned to reach out about possible exemptions.
“We want to make it easier, not harder, and less expensive, not more expensive, for people who need the workforce,” the Republican told KFF Health News in September.
Thune’s office did not respond to questions about whether the senator has heard from the administration regarding potential waivers for health workers.
The Trump administration is facing at least two lawsuits attempting to block the new fee. One group of plaintiffs includes a company that recruits foreign nurses and a union that represents medical graduates. Another lawsuit, by the U.S. Chamber of Commerce, mentions concerns about the physician shortage and health systems’ ability to afford the new fee.
Kornele said West River won’t be able to afford a $100,000 fee, so it’s doubling down on local recruiting and retention.
But Holzkamm said she hasn’t been successful in finding lab techs from North Dakota colleges, even those who intern at the hospital. She said West River can’t compete with the salaries offered in bigger cities.
“It’s a bad cycle right now. We’re in a lot of trouble,” she said.
Phillip Reese is a data reporting specialist and an associate professor of journalism at California State University-Sacramento.
Alphabet’s Google faces an EU antitrust investigation into its use of web publishers’ online content and YouTube videos to train its artificial intelligence models, the European Commission said on Tuesday.
The Commission, which acts as the EU competition enforcer, said it was concerned that Google may be using publishers’ online content without compensating them adequately and without giving them the option to refuse the use of their content.
It expressed the same concerns regarding Google’s use of YouTube videos uploaded by its users.
“Google may be abusing its dominant position as a search engine to impose unfair trading conditions on publishers by using their online content to provide its own AI-powered services such as ‘AI Overviews’, which are AI-generated summaries,” EU antitrust chief Teresa Ribera told a conference.
“This case is once again a strong signal of our commitment to protecting the online press and other content creators, and to ensuring fair competition in emerging AI markets,” she said.
Last week, the European Commission launched an investigation into Meta’s plans to block AI rivals from its WhatsApp messaging system, underscoring increasing regulatory scrutiny into this area.
The U.S. tech giant risks a fine of as much as 10% of its global annual revenue if found to have breached EU antitrust rules.
David Garza sometimes feels as if he doesn’t have health insurance now that he pays so much to treat his Type 2 diabetes.
His monthly premium payment of $435 for family coverage is roughly the same as the insurance at his previous job. But the policy at his current job carries an annual deductible of $4,000, which he must pay out-of-pocket for his family’s care until he reaches that amount each year.
“Now everything is full price,” said the 53-year-old, who works at a warehouse just south of Dallas-Fort Worth. “That’s been a little bit of a struggle.”
To reduce his costs, Garza switched to a lower-cost diabetes medication, and he no longer wears a continuous glucose monitor to check his blood sugar. Since he started his job nearly two years ago, he said, his blood sugar levels have inched upward from an A1c of 7% or less, the target goal, to as high as 14% at his most recent doctor visit in November.
“My A1c is through the roof because I’m not on, technically, the right medication like before,” Garza said. “I’m having to take something that I can afford.”
Plans with high deductibles — the amount that patients must pay for most medical care before insurance starts pitching in — have become increasingly common. In 2024, half of private-industry employees participating in medical care plans were offered this type of insurance, up from 38% in 2015, according to federal data. Such plans are also offered through the Affordable Care Act marketplace.
With ACA marketplace premiums for next year increasing and many of the subsidies to help people pay for them poised to expire at year’s end, more people face tough choices as they weigh monthly premium costs against deductibles. To afford insurance at all, people may opt for a plan with low premium payments but with a high deductible, gambling that they won’t have any medical crises.
But high-deductible plans pose a particular challenge for those with chronic conditions, such as the 38 million Americans who live with Type 1 or Type 2 diabetes. Adults with diabetes who are involuntarily switched to a high-deductible plan, compared with adults on other types of insurance, face an 11% higher risk of being hospitalized with a heart attack, a 15% higher risk of hospitalization for a stroke, and more than double the likelihood that they’ll go blind or develop end-stage kidney disease, according to a study published in 2024.
“All of these complications are preventable,” said Rozalina McCoy, the study’s lead author.
Care vs. Cost
The initial rationale behind such high-deductible plans was to encourage people to become wiser health care shoppers, said McCoy, an associate professor of medicine at the University of Maryland School of Medicine in Baltimore. And they can be a good fit, proponents say, for people who don’t use a lot of medical care or who have cash on hand for a health crisis.
But while people with an excruciating earache will seek care, McCoy said, those with unhealthy blood sugar levels might not feel as urgent a need to seek treatment — despite the potential long-term damage — given the acute financial pain.
“You have no symptoms until it’s too late,” she said. “At that point, the damage is irreversible.”
Overall, medical care for people with diabetes costs insurers and patients an average of $12,022 annually to treat the disease, according to an analysis of 2022 data. Type 2 diabetes, the more common form, is diagnosed when the body can no longer process or produce enough insulin to adequately regulate blood sugars. With Type 1, the body can’t produce insulin. Those with the disease may end up on the financial hook not just for insulin and other types of medication but for related equipment.
Mallory Rogers, whose 6-year-old daughter, Adeline, has Type 1, calculates that it costs roughly $1,200 a month for insulin, a pump, and a continuous glucose monitor. That figure doesn’t include the cost of emergency supplies needed in case Adeline’s technology malfunctions. Those include another type of insulin, blood-testing strips, and a nasal spray that’s nearly $600 for a two-pack of vials — supplies that must be replaced once a year or more frequently.
“If she doesn’t have insulin, it would become an emergency situation within two hours,” said Rogers, a technology consultant who lives in Sanford, Florida. Rogers has been saving for the coming year when her daughter moves to the high-deductible health plan offered by Rogers’ employer, which has a $3,300 deductible for family coverage.
Taxing Decisions
Many insurance plans carry increasingly high deductibles. But to be defined as a high-deductible health plan — and thus be eligible to offer a health savings account — a plan’s deductible for 2026 must be at least $1,700 for an individual and $3,400 for a family, according to IRS rules.
Health savings accounts enable people to squirrel away money that can be rolled over from year to year to be used for eligible medical expenses, including prior to meeting a deductible. Such accounts, available through a plan or employer, can provide tax benefits. The contributions are limited to $4,400 individually and $8,750 for a family in 2026, and employers may contribute toward that total. Rogers’ employer pays $2,000 spread out over the year, and Garza’s contributes $1,200.
Rogers recognizes that she’s fortunate to have accumulated $7,000 so far in her health savings account to prepare for her daughter’s insurance shifting to Rogers’ plan.
“Adding a financial burden to an already very stressful medical condition, it hurts my heart,” she said, reflecting on those who can’t similarly stockpile. “Nobody asks to have diabetes, Type 1 or Type 2.”
When deductibles are too high, Huntley said, routine maintenance is what patients skimp on: “You don’t take the drug that you’re supposed to take to maintain your blood glucose. You ration your insulin, if that’s your scenario. You take pills every other day.”
Garza knows he should do more to control his blood sugar, but financial realities complicate the equation. His previous health plan covered a newer class of diabetes medication, called a GLP-1 agonist, for $25 a month. He wasn’t charged for his remaining medications, which included blood pressure and cholesterol drugs, or his continuous glucose monitor.
With his new insurance, he pays $125 monthly for insulin and several other medications. He doesn’t see his endocrinologist for checkups more than twice a year.
“He wants to see me every three months,” Garza said. “But I told him it’s not possible at $150 a pop.”
Plus, he typically needs lab testing before each visit, an additional $111.
In 2026, the deductible for a “silver”-level plan on the marketplace will average $5,304 without cost-sharing reductions, according to an analysis from KFF, a health think tank that includes KFF Health News. For a “bronze”-level plan, it will be $7,476. An annual visit and some preventive screenings, such as a mammogram, would be covered free of cost to the patient.
Moreover, people comparing plan options, whether through their employer or the marketplace, should figure out their annual out-of-pocket maximum, which still applies after the deductible is met, Huntley said.
Garza’s family policy requires him to pay 20% until he reaches $10,000, for example.
Given Garza’s high blood sugar levels, his doctor prescribed a fast-acting form of insulin to take as needed with meals, which costs an additional $79 monthly. He planned to fill it in December, when he’s responsible for only 20% of the cost after he has hit his deductible but not yet reached his out-of-pocket maximum.
Garza likes his job despite its health plan, saying he’s never missed a day of work, even recently when he had a stomach bug. As of late 2025, he remained conflicted about whether to sign up for health insurance when his company’s enrollment period rolls around in mid-2026.
He worries that dropping insurance would place his family too much at risk if a major medical crisis struck. Still, he pointed out, he could then use the money he now spends on monthly premiums to directly pay for care to better manage his diabetes.
When the Hungarian writer László Krasznahorkai said he could no longer speak about hope and turned instead to angels in his Nobel Prize lecture on December 7, he was really talking about mediation. The old angels brought speech from a transcendent ‘above’ and their very existence meant the world had a direction and a scale — towards something higher than us, guided by messages we could only receive. Krasznahorkai’s new angels, however, he said, have no discernible origin and no message to deliver. They move among us as a fragile, wounded people that can be destroyed by a word or a small humiliation. In his telling, they are sacrifices “because of us”.
Does this imagery sound familiar? It may well describe human lives in the age of artificial intelligence (AI), with people lost inside the systems they built as content moderators, clickworkers, data-labellers, and gig workers: the people whose lives machines quietly profile and feed into welfare algorithms and predictive policing systems, their data scraped and sold off to train models that will then be used to govern them.
The “new angels” are not machines: they are the ones made expendable by the machines’ logic, even if AI systems are unsettlingly close to a sort of counterfeit angelic form. They appear out of an abstract “cloud”, speak fluently in every register, and occupy the position of the messenger without offering messages of their own. Krasznahorkai’s angels are silent and demand a message from us that we no longer have. AI, however, will not shut up even as it hollows out our own capacity to speak from a grounded place. It simulates advice, empathy, knowledge, and even moral reasoning, but all as frightfully dull recombinations of text within a techno-economic stack that remains opaque. AI thus embodies precisely the loss he mourns: authority without responsibility, language without speech.
NVIDIA CEO Jensen Huang introduces an “Industrial AI Cloud” project during a press conference in Berlin, Germany, November 4, 2025.
| Photo Credit: Reuters
Krasznahorkai’s lecture was really a long hymn to the dignity and exhaustion of the human species, and much of it landed very close to current debates about AI. He listed the astonishing run of human inventions — from art to philosophy, agriculture to science — before he turned to the present, where the same species has built devices to leave itself with only short-term memory. Is that not an accurate description of the attention economy AI has parachuted into and is now being used to supercharge? Developers and businesses are building AI models into feeds, search engines, advertising, and productivity tools, all to push us to have faster, more fragmented interactions.
But there is a second, more brutal layer. To train very large models you need massive, already existing stores of language. And Big Tech is treating what Krasznahorkai called the “noble and common possession of knowledge and beauty” as free raw material. The same civilisation that once struggled to create those works now builds systems that can cheaply imitate their surface forms while drawing value away from the institutions and labour that produce them. It is yet again “sacrifices because of us” — the cultural commons and its workers being consumed to fuel the appearance of infinite, effortless intelligence.
The U-Bahn scene in particular threw a peculiar light on AI governance. At an underground station in Berlin in the 1990s, Krasznahorkai recalled watching a homeless man painfully urinating on the platform’s ‘forbidden zone’ while a distant policeman rushed to punish him. The policeman on the platform was “the good sanctioned by all”, the bearer of law and order; the sick man urinating on the tracks was cast as evil. Ten metres of trench separated them. In lived time, the policeman would probably have caught him, but Krasznahorkai froze the image: in reality, good never reaches evil; the distance is unbridgeable.
We confront the same unbridgeable distance between our apparatus of ethics boards, principles, regulations, and “alignment” on one hand and the mess of institutionalised harm, with its exploitative supply chains, surveillance, disinformation, and militarisation, on the other. As long as the architecture stays the same, Krasznahorkai’s diagnosis ran, with a design that makes some bodies visible and punishable and others invisible and protected, the chase will go on forever. The good of regulations runs only within a structure that guarantees its failure.
Sam Altman, co-founder and CEO of OpenAI, sits in the audience before a panel discussion on the future of artificial intelligence at TU Berlin, February 2025.
| Photo Credit: Getty Images
Debates around AI often shrink to technofixes, e.g. better benchmarks, safer outputs, slightly stricter rules for deployment, and so on. Krasznahorkai’s lecture, however, is a refusal to treat symptoms in isolation. The tools we call ‘AI’ are emerging from a civilisation that treats attention as a resource to mine and the vulnerable as acceptable losses. If the new angels are sacrifices because of us, an AI politics worthy of his terms would have to be a politics that reduces the number of sacrifices altogether, i.e. one that investigates where and how data are taken, who labours in the shadows, who bears the environmental and social costs, and which uses are simply off-limits no matter how profitable they are.
Yet there is a strong sense that nobody can simply exit the AI train. States feel compelled to invest lest they fall behind. Companies feel compelled to deploy code lest they lose advantage. Individuals feel compelled to adopt lest they lose work. The final sum is a sort of minimal ethic: maintain your capacity for attention, for naming sacrifices, even if you cannot yet see a path outside.
The ultimate question is not whether ‘art’ or ‘human creativity’ will survive AI but whether the civilisation that deployed AI still has the imagination and moral vocabulary to send any real message at all, as much to the angels it is sacrificing as to the tools it is unleashing in its own name.
A government working paper released on Monday (December 8, 2025) suggested that AI large language models (LLMs) like ChatGPT should, by default, have access to content freely available online, and that publishers should not have an opt-out mechanism for such content. Instead, a copyright society-like non-profit should be set up to collect royalties for both members and non-members of that body.
The working paper, authored by a committee formed by the Department for Promotion of Industry and Internal Trade, is not final, and is accepting public comments for thirty days. The document is one of the main indicators of how the Indian government is thinking of balancing copyright holders’ fears that AI systems will regurgitate content they invested in without remuneration, and LLM developers who have routinely consumed massive amounts of data online to train their models.
Nasscom, which was represented on the DPIIT’s committee, dissented, arguing that forced royalties would amount to a “tax on innovation”. It said that “mining”, or scraping the web for data, must be allowed for freely available content without paywalls, and that both crawlable and access-restricted content providers should have options to “reserve” their content from being mined for LLM development.
No opt-out
The committee rejected Nasscom’s dissent, arguing that small content creators may not have the means to actually enforce such opt-outs.
The Digital News Publishers Association, which represents traditional news media outlets with a digital presence, including The Hindu, has sued ChatGPT maker OpenAI in the Delhi High Court for copyright infringement. OpenAI denies the allegations. The working paper argues that it may not be prudent to await the outcome of this and other similar litigation.
The recommendations, if put in place through a law, would essentially eliminate any allegations of improper access to data, by blessing all access provided a fee is paid. This model is similar to the “compulsory licensing” framework in place for radio stations in India, which are empowered to play music without negotiating rights for them, as long as a statutorily prescribed fee is paid to rightsholders.
This balancing may face pushback from both AI developers and content creators. The former may argue against anything that increases development costs (few AI firms are even profitable at the moment, leaving little appetite to share revenues), while content creators may resist a flat fee if they feel their inputs are far more valuable in training a model than those of other royalty recipients.
Payouts from the copyright society set up to distribute AI revenues to content creators would be weighted by factors like web traffic and social indicators, such as a publisher’s reputation. Any decision would be appealable to the judiciary, the working group says.
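The weighted distribution the working paper describes can be sketched in a few lines. This is a purely illustrative model: the paper does not prescribe a formula, and the publisher names, the 70/30 traffic-versus-reputation split, and the normalised scores below are all hypothetical assumptions.

```python
# Hypothetical sketch of a pooled royalty fund split among publishers
# in proportion to a blended score of web traffic and reputation.
# The weights and inputs are illustrative, not from the working paper.

def distribute_pool(pool, publishers, traffic_weight=0.7, reputation_weight=0.3):
    """Split `pool` in proportion to each publisher's blended score."""
    scores = {
        name: traffic_weight * p["traffic"] + reputation_weight * p["reputation"]
        for name, p in publishers.items()
    }
    total = sum(scores.values())
    return {name: pool * s / total for name, s in scores.items()}

# Made-up example: traffic and reputation normalised to the [0, 1] range.
payouts = distribute_pool(
    1_000_000.0,
    {
        "outlet_a": {"traffic": 0.8, "reputation": 0.9},
        "outlet_b": {"traffic": 0.2, "reputation": 0.1},
    },
)
```

Because the shares are proportional, the whole pool is always paid out; the contested question in practice would be how the weights themselves are set and audited.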
The ChatGPT-maker noted that 75% of surveyed workers reported that AI use at work improved the speed or quality of their output. Data science, engineering, and communications workers reported saving about 60 to 80 minutes per day, more than the survey average.
“Workers who save more than 10 hours per week are not just using more intelligence, they are also using multiple models, engaging with more tools, and using AI across a wider range of tasks,” said the company.
OpenAI’s survey found that 87% of IT workers reported faster IT issue resolution, 85% of marketing and product users reported faster campaign execution, 75% of HR professionals reported improved employee engagement, and 73% of engineers reported faster code delivery.
The survey drew on responses from 9,000 workers across almost 100 enterprises, as well as real-world usage data from OpenAI’s enterprise customers.
OpenAI noted that sectors reporting the largest benefits included accounting and finance, followed by analytics, communications, and engineering.
AI also helped workers tackle new challenges and improve their performance, according to OpenAI.
“Consistent with these findings, 75% of workers report being able to complete tasks they previously could not perform, including programming support and code review, spreadsheet analysis and automation, technical tool development and troubleshooting, and custom GPT or agent design,” said the company.
OpenAI reported that ChatGPT usage among business customers was growing all over the world, with the U.S., Germany, and Japan ranking as some of the most active markets by message volume.
Lava Play Max has a 6.72 inch FHD+ display with a 120 Hz refresh rate. The phone is IP54 rated for dust and splash resistance.
Lava has fitted the Play Max with a 5,000 mAh battery that supports 33W fast charging.
Lava Play Max features the MediaTek Dimensity 7300 processor with up to 8 GB of LPDDR4X RAM and 128 GB of UFS 3.1 storage. Up to 16 GB of virtual RAM is also available, along with expandable storage of up to 1 TB. It runs clean Android 15 out of the box.
Lava Play Max comes with a 50 MP rear camera with EIS and an 8 MP selfie camera.
Lava Play Max comes in Deccan Black and Himalayan White, starting at ₹12,999 for the 6 GB/128 GB variant. The 8 GB/128 GB unit costs ₹14,999.