
Apple iOS vulnerability chain exposes new attack pathway, researchers say



A newly identified set of iOS vulnerabilities is at the centre of a sophisticated attack method known as “DarkSword,” according to new research from Google’s Threat Intelligence Group and the cybersecurity firms Lookout and iVerify. Their findings show how multiple flaws in Apple’s mobile operating system can be linked together to quietly break through the iPhone’s security layers.

While the attack potentially affects all users of iOS 18, an extremely large pool, the research points out that it was actively used against iPhone users in four countries: Saudi Arabia, Turkey, Malaysia and Ukraine.

At its core, DarkSword is what researchers describe as an exploit chain. In simple terms, that means attackers don’t rely on a single bug but instead combine several weaknesses, using one to unlock the next, until they gain deeper access to the device. This layered approach is what makes the technique both powerful and difficult to detect.

The research points to the use of previously unknown vulnerabilities, often called zero-days, which are security flaws that developers have not yet fixed because they are not publicly known. By chaining these together, attackers are able to move from limited access to full control of the system, including sensitive parts of the operating system that are normally locked down.

A watering hole attack

What stands out in this case is how the attack is delivered. Instead of requiring users to install malicious apps, the DarkSword chain can be triggered through compromised websites. Visiting one of these pages is enough to start the process in the background, with no obvious warning to the user. This method, sometimes referred to as a watering hole attack, works by infecting sites that targets are likely to visit.

Once the chain is successfully executed, the attackers can run what is essentially spyware. This allows them to extract data from the device, monitor activity, and access private information. In some cases, the malicious code does not remain on the phone after a reboot, which makes forensic analysis and detection much harder.

The report suggests that tools like DarkSword are no longer confined to highly targeted espionage operations. While such exploit chains were once associated mainly with government-backed actors, researchers are now seeing signs that similar capabilities are spreading more widely. This raises concerns about how quickly advanced techniques can move beyond niche use and into broader circulation.

Lower barriers to attack

Another notable aspect of the research is the indication that parts of the exploit framework may have been exposed online. If confirmed, that could lower the barrier for other groups to replicate or adapt the method, accelerating its use across different campaigns.

The findings underline a broader shift in mobile security. As smartphone defences have improved, attackers have responded by building more complex, multi-step intrusion methods. Each individual flaw might seem minor on its own, but when combined, they can undermine even tightly controlled systems.

Researchers say the vulnerabilities used in the DarkSword chain have since been addressed in newer iOS updates; Apple’s most recent release, iOS 26.3.1, shipped earlier this month. For users who cannot immediately update their devices, the researchers suggest enabling “Lockdown Mode,” a hardened security feature designed to reduce the attack surface by limiting certain functionalities that attackers often exploit. Even so, the episode highlights how critical timely updates remain, especially as attack techniques continue to evolve in both scale and sophistication.


You Can Have a Normal Weight and Still Be at Risk for Heart Failure


Research Highlights: Fat stored around the waist, often called belly fat or visceral fat, showed a much stronger link to heart failure risk than body mass index (BMI), making waist size a more revealing measure of risk. Inflammation throughout the body emerged as a major factor connecting abdominal fat to heart failure, accounting for roughly […]


Scientists Uncover Aging Link That Could Change How Cancer Is Treated


A new study reveals how aging changes the biological behavior of lung cancer. Scientists at the University of Gothenburg have identified a protein that may increase the risk of lung cancer spreading and returning after treatment. Their findings suggest a possible path toward more targeted therapies, especially for older patients. Lung cancer is most common […]


“Harmless” Peptide May Actually Be Linked to Alzheimer’s Disease


Research from UC Santa Cruz indicates that the P3 peptide—an alternative cleavage product of the amyloid precursor protein—may play a role in Alzheimer’s disease. For many years, pharmaceutical companies have focused their Alzheimer’s drug development efforts on amyloid beta, a peptide known for forming sticky deposits in the brain. Billions of dollars and decades of […]


Meta moves Delhi HC against CCPA fine for walkie-talkie sale on Facebook



Meta on Wednesday (March 18, 2026) challenged in the Delhi High Court a Central Consumer Protection Authority order imposing a ₹10 lakh penalty on it for alleged unauthorised sale and listing of walkie-talkies on the Facebook Marketplace.

Meta’s counsel submitted that, unlike Amazon and Flipkart, Facebook was not an e-market but merely a “notice board”, and therefore, the Central Consumer Protection Authority (CCPA) has no jurisdiction over it.

The court posted Meta’s petition for hearing on March 25, asking it to explain how the order can be termed “without jurisdiction”. It also asked Meta why the National Consumer Disputes Redressal Commission cannot consider the issue.

Senior advocate Mukul Rohatgi, appearing for Meta, argued that Facebook neither provides a mechanism for sale and purchase nor does it charge any commission from the users, as it is not an e-commerce platform.

“This is a notice board meant only for Facebook users. We are not a shop. No commercial sales are allowed. No consideration is charged. We don’t charge anybody,” the senior counsel said.

The CCPA, in its order passed on January 1, 2026, held that Meta violated the Consumer Protection Act and its rules and the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules by allegedly permitting walkie-talkie listings on Facebook Marketplace without mandatory disclosures.

Meta, however, claimed that the CCPA acted in excess of its jurisdiction by acting on the “untenable” premise that Facebook Marketplace was subject to and governed by the legal framework for e-commerce.

In its January 1 order, the CCPA also directed Meta to ensure that no walkie-talkies or any other products requiring statutory approval/certification are listed, hosted, advertised or sold on its platform without full compliance with applicable laws and mandatory disclosures.

It also asked Meta to periodically undertake a self-audit to check deceptive listings and publish a certificate of such self-audit on its website in the public and consumer interest.


Samsung Electronics plans to produce Tesla chips starting late 2027


Samsung Electronics said on Wednesday that it expects to start volume production of Tesla’s chips.


Trump administration defends Anthropic blacklisting in US court



The Trump administration said in a Tuesday court filing that the Pentagon’s blacklisting of Anthropic was justified and lawful, opposing the artificial intelligence lab’s high-stakes lawsuit challenging the decision.

U.S. Defense Secretary Pete Hegseth designated Anthropic, the maker of popular AI assistant Claude, a national security supply chain risk on March 3 after the company refused to remove guardrails against its technology being used for autonomous weapons or domestic surveillance.

The Trump administration’s filing says Anthropic is unlikely to succeed on its claims that the U.S. action violated speech protections under the U.S. Constitution’s First Amendment, asserting the dispute stems from contract negotiations and national security concerns, not retaliation.

“It was only when Anthropic refused to release the restrictions on the use of its products — which refusal is conduct, not protected speech — that the President directed all federal agencies to terminate their business relationships with Anthropic,” the administration’s legal filing said. The filing, from the U.S. Justice Department, said “no one has purported to restrict Anthropic’s expressive activity.”

Anthropic’s lawsuit in California federal court asks a judge to block the Pentagon’s decision while the case plays out. Some legal experts say the company appears to have a strong case that the government overreached.

In a statement, Anthropic said it was reviewing the government’s filing. The company said “seeking judicial review does not change our longstanding commitment to harnessing AI to protect our national security, but this is a necessary step to protect our business, our customers, and our partners.”

The White House did not immediately respond to a request for comment. U.S. President Donald Trump backed Hegseth’s move, which excludes Anthropic from a limited set of military contracts but could damage the company’s reputation and cause billions of dollars in losses this year, according to its executives.

The designation came after months of negotiations between the Pentagon and Anthropic reached an impasse, prompting Trump and Hegseth to denounce the company and accuse it of endangering American lives with its usage restrictions.

Anthropic has disputed those claims and said AI is not yet safe enough to be used in autonomous weapons. The company said it opposes domestic surveillance as a matter of principle. In its March 9 lawsuit, Anthropic said the “unprecedented and unlawful” designation violated its free speech and due process rights, while running afoul of a law requiring federal agencies to follow specific procedures when making decisions.

The Pentagon separately designated Anthropic a supply chain risk under a different law that could expand the order to the entire government.

Anthropic is challenging that move in a second lawsuit in a Washington, D.C. appeals court.


A mystery AI model has developers buzzing: Is this DeepSeek’s latest blockbuster?


A powerful artificial intelligence model that appeared anonymously on a developer platform last week has sparked speculation that Chinese startup DeepSeek may be quietly testing its next-generation system ahead of an official launch.

The free model, called Hunter Alpha, surfaced on the AI gateway platform OpenRouter on March 11 without any developer attribution and was later described by the platform as a “stealth model.”

During tests conducted by Reuters, the Hunter Alpha chatbot described itself as “a Chinese AI model primarily trained in Chinese” and said its training data extended to May 2025, the same knowledge cutoff point reported by DeepSeek’s own chatbot.

When asked about its creator, however, the system declined to identify its developer.

“I only know my name, my parameter scale and my context window length,” the chatbot said.

Neither DeepSeek nor OpenRouter has identified the model’s creator and they did not respond to requests for comment.

Hunter Alpha’s profile page describes it as a 1-trillion-parameter model, meaning it was trained using roughly one trillion adjustable values that determine how the system processes language and generates responses. Models with more parameters generally require significantly more computing power to operate.

The system also advertises a context window of up to one million tokens, a measure of how much text an AI model can process or remember during a single interaction. A token roughly corresponds to a short piece of text, such as part of a word.
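
Those figures can be made concrete with a rough rule of thumb: for English text, one token is commonly approximated as about four characters. The sketch below uses that heuristic (real models rely on trained tokenizers, and Hunter Alpha’s actual scheme is unknown) to estimate whether a prompt fits inside a given context window:

```python
# Rough rule of thumb: ~4 characters of English text per token.
# Real models use trained tokenizers (BPE or similar), so this is only
# an illustration of the concept, not any model's actual scheme.

def estimate_tokens(text: str, chars_per_token: float = 4.0) -> int:
    """Crude token estimate for a piece of English text."""
    return max(1, round(len(text) / chars_per_token))

def fits_in_context(text: str, context_window: int = 1_000_000) -> bool:
    """Check a prompt against a context window like the one million
    tokens advertised for Hunter Alpha (~4 MB of plain English text
    under this heuristic)."""
    return estimate_tokens(text) <= context_window

prompt = "Summarise the findings of the report. " * 1000
print(estimate_tokens(prompt))   # 9500 tokens for 38,000 characters
print(fits_in_context(prompt))   # True
```

Under this approximation, even very long documents sit comfortably inside a million-token window, which is why such a window paired with free access stood out to developers.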

“The combination that stood out was Hunter Alpha’s 1 million token context paired with reasoning capability and free access,” said Nabil Haouam, an engineer who builds AI agent systems.

“Most frontier models with that context window come with real cost at scale,” he added.

Those specifications resemble expectations in local media for DeepSeek’s next-generation V4 model, which Chinese outlets have reported could launch as early as April. DeepSeek, like many of its Chinese competitors, is well-funded, though it has an unusual structure given its parent company is a quantitative hedge fund rather than a tech conglomerate.

While the overlap does not establish a direct connection, it has intensified speculation among developers that the anonymous system could be an early test version of the upcoming release by DeepSeek.

“The chain-of-thought pattern is probably the strongest signal,” said Daniel Dewhurst, an AI engineer who analysed the model after its release, referring to how the AI model reasons.

“Reasoning style is hard to disguise and tends to reflect how a model was trained.”

Hunter Alpha’s scale and memory capacity also match specifications that have circulated for DeepSeek V4 since early this year, he said.

Still, some developers cautioned that the evidence linking the model to DeepSeek was inconclusive.

“My analysis suggests Hunter Alpha is likely not DeepSeek V4,” said Umur Ozkul, who runs independent AI benchmark tests, citing differences in token-related behaviour and architectural patterns when compared with DeepSeek’s existing systems.

He said speculation connecting the model to DeepSeek was understandable given the timing and capabilities advertised.

Anonymous model launches are not unusual, as platforms like OpenRouter allow developers to send queries to dozens of AI models through a single interface, making them a popular testing ground for new systems.

An anonymous model called Pony Alpha appeared on OpenRouter in February before Chinese firm Zhipu AI confirmed it was part of its GLM-5 system five days later.

A notice on Hunter Alpha’s profile page said all prompts and completions for the model “are logged by the provider and may be used to improve the model,” underscoring the industry-wide practice of using stealth model launches for unbiased feedback.

The model was adopted rapidly after appearing on the platform and processed more than 160 billion tokens as of Sunday, according to OpenRouter statistics.

Much of the activity came from software development tools and AI agent frameworks like OpenClaw, which allow AI systems to autonomously plan tasks and interact with external software.

Published – March 18, 2026 02:03 pm IST


Alibaba’s AI strategy shift comes into focus with big bets on agents


Alibaba is sharpening its artificial intelligence strategy by focusing on agents that connect the many businesses under its sprawling corporate umbrella.

In recent months, Alibaba has rolled out several AI agent integrations and this week, the firm said it would separate its AI businesses from its cloud computing arm. The newly formed Alibaba Token Hub business group, led by Chief Executive Eddie Wu, is the clearest sign yet that the company is shifting its focus to digital assistants powered by AI models that consume far more tokens (units of data used by models to generate language) than traditional Q&A chatbots.

Alibaba did not respond to a request for comment on this story.

The $325 billion e-commerce giant reports quarterly results on Thursday, with AI monetisation in focus as major tech firms in China and beyond wrestle with how to make the era-defining technology profitable. Analysts expect Alibaba’s third-quarter revenue to rise 3.8% and net income to fall 42.5%. The quarter included Singles’ Day, China’s biggest shopping festival.

Facing a prolonged slump in consumer confidence as shoppers save rather than spend, a weak macroeconomic outlook and a property crisis that has eroded household wealth, Alibaba has turned to new business models to encourage consumption.

Last year, the firm invested heavily in acquiring users for its instant retail platform, which competes in the one-hour delivery market with Meituan. This year, Alibaba’s AI chatbot Qwen has begun moving beyond answering questions to helping users make purchases directly through a chat interface.

In February, an early push to get users to try Qwen’s new functions encountered some hurdles. Alibaba launched the first phase of a 3 billion yuan ($435.7 million) coupon campaign that allowed users to make in-app purchases on Alibaba-owned retail platforms using only chatbot prompts. The coupons proved too popular, prompting a temporary shutdown of the app.

According to Brian Wong, a former Alibaba employee and author of “The Tao of Alibaba,” the company’s wide-ranging ecosystem, spanning e-commerce, food delivery, travel, movie ticketing and more, means executing all those daily functions through a chatbot could fundamentally shift consumer behaviour.

“Think of it like having OpenAI, Amazon, Stripe, Uber, DoorDash, Ticketmaster, Expedia, Netflix and Charles Schwab all integrated into one text box you can just use natural language to execute,” he said. “This is what the company has enabled through its restructuring and it’s happening first in China. I don’t see this happening in the U.S. because of the challenges of integrating different platforms from different companies.”

Alibaba is not the only Chinese tech giant using AI agents to integrate consumer-facing functions, but rivals like Tencent and TikTok-owner ByteDance would mainly serve as agent platforms interacting with third-party companies inside their apps. Alibaba’s ecosystem gives it an advantage, said Ed Sander, an analyst at China Digital Retail Report.

“Alibaba also has the fulfillment and logistics part built in, not to mention running everything on Alibaba’s cloud infrastructure, no other company has the ability to execute every part from the chatbot all the way through to the logistics in the way Alibaba does,” he said.

On Tuesday, Alibaba launched another enterprise-focused AI platform targeting automation. The platform, called Wukong, can coordinate multiple AI agents to handle complex business tasks like document editing, spreadsheet updates, meeting transcription and research within a single interface.

A key driver behind the shift to agents is not only tapping into the frenzy triggered by the launch of OpenClaw in China, but also the potential to make money from it. These agents, which can make decisions and execute tasks around the clock, consume tens to hundreds of times more tokens per day than a typical chat session, according to estimates from Poe Zhao, a China tech analyst and founder of Hello China Tech.

This matters especially for Chinese firms, most of which offer open-source AI models that are free to download and have seen token prices plunge amid intense domestic competition among leading tech companies.

Alibaba’s AI push comes as the company navigates turmoil in its AI leadership ranks. Lin Junyang, head of the firm’s Qwen model division, left in early March, the third senior Qwen executive to leave this year.

“This has heightened concerns about morale in Qwen and Alibaba’s ability to retain AI talent and maintain its leadership in the AI model race,” Morningstar analyst Chelsey Tam said. “Top AI talent is scarce. If Lin and core Qwen members join a competitor, it would be a setback for Alibaba.”

“The AliCloud bench is deep and broad enough that while Lin’s departure was not ideal, there’s sufficient talent to fill in the gaps, particularly in light of the new restructuring that just took place,” Wong said.

Published – March 18, 2026 02:40 pm IST


Meta vowed to stop illegal financial ads in Britain. It failed 1,000 times in a week


U.S. tech giant Meta has repeatedly failed to stop illegal ads for high-risk financial products running on its platforms in Britain, despite committing to block them, according to a review by the country’s financial regulator.

Britain’s Financial Conduct Authority found that during one week in November, 1,052 ads for currency trading and certain complex financial instruments were posted on Meta’s platforms by advertisers not authorised by the regulator to promote them.

What’s more, 56% of those ads were from an unspecified number of unauthorised advertisers the FCA had already flagged to Meta, according to the results of the review seen by Reuters and reported here for the first time.

Worldwide, billions of users of Meta’s platforms have been exposed to ads for fraudulent e-commerce and investment schemes, illegal online casinos and banned medical products, according to internal Meta documents previously reported by Reuters.

Britain’s FCA warned last year that people were increasingly being targeted on social media by online trading scams where fraudsters offer currency trades. Its review was an attempt to see how successful Meta has been at weeding out the rogue ads.

Asked about the FCA’s findings, Ryan Daniels, a spokesperson for Meta, said it fights fraud and scams aggressively on a global level and takes swift action on the vast majority of reports within days.

The regulator focused on Meta’s platforms, which include Facebook, Instagram and WhatsApp, because they carry a disproportionate amount of suspicious financial ads, a person familiar with the FCA’s work said.

“Fraud is the most common crime in the UK,” an FCA spokesperson said. “With over half of some scams originating on their platforms, it’s vital Meta steps up and uses its tools to protect users from scam content.”

The regulator repeated its review of posts on Meta for another week in December. It again found that a small number of repeat offenders were responsible for the majority of the illegal ads it discovered, the person familiar with the FCA’s work said, without giving a breakdown of the number of illegal ads or repeat offenders.

The person said that despite regular engagement with Meta over the issue of scam ads, the FCA has failed to see a material difference in its approach and will continue to test the company’s controls and monitoring systems.

“Any suggestion that we ignore FCA reports misrepresents our ongoing efforts to protect people,” Meta’s Daniels said.

The company said further that advertisers running financial services ads in Britain were required to be authorised by the FCA and were responsible for complying with applicable law.

Britain’s Online Safety Act, which allows regulators to fine social media companies up to 10% of global revenue for running illegal user-generated content, started coming into force in March 2025. However, the provision giving them power to take action over scam ads which have been paid for has been delayed until at least 2027.

In the absence of legislation, Meta made a voluntary commitment back in 2022 to only allow firms authorised by the financial regulator to run financial services advertisements and updated its UK policy to reflect that commitment.

The FCA has no power to take action against Meta itself, because it is regulated by communications watchdog Ofcom. When it comes to paid-for scam ads, Ofcom also remains powerless until the provision in the Online Safety Act comes into effect.

“We’re working at pace to implement this. The timeline has been affected by factors beyond our control, in particular a legal challenge against the government,” an Ofcom spokesperson said, adding that it had proposed social media companies use automated technology to detect and remove fraudulent content.

The FCA can take action against unauthorised advertisers for running financial ads on social media platforms, although many of them are outside Britain.

It issues alerts to consumers to avoid unauthorised firms, has charged and fined unauthorised influencers in Britain for promoting high-risk products on social media and regularly asks social media platforms to take down illegal financial ads.

Britain’s National Crime Agency, meanwhile, has successfully taken down financial scam networks targeting Britons on social media platforms from countries such as Nigeria.

Fraud Minister David Hanson said he would continue to press Meta and other platforms on the need for tech firms to do more to tackle scams until the fraudulent ad provision in the Online Safety Act comes into force.

“In the meantime … I expect them to go further and faster in standing up to this threat,” he told Reuters.

The FCA’s review was limited to ads for foreign exchange trading and contracts for difference (CFDs) because it has identified such products as carrying a particularly high risk of harming consumers, the person familiar with the FCA’s work said.

CFDs are complex derivative products used to speculate on price movements on a wide range of assets, including currencies. Because losses can far exceed initial investments, the FCA mandates strict protections for investors, such as requiring firms to disclose what proportion of their clients lost money.
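
A toy calculation shows why losses on a leveraged CFD can exceed the deposit. The figures below are hypothetical and illustrative only, not drawn from the FCA’s review or any real product:

```python
# Hypothetical CFD position. The trader posts a margin deposit but is
# exposed to the full notional value of the trade, so a modest adverse
# price move can cost more than the original stake.

def cfd_pnl(notional: float, price_move_pct: float) -> float:
    """Profit or loss on the position for a given fractional price move."""
    return notional * price_move_pct

margin = 1_000.0               # trader's deposit
leverage = 30                  # 30:1, a common regulatory cap for retail FX CFDs
notional = margin * leverage   # 30,000.0 of currency exposure

loss = cfd_pnl(notional, -0.05)    # a 5% move against the position
print(loss)                        # -1500.0: 1.5x the deposit is gone
```

The same leverage that turns a small deposit into a large exposure turns a small adverse move into a loss bigger than the account, which is why the FCA mandates the disclosure rules described above.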

Reuters was unable to determine the total number of currency and CFD ads posted on Meta’s platforms during the weeks the FCA reviewed. Meta did not respond when asked for a weekly tally.

To test how effective Meta is at blocking potential scams under different regulatory regimes, a Reuters reporter created a suspicious investment promotion to run on Facebook teasing 10% returns a week.

Reuters tried to run the ad in Britain, where Meta doesn’t risk any financial penalty for running scam ads, and Australia, where it faces fines of up to A$50 million ($35 million) if it fails to detect scams under that country’s mandatory approach to financial advertiser verification.

During the ad verification process for both countries, Meta asked Reuters to declare if the ad was for financial services by ticking a box. To try to emulate scammers, it didn’t tick the box in either case.

The ad ran in Britain without further scrutiny. Reuters pulled the ad shortly after it was approved by Meta.

In Australia, even though Reuters hadn’t flagged the ad as being for financial services, Meta blocked it anyway and asked the news agency to prove it was authorised by Australia’s financial regulator to run ads for financial services.

Meta said in emailed comments that the ad posted by Reuters in Australia was caught because of enhancements in its process in that country for financial services verification, without explaining what those enhancements were.

Meta said it was working to identify more effective safeguards that worked globally. It said it had increased the percentage of ad revenue globally coming from verified advertisers to 70% in 2025 from 55% at the end of 2024.

Martin Lewis, a consumer rights campaigner in Britain, said big tech companies needed to stop framing the fight against scam adverts as a technological problem.

“This is a financial problem. If you spend enough money, you can stop the scammers, and we need to change the economics so it is worth their while to spend the money to stop the scammers,” he told Reuters.

Reset Tech, a digital rights advocacy group, examined Meta’s ad library over a two-week period in July and August.

It looked for ads referencing three British banks (Barclays, HSBC, and Revolut) and then looked at which of those ads had three or more “red flags”, such as offers of impossible returns, suspicious domains or fake endorsements.

Reset Tech found 51.1% of the 2,913 ads it identified were likely scams, such as suspected fraudulent investment schemes, credit offers or government support schemes. It estimated Meta could host 29,068 scam ads referencing the banks over a year, translating into 53.6 million cumulative exposures across Britain and the EU.

Reuters couldn’t independently verify Reset’s findings, which haven’t previously been reported.

Meta said Reset Tech’s report employed subjective and unreliable classification criteria to determine “suspected scams” and “suspicious ads”, none of which the advocacy group could verify as being actual scams.

Meta said the report showed suspected scams had significantly lower reach than legitimate ads and that was proof its systems were successfully limiting the distribution of potentially violating content.

Barclays said a survey it commissioned last year of 2,000 people in Britain showed eight in 10 think tech firms should do more to stop scams. It said banks, social media platforms, tech firms and telecoms companies should work together to stop fraud.

Revolut said Meta’s platforms were the biggest source of authorised fraud reported to it. The bank said Meta must act urgently to improve the effectiveness of its verification systems and show its anti-scam initiatives were having a tangible impact.
