
OpenAI is dissatisfied with some Nvidia chips and looking for alternatives, sources say


OpenAI is dissatisfied with some of Nvidia’s latest artificial intelligence chips, and it has sought alternatives since last year, eight sources familiar with the matter said, potentially complicating the relationship between the two highest-profile players in the AI boom.

The ChatGPT-maker’s shift in strategy, the details of which are first reported here, stems from an increasing emphasis on chips used to perform specific elements of AI inference, the process in which an AI model such as the one that powers the ChatGPT app responds to customer queries and requests. Nvidia remains dominant in chips for training large AI models, while inference has become a new front in the competition.

This decision by OpenAI and others to seek out alternatives in the inference chip market marks a significant test of Nvidia’s AI dominance and comes as the two companies are in investment talks. In September, Nvidia said it intended to pour as much as $100 billion into OpenAI as part of a deal that gave the chipmaker a stake in the startup and gave OpenAI the cash it needed to buy the advanced chips.

The deal had been expected to close within weeks, Reuters reported. Instead, negotiations have dragged on for months. During that time, OpenAI has struck deals with AMD and others for GPUs built to rival Nvidia’s. But its shifting product road map has also changed the kind of computational resources it requires and bogged down talks with Nvidia, a person familiar with the matter said. On Saturday, Nvidia CEO Jensen Huang brushed off a report of tension with OpenAI, saying the idea was “nonsense” and that Nvidia planned a huge investment in OpenAI.

“Customers continue to choose NVIDIA for inference because we deliver the best performance and total cost of ownership at scale,” Nvidia said in a statement.

A spokesperson for OpenAI in a separate statement said the company relies on Nvidia to power the vast majority of its inference fleet and that Nvidia delivers the best performance per dollar for inference.

After the Reuters story was published, OpenAI Chief Executive Sam Altman wrote in a post on X that Nvidia makes “the best AI chips in the world” and that OpenAI hoped to remain a “gigantic customer for a very long time”.

Seven sources said that OpenAI is not satisfied with the speed at which Nvidia’s hardware can produce answers for ChatGPT users on specific types of problems, such as software development and tasks in which AI communicates with other software. It needs new hardware that would eventually provide about 10% of OpenAI’s inference computing needs, one of the sources told Reuters.

The ChatGPT maker has discussed working with startups including Cerebras and Groq to provide chips for faster inference, two sources said. But Nvidia struck a $20-billion licensing deal with Groq that shut down OpenAI’s talks, one of the sources told Reuters. Nvidia’s decision to snap up key talent at Groq looked like an effort to shore up a portfolio of technology to better compete in a rapidly changing AI industry, chip industry executives said. Nvidia, in a statement, said that Groq’s intellectual property was highly complementary to Nvidia’s product roadmap.

Nvidia’s graphics processing chips are well-suited for the massive data crunching necessary to train large AI models like ChatGPT, which have underpinned the explosive growth of AI globally to date. But AI advancements increasingly focus on using trained models for inference and reasoning, which could be a new, bigger stage of AI, inspiring OpenAI’s efforts.

The ChatGPT-maker’s search for GPU alternatives since last year has focused on companies building chips with large amounts of memory, called SRAM, embedded on the same piece of silicon as the rest of the chip. Packing as much of this costly SRAM as possible onto each chip can offer speed advantages for chatbots and other AI systems as they crunch requests from millions of users.

Inference is more memory-intensive than training because the chip spends relatively more time fetching data from memory than performing mathematical operations. Nvidia and AMD GPU technology relies on external memory, which adds processing time and slows how quickly users can interact with a chatbot.

Inside OpenAI, the issue became particularly visible in Codex, its product for creating computer code, which the company has been aggressively marketing, one of the sources added. OpenAI staff attributed some of Codex’s weakness to Nvidia’s GPU-based hardware, one source said.

In a January 30 call with reporters, Altman said that customers using OpenAI’s coding models will “put a big premium on speed for coding work.”

One way OpenAI will meet that demand is through its recent deal with Cerebras, Altman said, adding that speed is less of an imperative for casual ChatGPT users.

Competing products such as Anthropic’s Claude and Google’s Gemini benefit from deployments that rely more heavily on the chips Google made in-house, called tensor processing units, or TPUs. These are designed for the sort of calculations required for inference and can offer performance advantages over general-purpose AI chips like the Nvidia-designed GPUs.

As OpenAI made clear its reservations about Nvidia technology, Nvidia approached companies working on SRAM-heavy chips, including Cerebras and Groq, about a potential acquisition, the people said. Cerebras declined and struck a commercial deal with OpenAI announced last month. Cerebras declined to comment.

Groq held talks with OpenAI for a deal to provide computing power and received investor interest to fund the company at a valuation of roughly $14 billion, according to people familiar with the discussions. Groq declined to comment.

But by December, Nvidia moved to license Groq’s tech in a non-exclusive all-cash deal, the sources said. Although the deal would allow other companies to license Groq’s technology, the company is now focusing on selling cloud-based software, as Nvidia hired away Groq’s chip designers.

Published – February 03, 2026 10:00 am IST


Crypto market volatility triggers $2.5 billion in bitcoin liquidations


The wipeouts in both short and long bitcoin positions are far below the record $19 billion in crypto liquidations [File]
| Photo Credit: REUTERS

Bitcoin investors liquidated $2.56 billion in recent days, according to data provider CoinGlass, as cryptocurrencies slumped following a sell-off in other risk assets, including equities and precious metals.

The wipeouts in both short and long bitcoin positions are far below the record $19 billion in crypto liquidations the market experienced after U.S. President Donald Trump announced new tariffs on China. Even so, analysts say the fresh cascade of wipeouts demonstrates how sensitive the crypto market has become to risk-off sentiment.

While bitcoin is notoriously volatile, cryptocurrencies have been weighed down by fresh concerns about the AI trade and a sell-off in precious metals sparked by Trump’s announcement that he was picking Kevin Warsh as his Fed chair nominee.

“What we’ve seen the last few months is probably people taking a step back while they have to reassess their risk frameworks and how they operate in this market,” said Adam McCarthy, a senior research analyst at digital market data provider Kaiko.

Bitcoin fell as low as $104,782.88 during the October 10-11 period, after setting a fresh record high just days earlier above $126,000.

It has yet to regain those peaks, and was last trading at around $78,396, after falling more than 6% on Saturday. Thin liquidity also exacerbated downward moves over the weekend, Bitfinex analysts said in a Monday research report.

“The biggest risk to prices at these levels have been outside forces — whether including a sharp rise in unemployment or deterioration of the AI trade,” said Jim Ferraioli, director of crypto research and strategy at Charles Schwab’s Schwab Center for Financial Research.

Markets encountered a barrage of news last week that weighed heavily on investor sentiment, including disappointing Microsoft earnings that raised concerns about AI spending. Microsoft on Wednesday reported revenue growth in its Azure cloud-computing business that was only slightly above expectations, sending shares down 10% the following day.

Markets also expect Warsh to lead a shift toward rate cuts alongside tighter balance-sheet policy, which is seen as leaning more hawkish.

That announcement sparked a sharp sell-off in gold and silver prices on Friday, with silver recording its worst day ever and gold notching its steepest daily fall since 1983.

“Investors were looking for an excuse to lighten up and they finally got several,” said David Morrison, senior market analyst at Trade Nation.


Elon Musk’s SpaceX acquires xAI


A 3D-printed miniature model of Elon Musk and xAI logo.
| Photo Credit: Reuters

Elon Musk’s space firm SpaceX said on Monday (February 2, 2026) it has acquired his artificial intelligence startup xAI, combining the rocket-and-satellite company with the maker of the Grok chatbot in a move aimed at unifying Mr. Musk’s AI and space ambitions.

A merger would represent one of the most high-profile corporate pairings in Silicon Valley, blending a space-and-defence contractor with a rapidly evolving AI developer whose costs are dominated by chips, data centres and energy.

The deal illustrates Mr. Musk’s push to fuse his fast-growing AI efforts with his aerospace and satellite-internet empire, betting that shared computing, data and engineering talent can accelerate both AI development and potentially support longer-term ambitions around space-based data centres. SpaceX and the AI startup were in discussions to merge ahead of a blockbuster public offering planned for later this year, Reuters had reported on Thursday (January 29, 2026), to bring Mr. Musk’s rockets, Starlink satellites, the X social media platform and Grok AI chatbot under one roof.

The combined company is expected to price shares at about $527 each, and would have a valuation of $1.25 trillion, Bloomberg News had reported earlier in the day.


No More Jet Lag: New Oral Compound Helps “Reset” the Body’s Internal Clock


A new oral compound can reset the circadian clock independent of timing, dramatically speeding recovery from jet lag in animal models. Most people have felt it: that foggy, out-of-sync sensation after a late-night flight, an all-nighter, or a sudden switch to overnight work. It happens because the body runs on a built-in 24-hour timekeeper, the […]


Nitrate in Drinking Water May Raise Dementia Risk, Study Warns


New research has found that people who consume higher levels of nitrate from vegetables have a lower risk of developing dementia, while those who get more nitrate and nitrite from animal-based foods, processed meats, and drinking water face a higher risk of dementia. New findings from Edith Cowan University (ECU) and the Danish Cancer Research […]


The Brain May Be Wired for Drinking Before the First Sip


Alcohol exposure before birth may quietly set the brain on a path toward risky drinking decades later. A new study published today (February 2) in JNeurosci examines how exposure to alcohol and stress before birth can influence brain function and drinking behavior later in life. Led by Mary Schneider and Alexander Converse at the University […]


Sleep Deprivation Triggers a Strange Brain Cleanup


When you don’t sleep enough, your brain may clean itself at the exact moment you need it to think. Most people recognize the sensation. After a night of inadequate sleep, staying focused becomes harder than usual. Thinking feels slower, attention wanders, and simple tasks take more effort than they should. Researchers at MIT have now […]


First tranche of research innovation fund to be spent by March


Image used for representational purposes only.
| Photo Credit: Getty Images/iStockphoto

The Department of Science and Technology will spend its first tranche of ₹3,000 crore — out of the ₹1 lakh crore corpus of the Research Development and Innovation scheme — by March this year, Abhay Karandikar, Secretary, Ministry of Science and Technology, said on Monday (February 2, 2026).

The scheme anticipates investing in high-risk, high-impact research and strengthening linkages between laboratories, start-ups, and industry. It was unveiled in February 2025. Although allotted ₹20,000 crore for the Financial Year 2025-26, the Department of Science and Technology had not been able to spend any of that corpus as of January. The February 1 Union Budget, however, allocates ₹20,000 crore to the Ministry of Science and Technology for FY 2026-27.

“The Research, Development, and Innovation fund will not directly invest in corporations or startups. It will invest through second-level fund managers, including alternate investment funds, development finance institutions. 193 such fund managers have applied, and we will be shortlisting and selecting out of it,” Mr. Karandikar said.

“Currently, only two statutory bodies — the Technology Development Board (under the Department of Science and Technology) and the Biotechnology Research and Innovation Council (under the Department of Biotechnology) have been appointed as fund managers (via nomination). That is the reason we couldn’t spend the ₹20,000 crore. The ₹1 lakh crore needs to be deployed over seven years. We will spend ₹3,000 crore by March 31, 2026,” he added.

Mr. Singh said that the provisions of the Budget had poised India to be a “manufacturing” economy. The ₹10,000 crore Biopharma Shakti mission, over five years, will be spread among multiple Ministries to develop biological materials that would create new jobs and spur progress in fields as varied as drug development and carbon capture.


Snowflake and OpenAI make a $200 million bid to corner corporate data intelligence market


Snowflake made its fortune by acting as the ultimate data warehouse for companies. By pioneering the separation of cloud storage from computing power, it allowed organisations to dump vast lakes of corporate information—from customer logs to supply chain metrics—into the cloud, organising it into neat, queryable rows. It was a lucrative business model that culminated in the largest software IPO in history in 2020. Yet, in the age of generative artificial intelligence, being a passive reservoir is no longer enough. Data must not only sit; it must speak, reason, and act.

This imperative explains the logic behind the announcement on Monday (February 2, 2026) that Snowflake has entered a $200 million, multi-year partnership with OpenAI. The deal, which integrates OpenAI’s most advanced models directly into Snowflake’s data infrastructure, represents a significant tactical shift for both firms, signalling that the battle for enterprise AI has moved from the chatbox to the database.

To understand the stakes, one must look at Snowflake’s current predicament. The company faces fierce competition from Databricks, a rival that has historically been stronger in the complex data science required for AI, and the “hyperscalers”—Amazon, Microsoft, and Google—who own the underlying infrastructure. Snowflake’s nightmare is “data egress,” where customers extract their data from Snowflake’s storage to feed it into AI models hosted elsewhere.

Closer to data

By embedding OpenAI’s technology, including the touted GPT-5.2 model, directly into its “Cortex AI” layer, Snowflake is attempting to invert the business model of the industry. Instead of moving heavy data to the models, it is bringing the models to the heavy data.

For Snowflake, the implications are existential and financial. The company is effectively turning its platform into an operating system for the enterprise. By enabling “AI Agents”—software entities capable of performing multi-step tasks like analysing sales data and drafting emails—Snowflake hopes to increase the consumption of its “credits” (its unit of pricing).

If a CFO can query the database in plain English to forecast quarterly earnings, the compute-heavy inference runs on Snowflake’s meter. It transforms the company from a storage facility into an intelligence factory, justifying its premium valuation in a market that has grown sceptical of software-as-a-service growth rates.

A bypass to enterprise AI

For OpenAI, the calculus is equally strategic. While ChatGPT captured the consumer imagination, the long-term profitability of the San Francisco-based lab relies on deep integration into the corporate backend. Partnering with Snowflake offers a bypass around the formidable “cold start” problem of enterprise AI: the lack of accessible, structured data.

Snowflake’s 12,600 customers, including giants like Canva, already have their most pristine data governed within Snowflake’s walls. This deal hands OpenAI a direct line to the proprietary information of the Fortune 500 without the friction of complex integration, cementing its models as the default cognitive engine of the corporate world.

The benefits for enterprises are, at first glance, compelling. The primary allure is the reduction of “data gravity” friction. CIOs have long been wary of sending sensitive proprietary data via API to third-party model providers due to security and latency concerns. This partnership ostensibly solves that by keeping the data within Snowflake’s “governed” perimeter.

The promise of “Snowflake Intelligence”—an agentic layer that allows employees to converse with their organisation’s entire knowledge base—could theoretically democratise data analysis, removing the bottleneck of needing SQL-proficient data scientists to answer basic business questions. It offers a cleaner, more secure architecture for deploying AI than the patchwork of vendors most companies currently struggle with.

Beware of sticker shocks

However, corporate buyers should temper their enthusiasm with caution. The most immediate concern is the tightening of vendor lock-in. Snowflake has long been criticised for its high costs; adding compute-intensive AI agents to the bill could lead to sticker shock. By building agents that rely specifically on OpenAI’s proprietary architecture within Snowflake’s environment, companies may find it technically and contractually difficult to switch to open-source alternatives or rival models in the future.

Furthermore, there is the question of reliability. The announcement emphasises “governance” and “trust,” yet large language models are notoriously prone to hallucinations. Deploying “AI agents” that can take action—not just retrieve information—adds a layer of operational risk. If a Snowflake-hosted agent misinterprets a schema and generates a flawed financial report, or triggers an erroneous supply order, the “tangible return on investment” promised by the press release could quickly turn into a liability.

This partnership represents a consolidation of the AI stack. Snowflake and OpenAI are betting that in the future, the distinction between the database that remembers and the AI that thinks will dissolve. For the enterprise, the convenience of this union is undeniable; the price of admission, however, will be total commitment to their combined ecosystem.

Published – February 02, 2026 07:31 pm IST


Samsung to launch Galaxy F70e 5G on February 9 in India


Samsung to launch Galaxy F70e 5G on February 9 in India
| Photo Credit: Special Arrangement

Samsung on Monday (February 2, 2026) announced the launch of the Galaxy F70e 5G on February 9 in India. This will be the first phone in the F70 series, which is aimed at young and Gen Z buyers.

Galaxy F70e 5G features a dual rear-camera system with a 50 MP main camera and a secondary depth camera. It will get an 8 MP front camera for selfies.

Galaxy F70e 5G will have a 120 Hz refresh-rate display and an 8.2 mm profile.

Galaxy F70e 5G will ship with a 6,000 mAh battery paired with 25W fast-charging support.

Galaxy F70e 5G will come in Limelight Green and Spotlight Blue shades, with a leather finish at the back.
