
The dark side of AI-powered toys | Explained 


Generative AI is now part of children’s toys. These novel AI-powered toy “companions” are available on popular e-commerce platforms. Their makers claim the toys help educate children. But experts warn that such toys could impact a child’s healthy development.

How do AI toys work?

AI toys require internet connectivity to work and can take the form of plush aliens, fluffy animals, or friendly-faced robots. Some robotic AI toys can move independently, while those in the form of stuffed animals are meant to be carried around by their owners.

Many of these AI toys come with embedded microphones that listen to children in order to formulate replies. Their makers promote them as products that offer educational answers, give emotional support, guide children through tasks or games, teach them new skills, and pay compliments.
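To make that mechanism concrete, here is a minimal sketch of the interaction loop such descriptions imply: the toy records speech, sends a transcript to a cloud-hosted model alongside a child-safety system prompt, and speaks the reply back. The endpoint URL, system prompt, and helper functions are hypothetical stand-ins, not any toymaker’s actual implementation.

```python
import requests  # third-party HTTP library, assumed installed

# Hypothetical cloud endpoint; a real toy talks to its vendor's own service.
LLM_URL = "https://llm.example.invalid/v1/chat"

def reply_to_child(transcript: str) -> str:
    """Send the child's transcribed speech to a cloud-hosted model, return its reply."""
    response = requests.post(LLM_URL, json={
        # A child-safety system prompt is layered on top of a general-purpose
        # model -- which is why, as discussed below, filters can still leak.
        "system": "You are a friendly toy. Keep every answer safe for young children.",
        "prompt": transcript,
    }, timeout=10)
    return response.json()["text"]

# The toy's main loop, in outline (all helpers hypothetical):
#   audio = record_from_microphone()
#   transcript = speech_to_text(audio)
#   speak_aloud(reply_to_child(transcript))
```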

Take, for example, Grem by Curio. Shaped like a cute, blue alien creature with large eyes, the plush toy was voiced and designed by the singer Grimes, according to the company’s website. It does not require a subscription.

Meanwhile, many Amazon.com reviews for the ‘Miko 3 AI Robot for Kids’ toy focused on how children were fascinated by the robot, with one buyer noting that it could play hide-and-seek with kids, cheer them up, and serve as an intercom for parents. Others complained about the subscription price and poor battery life.

Multiple AI toy makers on Amazon claim that the models powering these interactions are sourced through reputed providers, such as OpenAI’s ChatGPT, and that there are safety measures in place to ensure they maintain children’s privacy and do not initiate inappropriate conversation topics.

These AI toys are starkly different from toys that make the same sounds over and over again when a button is pressed, or those that can only mimic children’s voices and offer a limited set of replies. AI toys are also expensive.

Why are experts warning parents about AI toys?

AI toys made headlines in late 2025 when the advocacy group US PIRG Education Fund reported that Singapore-based FoloToy’s Kumma bear for children encouraged sexual conversations, and also allegedly told users how to access dangerous objects.

The toy previously used OpenAI’s GPT-4o, but OpenAI later suspended the developer, which then moved to another tech provider, according to the group.

Common Sense Media, a tech and media ratings organisation, also outlined the risks of AI toys in a report on January 22. AI toys or “companions” that look cute and soft on the outside but use voice-based chatbots to engage children come with “unacceptable risks,” per the organisation.

Common Sense Media cited risk factors that included unhealthy emotional attachments to AI toys, children’s extremely private data being collected, and cases of chatbots not working reliably. Curio’s Grem and Miko 3 were two of the toys it tested.

The organisation pointed out that children aged five or under cannot properly tell humans apart from AI, so such AI toys could harm how these children develop and recognise key relationships. Even older children between ages six and 12, who might understand that the AI isn’t a real person, could still use the toys as a replacement for healthy human connections, per the report.

The organisation listed a slew of concerning factors affecting not just children, but their families as well.

For example, AI toys that require subscriptions could harm children who become dependent on them for emotional comfort if parents are unable to keep up with payments.

Furthermore, about 27% of AI toy outputs were inappropriate for children, as per Common Sense Media’s testing, and included content about self-harm, drugs, and unsafe roleplay, apart from veering into mature topics and sharing risky advice.

“This happens because the underlying AI models are trained on adult internet content, and child-safety layers are added after the fact. These filters are imperfect, and content designed for adults or teens can leak through when children ask questions in unexpected ways,” noted the organisation.

Furthermore, Common Sense Media stressed the importance of giving children privacy during difficult moments and teaching them to cope with everyday frustrations in healthy ways, instead of distracting them with an always-cheerful toy.

The organisation reported that parental insight tools for these AI toys were largely inadequate, and that children’s private interactions were possibly being shared with third parties for further AI training purposes.

Because the underlying models can hallucinate, the AI toys could also provide incorrect responses and confuse their young users, per the report.

How should parents and caretakers treat AI toys?

Common Sense Media acknowledged that AI toys offered some benefits such as stimulating children’s learning, correcting potential bad behaviour, or telling customised stories to young users. The makers of these AI toys also point out that they offer a screen-free playtime experience for children, while encouraging learning and communication.

However, Common Sense Media recommended that parents engage with their children directly and guide them towards more traditional learning experiences, such as non-AI toys, books, museum visits, playdates, family game nights, imaginative play, and art.

“Human interaction is developmentally essential. No AI toy can replace the benefits of reading together with a parent, playing pretend with siblings, building with friends, or learning from teachers and caregivers,” stated Common Sense Media.

“These messy, complex human interactions are where real development happens. AI toys at best supplement these experiences and at worst replace them—and replacement is the greater risk,” it added.

Published – January 31, 2026 04:45 pm IST


Why does Elon Musk want to put AI data centers in space?



A proposed merger between Elon Musk’s SpaceX and xAI, reported exclusively by Reuters on Thursday, could give fresh momentum to Musk’s plan to launch satellite data centres into orbit as he battles for supremacy in the rapidly escalating AI race against tech giants like Alphabet’s Google, Meta and OpenAI.

Space-based data centres, still an early-stage concept, would likely rely on hundreds of solar-powered satellites networked in orbit to handle the enormous computing demands of AI systems like xAI’s Grok or OpenAI’s ChatGPT, at a time when energy-hungry Earth-based facilities are becoming increasingly costly to run. Advocates say operating above the atmosphere offers nearly constant solar power and eliminates the cooling burdens that dominate ground-based data-centre costs, potentially making AI processing far more efficient.

But engineers and space specialists caution that commercial viability remains years away, citing major risks from space debris, the difficulty of defending hardware against cosmic radiation, limited options for in-person maintenance, and launch costs. Deutsche Bank expects the first small-scale orbital data-centre deployments in 2027–28 to test both the technology and the economics, with wider constellations, potentially scaling into the hundreds or thousands, emerging only in the 2030s if those early missions work.

SpaceX is the most successful rocket-maker in history and has launched thousands of satellites into orbit as part of its Starlink internet service. If space-based AI computing is the future, SpaceX is ideally placed to operate AI-ready satellite clusters or to facilitate on-orbit computing. “It’s a no-brainer building solar-power data centres in space … the lowest-cost place to put AI will be space, and that will be true within two years, three at the latest,” Musk said at the World Economic Forum in Davos earlier this month.

SpaceX is considering an initial public offering this year that could value the rocket and satellite company at over $1 trillion, Reuters has reported. Part of the proceeds would go to funding the development of AI data centre satellites, sources say.

Jeff Bezos’ Blue Origin has been working on technology for AI data centres in space, building on the Amazon founder’s prediction that “giant gigawatt data centers” in orbit could beat the cost of their Earth-bound peers within 10 to 20 years by tapping uninterrupted solar power and radiating heat directly into space.

Nvidia-backed Starcloud has already offered a glimpse of that future: its Starcloud-1 satellite, launched on a Falcon 9 last month, carries an Nvidia H100, the most powerful AI chip ever placed in orbit, and is training and running Google’s open-source Gemma model as a proof of concept. The company ultimately envisions a modular “hypercluster” of satellites providing about five gigawatts of computing power, comparable to several hyperscale data centres combined.

Google is pushing the space-based data centre idea with Project Suncatcher, a research effort to network solar-powered satellites equipped with its Tensor Processing Units into an orbital AI cloud. The company plans an initial prototype launch with partner Planet Labs around 2027.

China also plans to create a “Space Cloud” by launching space-based artificial intelligence data centres over the next five years, state media reported on Thursday. China’s main space contractor, China Aerospace Science and Technology Corporation, vowed to “construct gigawatt-class space digital-intelligence infrastructure,” according to a five-year development plan.


Apple acquires Israeli audio AI startup Q.ai



Apple on Thursday said it has acquired Q.ai, an Israeli startup working on artificial intelligence technology for audio. Apple did not disclose terms of the deal for Q.ai, which was backed by venture capital firms Matter Venture Partners, Kleiner Perkins, Spark Capital, Exor and GV, formerly known as Google Ventures. The Financial Times reported the deal was worth nearly $2 billion, a figure Reuters could not independently verify.

Apple did not say how it will use Q.ai’s technology but said the startup has worked on new applications of machine learning to help devices understand whispered speech and to enhance audio in challenging environments.

Q.ai last year filed a patent application to use “facial skin micromovements” to detect words mouthed or spoken, identify a person and assess their emotions, heart rate, respiration rate and other indicators.

Q.ai’s 100 employees, including CEO Aviad Maizels and co-founders Yonatan Wexler and Avi Barliya, will join Apple, the companies said.

Maizels founded three-dimensional sensing firm PrimeSense and sold it to Apple in 2013. The PrimeSense deal eventually helped Apple move away from fingerprint sensors on its iPhones and toward facial recognition technology.

In a statement, Maizels said, “Joining Apple opens extraordinary possibilities for pushing boundaries and realizing the full potential of what we’ve created, and we’re thrilled to bring these experiences to people everywhere.”

Apple has been putting new AI features into its AirPods earbuds, last year introducing technology that allows them to translate speech between languages.

Q.ai “is a remarkable company that is pioneering new and creative ways to use imaging and machine learning,” Johny Srouji, Apple’s senior vice president of hardware technologies, said in a statement. “We’re thrilled to acquire the company, with Aviad at the helm, and are even more excited for what’s to come.”


Video game stocks slide on Google’s AI model that turns prompts into playable worlds



Shares of videogame companies fell sharply in afternoon trading on Friday after Alphabet’s Google rolled out an artificial intelligence model capable of creating interactive digital worlds with simple prompts.

Shares of “Grand Theft Auto” maker Take-Two Interactive fell 10%, online gaming platform Roblox was down over 12%, and videogame engine maker Unity Software dropped 21%.

The AI model, dubbed “Project Genie”, allows users to simulate a real-world environment through prompts with text or uploaded images, potentially disrupting how video games have been made for over a decade and forcing developers to adapt to the fast-moving technology.

“Unlike explorable experiences in static 3D snapshots, Genie 3 generates the path ahead in real time as you move and interact with the world. It simulates physics and interactions for dynamic worlds,” Google said in a blog post on Thursday.

Traditionally, most videogames are built inside a game engine such as Epic Games’ “Unreal Engine” or the “Unity Engine”, which handles complex processes like in-game gravity, lighting, sound, and object or character physics.

“We’ll see a real transformation in development and output once AI-based design starts creating experiences that are uniquely its own, rather than just accelerating traditional workflows,” said Joost van Dreunen, games professor at NYU’s Stern School of Business.

Project Genie also has the potential to shorten lengthy development cycles and reduce costs, as some premium titles take around five to seven years and hundreds of millions of dollars to create.

Videogame developers have been increasingly adopting artificial intelligence as a way to stand out in a highly competitive industry dominated by large players. A Google study last year showed that nearly 90% of game developers use AI agents.

However, the use of AI in videogames is a contentious topic, with many fearing that the technology could lead to a wave of job losses, after the industry went through record layoffs over the past few years as it recovered from a post-pandemic slump.


Google India profit stays almost flat at ₹1,437 crore in FY25


Google India registered a nearly flat standalone profit of ₹1,436.9 crore in the financial year ended March 2025, as lower revenue and higher employee and tax expenses weighed on growth, according to a regulatory document shared by market intelligence firm Tofler.

The company logged a profit after tax of around ₹1,425 crore in the preceding financial year.

When contacted, a Google India spokesperson said that the financial numbers of 2025 were not comparable with those of 2024.

“Profit of Rs 1,425 crore for 2024 includes profit from the IT division. The IT division was demerged into a separate company (Google IT Services), so the 2025 GIPL profit numbers do not reflect IT division profit,” the spokesperson said.

The spokesperson further said that the net revenue for 2024 includes an adjustment (addition) of ₹229 crore, which pertains to the revenue of fiscal years 2016-17 to 2022-23 but was reflected in 2024 based on the BAPA signed with the Indian government.

A note in the company’s financial report said the company entered into a Bilateral Advance Pricing Agreement (BAPA) with the Central Board of Direct Taxes (CBDT) under Section 92CC of the Income Tax Act, 1961, covering transactions pertaining to the purchase of advertisement space and enterprise products from Google Asia Pacific (GAP) in March 2024.

Under the terms of the BAPA, the company has agreed to the arm’s length price in relation to the purchase of advertisement space and enterprise products from GAP for financial years 2016-17 to 2024-25.

“Pursuant to the BAPA in relation to the aforesaid transactions, the company has recognised the additional income of Rs 2,297 for 2016-17 to 2022-23 under the head revenue from operations (net) in its financial statement for the year ended March 31, 2024, and the additional income of Rs 2,297 has been offered to tax by the company,” the note said.

The revenue from operations of Google India declined by 3.2% to ₹5,340 crore during the reported fiscal from ₹5,518 crore in FY24.

Total revenue of Google India increased by 3.2% to ₹6,116 crore from ₹5,921 crore a year ago, on account of “other income” of around ₹776 crore.

According to the analysis done by Tofler, the net margin of Google India also declined to 23.49% from 24.06% a year ago.

“The company’s total expenses for the fiscal year were reported as Rs 4,136 crore,” Tofler said.

The company posted a 7.8% increase in employee benefit expense to about ₹2,146 crore in FY25 from ₹1,989 crore in the year-ago period.

The total tax expense of Google India during the reported fiscal increased by 22.6% to around ₹543 crore in FY25 from over ₹442 crore in FY24.

Published – January 31, 2026 12:42 pm IST


LinkedIn co-founder urges tech leaders to denounce Trump


Hoffman is the latest in a string of Silicon Valley figures to criticise the US president.




US judge signals Elon Musk’s xAI may lose lawsuit accusing Altman’s OpenAI of stealing trade secrets



A U.S. federal judge signaled on Friday she may dismiss a lawsuit by Elon Musk’s artificial intelligence startup xAI accusing Sam Altman’s rival OpenAI of stealing trade secrets to gain an unfair advantage in developing AI technology.

U.S. District Judge Rita Lin in San Francisco said her “tentative view” is to grant OpenAI’s motion to dismiss xAI’s lawsuit, ahead of oral arguments on February 3. She also said tentatively that xAI could amend its claims if she dismissed its case.

Lawyers for xAI and OpenAI did not immediately respond to requests for comment. Musk’s startup sued OpenAI in September, accusing it of hiring xAI employees away to obtain confidential information related to the AI chatbot Grok.

OpenAI, known for its ChatGPT chatbot, countered by accusing Musk of conducting a “campaign to harass a competitor with unfounded legal claims” because xAI could not keep up with ChatGPT.

In a four-page filing outlining her thoughts, Lin said Musk’s startup did not plausibly allege that OpenAI acquired or encouraged the theft of trade secrets, despite allegations that some former xAI employees downloaded source code before leaving.

Lin also said it was not plausible to infer from xAI’s complaint that OpenAI used xAI’s trade secrets, or that the former xAI employees used them on the job after joining OpenAI.

The judge may also dismiss an unfair competition claim, saying xAI’s poaching allegations “all focus on poaching in service of acquiring xAI’s trade secrets and do not identify any other reason why the hiring of those employees was anticompetitive.”

Lin asked xAI and OpenAI to address her tentative reasoning at the hearing. The lawsuit is part of a broader legal battle between Musk and OpenAI, which he co-founded and is also suing over its conversion to a for-profit company. Musk, the world’s richest person, is seeking as much as $134.5 billion in damages from OpenAI and Microsoft in that case. Jury selection is scheduled for April 27.


Apple to prioritise premium iPhone launches in 2026 amid memory crunch: Report



Apple is prioritising production and shipment of its three highest-end iPhone models for 2026 while delaying the rollout of its standard model due to a marketing strategy shift and supply-chain constraints, Nikkei Asia reported on Friday, citing four people with knowledge of the matter.

Reuters could not immediately verify the report. Apple did not immediately respond to a Reuters request for comment outside regular business hours.

The U.S. tech giant will focus on delivering its first-ever foldable iPhone and two non-folding models with upgraded cameras and larger displays for a flagship launch in the second half of 2026, while the standard iPhone 18 is now slated to ship in the first half of 2027, the report said.

The move is aimed at optimizing resources and maximizing revenue and profits from premium devices amid the rising cost of memory chips and materials, and at minimizing production risks tied to the more complex industrial techniques for Apple’s first foldable device, according to the report.

“Supply chain smoothness is one of the key challenges for this year, and the marketing strategy change also played a part in the decision (to prioritize premium models),” an executive at an iPhone supplier with direct knowledge of the plan told Nikkei Asia.

Apple on Thursday beat Wall Street estimates for quarterly revenue, driven by strong iPhone demand and a sharp rebound in China, with CEO Tim Cook telling Reuters that demand for the latest handsets was “staggering.”


Open-source AI models vulnerable to criminal misuse, researchers warn


Hackers and other criminals can easily commandeer computers operating open-source large language models outside the guardrails and constraints of the major artificial-intelligence platforms, creating security risks and vulnerabilities, researchers said on Thursday.

Hackers could target the computers running the LLMs and direct them to carry out spam operations, phishing content creation or disinformation campaigns, evading platform security protocols, the researchers said.

The research, carried out jointly by cybersecurity companies SentinelOne and Censys over the course of 293 days and shared exclusively with Reuters, offers a new window into the scale of potentially illicit use cases for thousands of open-source LLM deployments. These include hacking, hate speech and harassment, violent or gore content, personal data theft, scams or fraud, and in some cases child sexual abuse material, the researchers said.

While thousands of open-source LLM variants exist, a significant portion of the LLMs on the internet-accessible hosts are variants of Meta’s Llama, Google DeepMind’s Gemma, and others, according to the researchers. While some of the open-source models include guardrails, the researchers identified hundreds of instances where guardrails were explicitly removed.

AI industry conversations about security controls are “ignoring this kind of surplus capacity that is clearly being utilized for all kinds of different stuff, some of it legitimate, some obviously criminal,” said Juan Andres Guerrero-Saade, executive director for intelligence and security research at SentinelOne. Guerrero-Saade likened the situation to an “iceberg” that is not being properly accounted for across the industry and open-source community.

The research analysed publicly accessible deployments of open-source LLMs served through Ollama, a tool that allows people and organisations to run their own versions of various large language models.

The researchers were able to see system prompts, which are the instructions that dictate how the model behaves, in roughly a quarter of the LLMs they observed. Of those, they determined that 7.5% could potentially enable harmful activity.
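To illustrate why such deployments, and even their system prompts, are observable, the sketch below queries a single reachable Ollama host through its public /api/tags and /api/show endpoints. The host address is a placeholder, and this is a simplified illustration rather than the researchers’ actual methodology.

```python
import requests  # third-party HTTP library, assumed installed

# Placeholder address; 11434 is Ollama's default port, and exposed hosts
# answer these API calls without any authentication by default.
HOST = "http://203.0.113.10:11434"

# /api/tags lists every model the host serves.
models = requests.get(f"{HOST}/api/tags", timeout=10).json().get("models", [])

for m in models:
    # /api/show returns the model's Modelfile; a SYSTEM directive in it
    # holds the system prompt that dictates how the model behaves.
    info = requests.post(f"{HOST}/api/show", json={"model": m["name"]}, timeout=10).json()
    for line in info.get("modelfile", "").splitlines():
        if line.startswith("SYSTEM"):
            print(f"{m['name']}: {line}")
```

Repeated across the thousands of hosts that internet-wide scanning can surface, this is the kind of visibility the researchers describe.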

Roughly 30% of the hosts observed by the researchers are operating out of China, and about 20% in the U.S.

Rachel Adams, the CEO and founder of the Global Center on AI Governance, said in an email that once open models are released, responsibility for what happens next becomes shared across the ecosystem, including the originating labs.

“Labs are not responsible for every downstream misuse (which are hard to anticipate), but they retain an important duty of care to anticipate foreseeable harms, document risks, and provide mitigation tooling and guidance, particularly given uneven global enforcement capacity,” Adams said.

A spokesperson for Meta declined to respond to questions about developers’ responsibilities for addressing concerns around downstream abuse of open-source models and how concerns might be reported, but noted the company’s Llama Protection tools for Llama developers, and the company’s Meta Llama Responsible Use Guide.

Microsoft AI Red Team Lead Ram Shankar Siva Kumar said in an email that Microsoft believes open-source models “play an important role” in a variety of areas, but, “at the same time, we are clear-eyed that open models, like all transformative technologies, can be misused by adversaries if released without appropriate safeguards.”

Microsoft performs pre-release evaluations, including processes to assess “risks for internet-exposed, self-hosted, and tool-calling scenarios, where misuse can be high,” he said. The company also monitors for emerging threats and misuse patterns. “Ultimately, responsible open innovation requires shared commitment across creators, deployers, researchers, and security teams.”

Ollama did not respond to a request for comment. Alphabet’s Google and Anthropic did not respond to questions.

Published – January 31, 2026 10:44 am IST


Nvidia’s plan to invest up to $100 billion in OpenAI has stalled: Report



Nvidia’s plan to invest up to $100 billion in OpenAI to help it train and run its latest artificial-intelligence models has stalled after some inside the chip giant expressed doubts about the deal, the Wall Street Journal reported on Friday.

The chipmaker in September announced plans to invest up to $100 billion in OpenAI in a deal that would have given the ChatGPT maker the cash and access it needs to buy advanced chips that are key to maintaining its dominance in an increasingly competitive landscape.

The Journal, citing people familiar with the matter, said the companies are rethinking the future of their partnership, and the latest discussions include an equity investment of tens of billions of dollars as part of OpenAI’s current funding round.

Nvidia CEO Jensen Huang has privately emphasised to industry associates in recent months that the original $100 billion agreement was non-binding and not finalised, the report said.

Huang has also privately criticised what he has described as a lack of discipline in OpenAI’s business approach and expressed concern about the competition it faces from the likes of Alphabet’s Google and Anthropic, the WSJ added.

“We have been OpenAI’s preferred partner for the last 10 years. We look forward to continuing to work together,” an Nvidia spokesperson said in an emailed statement to Reuters.

OpenAI did not immediately respond to Reuters’ request for comment.

Big Tech companies and investors such as SoftBank Group Corp are racing to forge partnerships with OpenAI, which is spending heavily on data centres, betting that closer ties with the startup will give them a competitive edge in the AI race.

Amazon is in talks to invest tens of billions of dollars in OpenAI, and the figure could be as high as $50 billion, Reuters reported on Thursday.

OpenAI is looking to raise up to $100 billion in funding, valuing it at about $830 billion, Reuters has previously reported.
