
Scientists Finally Solve the Mystery Behind Rare COVID Vaccine Blood Clots


A rare but serious clotting disorder linked to certain COVID-19 vaccines and natural adenovirus infections has puzzled scientists for years. An international team of researchers from McMaster University (Canada), Flinders University (Australia), and Universitätsmedizin Greifswald (Germany) has identified why a very small number of people developed serious blood clots after receiving certain COVID-19 vaccines or […]


How OpenAI’s ChatGPT helped scientists crack a tedious physics problem


A team of physicists from various universities has teamed up with the artificial intelligence (AI) model GPT-5.2 to arrive at a new result in theoretical physics, OpenAI announced on February 13.

While the result itself is obscure, though valuable to physicists working on the topic, the methods that the team and the model used to arrive at it are turning heads.

Problem statement

Imagine you’re trying to predict what happens when particles crash into each other. In particle physics, scientists calculate these predictions using something called scattering amplitudes — essentially formulae that spit out the probability of different outcomes when particles collide.

Now, the traditional way to calculate these probabilities involves drawing lots of little diagrams called Feynman diagrams, which show all the possible ways the particles can interact. There are different types of diagrams, but the new work focused on the simplest kind, called tree diagrams. These branch out like actual trees: particles come in, meet at the vertices where they interact, and go out, but the paths never loop back on themselves.

Even though tree diagrams are the simplest type of Feynman diagram, as you add more particles to your collision, the number of different tree diagrams you need to draw and calculate grows terribly fast. For just a handful of particles, you might need to calculate thousands or millions of tree diagrams and add them all up. It can be exhausting.

But here’s the thing: when physicists finally finish all that work and add everything up, they often find the answer is surprisingly simple, like a messy equation with a million terms somehow canceling down to just a few. This finding was actually quite shocking when physicists first arrived at it in the 1980s. It was a sign that they were probably doing things the hard way and that there could be a clever shortcut they hadn’t found yet.
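The canonical example of this collapse, and very likely the 1980s result being alluded to here, is the Parke-Taylor formula for so-called maximally helicity violating (MHV) gluon amplitudes. Stripped of coupling and colour factors, the sum over all the tree diagrams for n gluons, exactly two of which (labelled i and j) have the opposite helicity to the rest, reduces in spinor-helicity notation to a single term:

$$A_n^{\text{tree}}\bigl(1^+,\dots,i^-,\dots,j^-,\dots,n^+\bigr) \;=\; \frac{\langle i\,j\rangle^4}{\langle 1\,2\rangle\,\langle 2\,3\rangle\cdots\langle n\,1\rangle}$$

Each angle bracket here is a simple number built from the momenta of two of the particles.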

The new paper focused on a type of particle collision involving gluons. Gluons are particles that act like glue, holding quarks together inside protons and neutrons. They’re the carriers of the strong force, one of nature’s four fundamental forces. When gluons interact with each other or with quarks, physicists need to calculate the scattering amplitudes to predict what will happen.

Gluons have a property called helicity, akin to the direction of their spin. Think of it like whether a football is spiraling clockwise or counterclockwise as it flies through the air. Physicists label these helicity states with plus or minus signs: a gluon can have positive helicity (spinning one way) or negative helicity (spinning the opposite way). When they’re calculating the scattering amplitudes for gluon collisions, they need to keep track of which gluons have which helicity.

For a long time, physicists believed certain combinations of spinning gluons would have zero amplitude, meaning these collisions can’t happen. Specifically, if you had one gluon spinning one way (call it minus) and all the others spinning the opposite way (plus), the standard reasoning suggested this configuration was forbidden.
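In the notation physicists use, with each gluon’s helicity written as a superscript, this textbook expectation reads, for generic momenta:

$$A_n^{\text{tree}}\bigl(1^-,2^+,3^+,\dots,n^+\bigr) \;=\; 0$$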

AI’s help

The new work, however, has found that this isn’t quite right. The single-minus tree amplitudes, where one gluon is minus and all the rest are plus, can actually arise under certain special conditions. The particles need to be arranged in what the authors have called a half-collinear configuration — all the particles moving nearly in the same direction, like arrows pointing along the same line. The effort eventually revealed a simple formula for these previously forbidden tree-level amplitudes.

According to the study’s authors, GPT-5.2 Pro first suggested the formula, and another AI model — an internal one that OpenAI built for this purpose — proved it to be correct. The human physicists then verified it was right by checking if it satisfied all sorts of mathematical consistency rules that any proper physics formula must obey.

The humans also provided explicit formulae for the same calculations when they involved three, four, and five gluons, where the formulae are relatively manageable. But at six gluons, the formula obtained using the old method already had 32 separate terms — a drastic increase in complexity even for such a small number of particles. The new formula, on the other hand, was a product of n – 2 factors, where n is the number of particles.
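Schematically, and using only the figures quoted above, the contrast is between an additive and a multiplicative structure:

$$A_6^{\text{old}} \;=\; t_1 + t_2 + \cdots + t_{32}, \qquad A_n^{\text{new}} \;\propto\; f_1\, f_2 \cdots f_{n-2}$$

where the $t_k$ and $f_k$ stand in for terms and factors whose exact forms are given only in the paper; at n = 6, the product has just four factors against the older representation’s 32 terms.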

“It happens frequently in this part of physics that expressions for some physical observables, calculated using textbook methods, look terribly complicated, but turn out to be very simple,” Institute for Advanced Study physics professor Nima Arkani-Hamed said in a release. “This is important because often simple formulae send us on a journey towards uncovering and understanding deep new structures, opening up new worlds of ideas where, amongst other things, the simplicity seen in the starting point is made obvious.”

The preprint paper of the work was uploaded to the arXiv repository on February 12.

“To me, finding a simple formula has always been fiddly, and also something that I have long felt might be automatable by computers. It looks like across a number of domains we are beginning to see this happen; the example in this paper seems especially well-suited to exploit the power of modern AI tools,” Arkani-Hamed added.

Making mistakes

If the new finding represents AI at its best in physics research, generating genuine insights that humans can rigorously verify, its success also raises a question: how reliably can AI contribute to theoretical physics? Other recent episodes suggest the answer is more complicated than the new work alone might indicate.

On November 19, 2025, Stephen Hsu, a theoretical physicist at Michigan State University, uploaded a paper that he said had been accepted for publication by the journal Physics Letters B; it was published in January 2026. In the paper, Hsu reported that large language models (LLMs) like GPT-5 could contribute directly to cutting-edge physics research rather than merely assist physicists.

He described a real research project in which he used AI models in two roles — to generate new ideas and calculations and to check the work for errors — in a bid to reduce the models’ tendency to produce plausible-sounding but incorrect results. GPT-5, he reported, independently proposed a novel research direction, applying the Tomonaga-Schwinger formalism to study modifications of quantum mechanics, and then helped derive complex equations to that end.

Hsu emphasised in the paper that while the model could manipulate sophisticated physics concepts and even suggest new research paths, it still made everything from simple calculation mistakes to more dangerous conceptual errors, leading him to write: “Research with an LLM might be compared to collaboration with a brilliant but unreliable human genius who is capable of deep insights but also of errors both simple and profound.”

When Hsu announced the paper on X.com on December 1, it was retweeted by, among others, OpenAI president Greg Brockman.

‘A cautionary tale’

A week on, IIT-Mandi theoretical physicist Nirmalya Kajuri published a post on his blog noting that one of the approaches the AI adopted in the paper “has been dead” since 1994, when Charles Torre and Madhavan Varadarajan proved it “simply does not work”. The result implied that “the starting point of this paper … is not well defined to begin with,” Kajuri added. Around the same time, University College London physicist Jonathan Oppenheim wrote that the question Hsu’s paper addressed had been answered 35 years ago by the physicists Nicolas Gisin and Joseph Polchinski.

In Oppenheim’s view, the AI lacked the wisdom to recognise that it was treading settled territory, and to stop and ask what new insight it could contribute.

Oppenheim also found upon closer inspection that the AI’s mathematical criteria didn’t actually test what it claimed to. Specifically, it caught problems with non-local modifications, which physicists already knew were problematic, but missed some real issues with non-linear modifications. In other words, the AI answered the wrong question while making it look correct. Thus, he warned, this is what AI-generated “slop” looks like: papers with apparently correct maths and sophisticated formalism that pass peer review but don’t actually advance knowledge.

“I’m pretty confident that Steve published this as an example of what an AI could do, rather than as an example of interesting physics,” Oppenheim wrote. “Which is what makes this a cautionary tale.”

Looping forward

On February 4, Oppenheim reported a different sort of effort, again to have an AI model, in this case Anthropic’s Claude, perform research-level physics. He had had his student Muhammad Sajjad spend a week working out a particular calculation involving path integrals with unusual features that differed from standard quantum field theory. When Oppenheim had Claude Opus 4.5 work on the same problem, it was done in five minutes but arrived at the wrong answer.

Interestingly, when he asked Claude to verify its work using Mathematica code, it went through multiple iterations of checking and correcting itself until its calculation matched the Mathematica output perfectly. The problem was that Claude had fed Mathematica the wrong expression to begin with, so it confidently converged on the incorrect answer.
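The failure mode is easy to state in code. Below is a minimal sketch, using the open-source SymPy library in place of Mathematica and a made-up integrand that has nothing to do with Oppenheim’s actual problem: if the ‘independent’ check is seeded with the same mis-transcribed expression, perfect agreement proves nothing.

```python
# Minimal sketch of a self-confirming check. The integrand is hypothetical
# and SymPy stands in for Mathematica; this is not Oppenheim's actual setup.
import sympy as sp

x = sp.symbols("x")

correct_expr = sp.exp(-x**2)   # suppose this is the true integrand...
wrong_expr = sp.exp(-x)        # ...but the model mis-transcribes it

# The true answer: the integral of exp(-x^2) over [0, oo) is sqrt(pi)/2.
true_answer = sp.integrate(correct_expr, (x, 0, sp.oo))

# The model "verifies" its result by re-running the same wrong input.
claimed = sp.integrate(wrong_expr, (x, 0, sp.oo))   # evaluates to 1
recheck = sp.integrate(wrong_expr, (x, 0, sp.oo))   # evaluates to 1 again

print(claimed == recheck)       # True: the check passes...
print(claimed == true_answer)   # False: ...but the answer is wrong
```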

Oppenheim then developed an unusual teaching method: he used Claude Code’s ‘skill file’ system to teach the AI to learn from its mistakes. (The skill file allows users to create persistent instructions that load automatically when the user mentions specific topics.) Then, after each teaching session, he would completely wipe Claude’s memory and ask it to perform the calculation fresh.

Over several iterations of what he called the “Groundhog Day loop” — referring to the 1993 Hollywood film whose protagonist lives the same day over and over and eventually finds love — the skill file accumulated the lessons needed to find the correct answer to the problem, including breaking calculations into steps, offloading work to symbolic maths software rather than trying to calculate by hand, spawning multiple agents to verify results, and so on. And because each instance of Claude started from a clean memory, it didn’t remember its predecessors’ failures.
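As a rough illustration of the control flow, and nothing more, here is a runnable toy version of that loop. None of the names below belong to Claude Code’s real interface; the stub attempt_calculation simply succeeds once every lesson has accumulated, to show how the skill file persists while each attempt starts from blank memory.

```python
# Toy "Groundhog Day loop": lessons persist in a skill file on disk,
# while every attempt starts from a clean slate. All names here are
# hypothetical stand-ins, not Claude Code's actual API.
from pathlib import Path

SKILL_FILE = Path("skill.md")

LESSONS = [
    "Break the calculation into small steps.",
    "Offload algebra to symbolic maths software.",
    "Spawn independent agents to cross-check results.",
]

def attempt_calculation(skills: str) -> bool:
    # Stub for a fresh session: it "succeeds" only once every
    # lesson is present in the skill file it was handed.
    return all(lesson in skills for lesson in LESSONS)

def groundhog_day_loop(max_iterations: int = 10) -> bool:
    SKILL_FILE.write_text("")  # begin with an empty skill file
    for i in range(1, max_iterations + 1):
        skills = SKILL_FILE.read_text()   # the only state that persists
        if attempt_calculation(skills):   # each attempt is memory-blank
            print(f"correct answer on iteration {i}")
            return True
        # Teach: persist the next missing lesson, then wipe and retry.
        missing = next(l for l in LESSONS if l not in skills)
        SKILL_FILE.write_text(skills + missing + "\n")
    return False

groundhog_day_loop()  # prints: correct answer on iteration 4
```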

Eventually, Oppenheim reported, one instance of Claude got the calculation right in five minutes, matching what had taken Muhammad Sajjad a week of meticulous work, while also not tripping itself up.

Flood of papers

As Kajuri wrote in his post, “AI has entered its graduate student arc. With careful prompting, it can work through computations and come up with useful ideas. But like most grad students, it still has some way to go before becoming a matured researcher. If you ask it to solve a nontrivial problem, it will give you slop. But with supervision and scrutiny, it can produce impressive results.”

“Right now, it almost certainly can’t write the whole research paper (at least if you want it to be correct and good), but it can help you get unstuck if you otherwise know what you’re doing, which you might call a sweet spot,” University of Texas theoretical computer scientist Scott Aaronson wrote after enlisting the help of GPT-5 for a problem he’d had in September 2025. “Who knows how long this state of affairs will last?”

That diagnosis notwithstanding, AI is being integrated into the scientific enterprise right now in many ways, more wholesale in some fields than in others. Perhaps the most visible is unscrupulous scientists using AI to generate bad papers — as Oppenheim and others have warned — further reducing the already sagging average quality of the research literature in order to further their own careers.

Peer-reviewers for some journals have also adopted AI themselves. Review work is voluntary yet labour-intensive and time-consuming, and many reviewers have taken the help of models for a range of tasks, against journals’ advice not to. But even there, scientists recently told The Hindu, it is important to have humans in the loop to evaluate “conceptual novelty and significance” and provide “constructive feedback that advances science”, among other things.

mukunth.v@thehindu.co.in


There are two ways to build skills using AI tools—Opt for this method


The rapid integration of artificial intelligence into the professional landscape has created a paradoxical promise: the ability to do more while knowing less. As tools like large language models become ubiquitous in fields ranging from software engineering to data analysis, a fundamental question emerges regarding the long-term cost of our new-found efficiency.

A recent study from researchers at Anthropic, titled ‘How AI Impacts Skill Formation’, provides a rigorous look into this dilemma, revealing that the way we interact with these tools creates two distinct paths for professional development.

The researchers divided a group of coders into two groups—one with access to AI tools and another without—and set them a 35-minute coding challenge. At the end, all participants were asked to take a test to check their Python programming proficiency.

Upon evaluation, the team found that those in the control group scored higher than those in the treatment pool, and a stark divide emerged between high-scoring and low-scoring interaction patterns. It shows that while AI can accelerate the completion of a task, it can simultaneously decelerate the mind if used as a substitute rather than a supplement, an idea I touched upon in my earlier column on building careers in the age of AI.

The treatment group path, identified as the low-scoring interaction pattern, is characterised by what researchers call cognitive offloading. In this scenario, the user treats the AI as a primary agent of execution rather than a collaborator. When faced with a complex task—such as learning a new programming library—the low-scoring participant focuses almost exclusively on the output.

They delegate the heavy lifting of code generation and debugging to the AI, moving through the assignment with deceptive speed. This group often finishes tasks fast, yet their comprehension of the underlying mechanics remains remarkably shallow. The researchers also pointed out that many in this group tended to spend more of their time interacting with the AI assistant, time that could otherwise have been used to learn a new skill.

By bypassing the iterative, often frustrating process of trial and error, they inadvertently skip the very neurological “struggle” required for deep learning. For these individuals, the AI tool serves as a high-tech crutch; they reach the finish line, but their internal “muscle memory” for the skill is never built, resulting in quiz scores that plummet when the tool is removed.

This contrasts sharply with the high-scoring group whose philosophical approach to AI was fundamentally different. They didn’t see AI as a replacement for their own logic but as a peer or senior.

Instead of asking the AI to “write the code,” they asked conceptual questions. They sought explanations for why a particular function was used or requested that the AI break down a generated snippet into its component parts. This group demonstrated a high level of cognitive engagement, maintaining an active mental model of the task at hand.

While they might take longer to complete a project than the pure delegators, their retention is significantly higher. By using AI to clarify concepts and validate their own reasoning, they successfully converted AI’s data into personal knowledge. For the high-scorer, the AI is a catalyst for mastery, not a shortcut around it.

The study’s findings suggest that the primary differentiator between these two paths is not the amount of manual labour performed, but the degree of mental involvement. Interestingly, the research noted that even when participants manually re-typed code instead of copy-pasting it, their learning did not necessarily improve if they weren’t mentally processing the “why” behind the syntax.

This highlights a critical trap in the modern workplace: the illusion of competence. It is possible to be highly productive in the short term by following the low-scoring path of delegation, but this leads to a hollowing out of expertise. In an era where AI can handle routine execution, the value of a human professional increasingly lies in their ability to supervise, architect, and troubleshoot—skills that are only developed through the high-scoring path of engaged learning.

The choice between the two ways of building skills rests on how we value our own expertise. The low-scoring path offers the siren song of immediate results and “vibe coding,” where one can produce functional work without a deep grasp of the foundations. The high-scoring path requires more discipline, demanding that we slow down to ask “how” and “why” even when a solution is just a prompt away. To thrive in an AI-augmented world, we must resist the urge to offload our thinking. By choosing the path of high-engagement, we ensure that as the tools around us get smarter, we are getting smarter alongside them.

Published – February 14, 2026 08:00 am IST


Nothing’s first flagship retail store in India goes live in Bengaluru


Nothing’s first flagship retail store in India goes live in Bengaluru
| Photo Credit: Haider Ali Khan

Nothing on Saturday (February 14, 2026) opened its flagship retail store in India, starting with Bengaluru. Nothing CEO Carl Pei and co-founder and India President Akis Evangelidis inaugurated the store in Indiranagar.

The London-based consumer technology company will sell Nothing and CMF products alongside official merchandise, including apparel, at the store.

Nothing’s Bengaluru store is its second retail store worldwide; the first is in Soho, London. The tech unicorn also plans two new flagship retail stores in New York City and Japan.

Nothing wants to grow its retail presence in India the way Apple has, and aims to reach more offline buyers who prefer a touch-and-feel experience.

Nothing’s Bengaluru store spans 5,032 square feet and hosts a range of experiential and service-led features, including a dedicated studio space for creators to shoot unboxing and hands-on content, customised Nothing products available exclusively at this location, and interactive elements such as vending machines, claw games, and conveyor-belt product displays.

“Opening our first flagship store is a major milestone for Nothing, cementing our position as one of the fastest-growing smartphone brands in India. More than just a point of transaction, we intentionally didn’t want to build a conventional retail store. Instead, we designed this space to offer our customers a unique, immersive experience. Our goal is to build understanding, trust, and lasting relationships with our community. The Indian market highly values hands-on engagement and design-led thinking, and this store will be the platform where we invite curiosity, clearly tell our brand story, and cultivate a hub for future launches, collaborations, and community-focused experiences,” said Akis Evangelidis, co-founder and India President, Nothing.


Major opportunities for AI in jobs and governance, says MeitY Secretary S. Krishnan


Ahead of the India AI Impact Summit 2026 in New Delhi, S. Krishnan, Secretary of the Ministry of Electronics and Information Technology (MeitY), discussed artificial intelligence (AI), India’s semiconductor ambitions, and MeitY’s role in digital governance in a wide-ranging conversation at The Hindu MIND event, moderated by Aroon Deep.

We are less than a week away from the India AI Impact Summit, which will witness participation of representatives from dozens of countries. Could you give us a quick rundown on where we are on AI from an Indian perspective?


We’ve taken an approach where we will try to provide the three aspects of infrastructure that AI needs: compute, datasets, and models. With government support, access to these is made a little easier. Then the focus is on seeing what we can do with the applications and solutions that people are able to develop using these resources.

Ultimately, there are two things that are important. One, that firms’ revenues will depend on how they deploy AI. Deployment is important and that’s what also delivers impact. In the Indian context, there are many areas where you can use AI to enhance productivity, efficiency, and effectiveness. Our start-ups can do well and these are things that we can offer also as products to the rest of the world.

We necessarily have to do it a little frugally given the kind of resources we have, which is again a reason why this model that we have adopted appeals to many countries in the poorer parts [of the world]. In a number of indicators relating to AI, which institutions like Stanford University and others measure, we seem to be doing relatively well. On the Vibrancy Index, we ranked third; on skill penetration and use of AI for enterprise solutions, we ranked second overall.

So, if you look at this kind of penetration and the kind of skilling, clearly we seem to have some advantage there that we need to build on. NITI Aayog has done a study that shows that yes, undoubtedly some jobs on the regular coding and programming side of IT/ITeS (Information Technology-enabled Services) will go away, but we can create many more jobs in terms of what else can happen.

What I see really as the big other opportunity is that there are many areas, including governance, where all of us would like to see substantial enhancement in quality and that probably is something that AI can offer. At the same time, we are aware of risks, dangers, and possible harms, which is why I think that when there is a need to regulate, we stand ready to regulate.


What does regulation look like practically?


If you have seen the report [of the committee] chaired by the Principal Scientific Adviser on AI Governance Guidelines, what it also states is to try and use existing laws as much as possible. If you take, for example, what we can do with the existing Information Technology Act, that’s one aspect of it. The other part is what we need to do in the copyright space. So, that is being dealt with in a particular way. Another part is how other data, including personal data, get used. So, the Digital Personal Data Protection Act, 2023 sort of fits in there.

Some of this regulation is already in place. Some of it requires tweaking, tightening, and that is what we keep attempting to do, including the new set of rules we put out [amending the IT Rules, 2021 to require labelling of synthetically generated content].


Those rules introduce labelling for AI-generated content and reduce takedown timelines for all content from 24-36 hours to two to three hours.


Labelling is in terms of a right to know. We all have a right to know if what we’re seeing is artificially generated. It’s a very minor requirement and technologically fairly easy to solve. There were certain issues that I think in the course of consultation [from October 2025 onwards, when the draft of these rules was published] they [stakeholders] did raise with us and we have addressed them. For instance, we exempted smartphone camera auto-enhancements. Likewise for special effects in films.

The change in time limits is fundamentally based on our understanding that there are two factors involved. When initially these time frames were imposed, they were much longer because the nature and kind of intermediaries we were dealing with those days were different and they had more time to respond.

The possible virality of a lot of these things is very quick. All the damage is done within a matter of 24 or 36 hours. Practically, our own experience has been that whenever any such takedowns have been required, most companies did not need more than an hour or two to comply.


On electronics manufacturing, how prepared are we in an era of weaponised supply chains?


Some of the story lies in the past, some of it in the future. We did produce electronics even up to the late 1990s. A lot of it went out after the Information Technology Agreement (ITA-1) of 1997 [which allowed IT hardware to be imported at minimum duties]. I am not for a moment saying that that was necessarily bad.

I think the IT revolution may not have taken place if you did not have access to computers and laptops and various other tools on the scale that we did, thanks to opening up. Now, you have reached a stage where I think it is important to also have that capacity at home domestically. We recognise that it is a global value chain, so it is not as if every part of it will be in India, but you have to have a reasonably substantial part of it to make sure that the value chain deepens.

So, we start in a sense at the end, with the finished product [such as smartphones], because that gives you scale and employment. Value addition in the country is just about 18-20% because companies mostly import components. However, this is changing with schemes like the Electronics Component Manufacturing Scheme encouraging technology transfer, similar to how China learned from the Apple ecosystem. This scheme is expected to significantly increase value addition to 35-40%, which is comparable to China’s 40-50%.

Semiconductors are more strategic and less about value; it is about what we are capable of doing. There is a Tamil saying, ‘Veralukketha veekam [Don’t bite off more than you can chew]’. So, the question is, ‘How do you chew what you can bite off and manage?’ The India Semiconductor Mission is designed on the basis of what we can actually chew. We are not at the leading edge. But we are in those segments where there is still considerable volume of consumption and will be for the foreseeable future. The support has to be extended over at least a decade or so, which is why the India Semiconductor Mission 2.0 was also announced in the Union Budget. So, we should move forward organically and then grow into the more leading edges.


There are reports of the compliance timeline for the Digital Personal Data Protection Act, 2023 reducing from 18 to 12 months. Why?


We have not shortened it. We have initiated a consultation with the industry. We received feedback that the 18-month period is a little too long and that there are various elements that firms are already ready to comply with. So, can we actually talk to the industry and see if we can reduce that time frame? So, that is a context in which we are speaking to the industry.


Varghese K. George: An international commentator likened the situation on AI now to what our awareness of COVID-19 was in February 2020. So, everybody was seeing some distant virus in China and then three weeks later everything in the world turned on its head. So, the comment being that that moment in terms of AI has already arrived. So, what is our understanding of where global AI research stands?


While much is said about agentic AI taking over, our view is that its practical utility remains uncertain. We believe that focusing on smaller, specialised AI tools – sector-specific models, vision models, quantitative models, and smaller language models – offers more immediate, practical relevance and greater benefit to society and humanity. The agentic vision may transpire, but it is still far off.

Jacob Koshy: How are IT firms discussing the AI wave? Their business model is built on a labour arbitrage that is now being threatened by this technology.


We have had conversations with many of the people in the IT industry. They say many of the coding and programming jobs are difficult to sustain because those can be done by an AI bot. But when you have to create an application, or create a solution, then you need to have better domain expertise, like in agriculture or manufacturing. The deployment of the application takes human resources. You have to understand which are the data sets you have to bring in, how you tailor those to suit a particular situation, how you adjust the way that the orchestration levels work, and multiple deployment-related tasks that need to be done. Their understanding is that they would still have multiple job opportunities. But that would require many of their present employees to get retrained and understand this differently. We have this programme called FutureSkills Prime, which is primarily designed around reskilling and retraining people. In colleges, the emphasis has been to teach this as a horizontal technology; we need to teach it across every course.

Suhasini Haidar: Two questions – are we looking to create an international body for AI ethics and safety? And on MeitY’s cyber law division: it is meant to stop unlawful speech and yet we see again and again people who are in government putting out AI videos inciting violence. Where do you think MeitY’s responsibility really lies?


This is the first time a country in the Global South is hosting the AI summit. So, in a sense, yes, India could possibly be a natural leader in some of the aspects of AI, not necessarily in AI governance or regulation – that is one part of it – but more in terms of even offering more affordable technologies and more affordable deployments. Hopefully, in the final declaration, something will come out. Now, whether there will be another international body like the Solar Alliance, I don’t really know. We may not do it as a regular body – we are also part of the Global Digital Compact of the UN and so on. So, we will work with the international community to see how this progresses. The number of cases where the government blocks information online is actually a fraction – it’s less than 0.1% of the total number of cases that social media entities actually take down as part of their community guidelines and so on. So, it is very small, but we have to act when things come up through this channel and we act on what material is brought before us.

G. Sampath: AI is a power-intensive sector with water and electricity needs. How are we looking at this from our climate commitments?


India has one of the largest grids in the world with high levels of renewable energy and load capacity. One of the issues with renewable power is that often there is no consumption at the time it gets generated because the loads are inadequate, and a lot of it just gets backed down. So, there is an understanding that there could be surplus power that could be used for this purpose. There are both air-cooled servers and water-cooled servers, and there are ways in which this can also be economised.

But we are fairly clear that there is nothing in terms of a relaxation that is given from any of the environmental norms or any of the other norms for a data centre. The only set of norms that have been relaxed are building norms; data centres don’t need much parking, etc. To that limited extent, it is a relaxation.

But in terms of water and electricity consumption, they will have to meet all the relevant norms, subject to availability, subject to what needs to be done. Many of these decisions ultimately are taken at the State government level. There has not been very open encouragement of data centres in all locations.


‘Online education is one of the biggest finds of the last decade’


Kadhambari S. Viswanathan, assistant vice-president, Vellore Institute of Technology, in conversation with L.V. Navaneeth, Chief Executive Officer, The Hindu Group, at The Hindu Tech Summit 2026 on Friday (February 13, 2026).
| Photo Credit: B. VELANKANNI RAJ

Online education is one of the biggest finds of the last decade, Kadhambari S. Viswanathan, assistant vice-president, Vellore Institute of Technology, said on Friday (February 13, 2026) at The Hindu Tech Summit 2026.

She was speaking at a session, titled ‘From Campus to Corporation: Building Industry-Ready Talent for an AI First World’, in conversation with L.V. Navaneeth, Chief Executive Officer, The Hindu Group, at the event hosted by The Hindu, presented by VIT, and co-presented by Sify Technologies.

“There is a lot of discussion on how online education will change and if it will entirely replace physical classroom-based education. But both can co-exist and supplement and complement each other,” she said.

Talking about digital literacy and digital wellbeing, Ms. Viswanathan underscored the need for teaching the younger generation how to use technology with caution. “They should be masters of technology and not the other way around. There is not much literacy and awareness about digital wellbeing among the people,” she added. 

There is evidence-based research on how screen time affects childhood and the mental wellbeing of teenagers, she said.

Asked about how modern technology transforms the daily life and learning experience of students, Ms. Viswanathan said teaching was conventional earlier. But it is no longer faculty-led; it is rapidly becoming a student-led experience. “Faculty will facilitate the learning process but will not be [entirely in] control of the learning process. It is changing because of information overload available online. But, of course, human intervention is always needed,” she added. 

Ms. Viswanathan said the way the skillsets are going to progress will be very different. “There is definitely a skillset mismatch, and it is due to the lack of practical exposure during the course of study. This can be solved only when there is proper communication between the industry and the academia,” she added. 

She said that one of the biggest challenges of generative AI is how it affects the communication skills of people.


‘Large enterprises have to unravel business processes to make them AI-first’


Venkatesan Vijayaraghavan, COO, Virtusa, in conversation with John Xavier, Tech Editor, The Hindu, at a session on ‘Managing the post Anthropic Plug-in Era’ at The Hindu Tech Summit 2026 on Friday (February 13, 2026).
| Photo Credit: M. SRINATH

Large enterprises have to unravel their business processes to make them AI-first and rewire them to be AI-ready, Virtusa COO Venkatesan Vijayaraghavan said on Friday (February 13, 2026) at The Hindu Tech Summit 2026.

In a conversation with John Xavier, Tech Editor, The Hindu, on the topic, ‘Managing the post Anthropic Plug-in Era’, Mr. Vijayaraghavan said: “I am assuring you all. We are definitely not in the coffin; we will surface back with much more to do. We will go into the box, but we will come out of the box because many more boxes are going to open.”

Pointing to small and medium businesses, he said, “They will start new products with new models, and they may throw away the existing ones.”

Speaking about a transition towards 20% humans and 80% agents, Mr. Vijayaraghavan said, “50% of the deals Virtusa participates in, we are talking about an agent-first approach in every field.”

Pointing to the need for being super-strong on the underlying principles of core engineering, Mr. Vijayaraghavan said, “We are getting certain initial success in that segment, but we need to take this to our universities. I can do coding today in Java, tomorrow in Python, day after in something else, but today my problem is that people come and ask for those skills in a human. Without AI-first, I am struggling. It almost feels like walking to Starbucks and asking for a hot ice cream.”

To a question on the recent release of Anthropic plug-ins and its impact, he said: “I am sure many people have been having sleepless nights over the last week as materials keep coming at a speed we cannot catch up with. We are now focussed on what are the services that we cannibalise, what are the services that will go on with an AI-first approach. And our portfolio companies offer us a very good push. A good variety of our portfolio companies are innovative and doing products, and we are learning a lot from them.”


Experts underscore the importance of extracting only relevant data


Data and AI experts take part in a session, titled ‘Data Privacy as a Pillar of Resilience: Building Trust in a Digital Age’, at The Hindu Tech Summit 2026 in Chennai on Friday
| Photo Credit: B. Velankanni Raj

In a world where data are extracted from individuals with or without informed consent, data and AI experts called for building awareness and underscored the importance of extracting only relevant data at The Hindu Tech Summit 2026, hosted by The Hindu, presented by VIT, and co-presented by Sify Technologies, in Chennai on Friday.

The session, ‘Data Privacy as a Pillar of Resilience: Building Trust in a Digital Age’, featured B. Jegadeeswaran, senior general manager-IT, TVS Automobile; A.N. Srinivasan, senior vice-president-IT, SRF Ltd.; Shivashanmugam Muthu, senior director, Capgemini Technology Services India Limited; and M. Sivasubramanian, VP and CDIO, JK Fenner. It was moderated by Nagaraj, VP-Data and Analytics, The Hindu.

Underscoring the need for ascertaining whether the data being collected are necessary, Mr. Srinivasan said, “They [those who collect data] need to tell us the purpose of data collection. I have been part of one of the State government’s digital meetings… Some of the apps [used by that government] have 25-30 fields. But it is trying to reduce the number to 10. There is an effort not to capture unwanted, unnecessary data. My view is that the awareness exists at the governmental level itself. We need to be aware of why a website is seeking a particular piece of data and the purpose. Members of the public should be conscious of the data that they are giving.”

Speaking about the working of the Digi Yatra application, which ensures paperless travel at airports, Mr. Shivashanmugam Muthu said, “The Digi Yatra application works on the basis of consent. The data are stored in the local device and encrypted. All the protocol standards are well maintained. We made it possible. Let the AI be hungry for data but gather the right data.”

Mr. Sivasubramanian argued that it would be difficult to ensure privacy in a world that extracts so much data. “Googles and Youtubes know about you better than what you know about yourself. Your photos, videos, eating habits, search history… Data is like oxygen.”

As for building trust among senior citizens who find it difficult to navigate mobile applications and hesitate to give data, Mr. Jegadeeswaran said, “It can only be done through creating awareness. Their sons and daughters have to play a role.”


Molecule Discovered To Fuel Skin Cancer and Outsmart the Immune System


A newly identified driver of melanoma growth not only promotes tumor blood supply but also helps tumors evade immune attack. A newly published study reports that a molecule involved in controlling gene activity also plays a central role in the growth of melanoma and in helping tumors escape detection by the immune system. Scientists at […]


Experts stress the need for continuous security awareness and AI-driven vigilance


Experts speak at the session titled ‘Beyond Compliance: Leveraging ISO 27001 for Integrated Data Security Management’, at The Hindu Tech Summit 2026, in Chennai
| Photo Credit: M. Srinath

Security awareness has to be a continuous activity and is everyone’s responsibility, Maharajan Suriyanarayan, Chief Information Security Officer and Vice-President-IT, Navitas Life Sciences, said in Chennai on Friday (February 13, 2026) at The Hindu Tech Summit 2026. The event is hosted by The Hindu, presented by VIT, and co-presented by Sify Technologies.

While speaking at the session titled ‘Beyond Compliance: Leveraging ISO 27001 for Integrated Data Security Management’, he said awareness about security has been created over a period of time by implementing ISO 27001. 

Speaking about ways to ensure that security evolves and the steps to be taken to have a security-aware culture within an organisation, he said: “Security awareness has to be created from top to bottom, and there is a need to be aware of customer data and ways to secure them. Different kinds of programs are essential. New threats are coming, and knowledge about such threats needs to be given, from end users to the top management.”

Culture is not a practice but more of a disciplined approach, Ram P., Executive Vice-President and Chief Information Officer, Virtusa Corporation, said. “We have also enabled AI to monitor employee behaviour to see if there are any aberrations,” he added. 

Balakumar M.N., Head, IT, Ucal Limited, said there is need to have a continuous assessment of security-related aspects as well. “We should assess, reassess, and document the process in such a way that we know the root causes and learn from it … due to volatile changes, be it business changes, introduction of new policies, or changes within the organisation. It has to be a cycle. Learn, unlearn, and relearn,” he added. 

Addressing a question on what differentiates a certified organisation in becoming a secure one, when there is a rapidly evolving threat landscape such as AI, Sivaramakrishnan N., Senior Vice-President, Information Security and Chief Information Security Officer, M2P, said AI has been bringing about rapid changes. “Now, everyone wants to try the Anthropic plugin. Whatever access you give, control the blast radius,” he said. While one can be allowed to dabble with a new plugin, there should be control on how many get access to it, whether it is approved by their head; the nature and sensitivity of the data has to be monitored and it should be a timed activity as well, he added. 

Mr. Ram said security is always a balancing act. “We have deployed AI to track major incidents. We simulate them. We don’t wait for an incident to happen. That way, we are protected. It has been a humongous challenge to keep up with [the developments of AI], to understand the AI narrative of the organisation, to upskill ourselves,” he added. 

Suresh Vijayaraghavan, CTO, The Hindu, moderated the session. 
