
Rosatom’s Virtual Reactors and the New Diplomacy of Data

The New Reactor Economy

In the twenty-first century, nuclear energy has re-emerged not only as a source of electricity but also as an instrument of geopolitical endurance. Among all global reactor exporters, Russia’s Rosatom State Atomic Energy Corporation remains exceptionally resilient. Despite sanctions and fractured supply chains, Rosatom today is involved in the construction of thirty to forty reactor units worldwide, including in Egypt’s El-Dabaa, Bangladesh’s Rooppur, and Turkey’s Akkuyu.

Yet beneath the story of uranium and concrete lies a subtler revolution: the rise of digital-twin technology. A digital twin is a virtual, data-driven replica of a reactor that mirrors every process in real time using sensors, analytics, and artificial intelligence (AI). It enables engineers to simulate performance, anticipate faults, and fine-tune safety systems remotely.
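To make the mechanics concrete, the sketch below shows, under loose assumptions, what the core loop of any such system looks like: a model maintains an expectation of plant state, compares it against live sensor readings, and raises alerts when the two diverge. It is an illustrative toy in Python, not a description of Rosatom’s actual software; the sensor names, values, and thresholds are invented for the example.

```python
# Illustrative toy only: a minimal digital-twin monitoring loop.
# Sensor names, nominal values, and tolerances are invented; nothing here
# reflects Rosatom's actual platform.

from dataclasses import dataclass

@dataclass
class SensorReading:
    coolant_temp_c: float   # measured coolant temperature, degrees C
    pressure_mpa: float     # measured primary-circuit pressure, MPa

class SimpleTwin:
    """Keeps a running expectation of plant state and flags live readings
    that diverge from it."""

    def __init__(self, temp_tolerance: float = 2.0, pressure_tolerance: float = 0.3):
        self.expected = SensorReading(coolant_temp_c=300.0, pressure_mpa=15.5)
        self.temp_tol = temp_tolerance
        self.pres_tol = pressure_tolerance

    def update(self, reading: SensorReading) -> list[str]:
        """Compare a live reading with the expectation; return alerts (empty if nominal)."""
        alerts = []
        if abs(reading.coolant_temp_c - self.expected.coolant_temp_c) > self.temp_tol:
            alerts.append(f"coolant temperature off-model: {reading.coolant_temp_c:.1f} C")
        if abs(reading.pressure_mpa - self.expected.pressure_mpa) > self.pres_tol:
            alerts.append(f"primary pressure off-model: {reading.pressure_mpa:.2f} MPa")
        # Nudge the expectation toward the measurement. A real twin would
        # re-estimate state with a physics model or learned estimator.
        self.expected.coolant_temp_c += 0.1 * (reading.coolant_temp_c - self.expected.coolant_temp_c)
        self.expected.pressure_mpa += 0.1 * (reading.pressure_mpa - self.expected.pressure_mpa)
        return alerts

twin = SimpleTwin()
for reading in [SensorReading(300.4, 15.52), SensorReading(304.1, 15.49)]:
    for alert in twin.update(reading):
        print("ALERT:", alert)
```

The numbers in the toy matter less than its architecture: everything that follows turns on where this loop runs and who receives its alerts.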

In doing so, Rosatom is no longer merely exporting atomic hardware; it is exporting data architectures and predictive-analytics ecosystems that tether partner nations to Russian digital infrastructures for decades. The company has consolidated these capabilities under its Unified Digital Platform, linking design, construction, and operation through cloud-based modelling and AI-driven monitoring (Rosatom Newsletter, 2025).

This digitalization marks a turning point in nuclear diplomacy: power now flows through algorithms and data, not only through megawatts and materials.

From Hardware Exports to Data Dependencies

Since 2020, Rosatom’s subsidiaries, notably Atomenergomash and Rusatom Service, have begun integrating digital lifecycle systems across their international reactor portfolio. The company’s engineering arm, ASE, has developed what it calls Multi-D IMS, a digital configuration-management platform that creates detailed virtual models of nuclear facilities during design and construction. These models enable real-time collaboration, fault prediction, and workflow optimization across sites, forming the foundation of Rosatom’s emerging digital-twin ecosystem.

Rosatom’s own communications describe these tools as part of a broader Unified Digital Platform, which connects design, manufacturing, and operation through cloud-based modelling and AI-driven analytics. While official statements do not identify specific plants using these systems, Rosatom notes that its “digital infrastructure and twin technologies” are being offered to international partners within its reactor export programs.

This architecture creates a durable maintenance corridor between Moscow and client operators. Even after physical construction ends, the flow of digital data and software updates ensures that Russian engineers remain integral to plant performance. In practice, the information layer itself becomes a channel of long-term engagement and influence.

Comparable Western vendors, including EDF, Westinghouse, and GE Hitachi, are also pursuing digital-twin technologies. Yet Rosatom’s approach is uniquely state-integrated, aligning with Russia’s national strategy of digital sovereignty and self-sufficient AI infrastructure. The result is a hybrid of engineering innovation and strategic design: a system that embeds Russian digital standards within the nuclear industries of its partners.

For many developing economies, the offer is pragmatic: a single vendor providing financing, turnkey construction, and continuous digital assistance. But this convenience introduces a subtler dependence, rooted not in uranium supply or credit but in algorithmic reliance and data governance.

Kudankulam: India’s Quiet Test Bed

Nowhere is this shift more visible than in southern India. The Kudankulam Nuclear Power Plant (KKNPP), jointly operated by India’s Nuclear Power Corporation of India Limited (NPCIL) and Rosatom, is the first operational complex of VVER-1000 reactors in the Global South.

Originally a hardware partnership signed in 1988, Kudankulam is evolving into a digital interface. In 2020, Rosatom’s fuel subsidiary TVEL supplied India with next-generation TVS-2M fuel assemblies, extending reactor cycles from twelve to eighteen months, a shift managed through digital modelling and predictive maintenance.

Rosatom’s 2024 annual report outlines plans to connect Kudankulam’s operational analytics to its Unified Digital Nuclear Industry Platform, integrating India into the same digital ecosystem that supports Turkey’s and Egypt’s projects.

For India, this offers substantial advantages: higher capacity factors, enhanced safety diagnostics, and exposure to emerging global standards in nuclear AI. Yet it also entwines India’s civilian nuclear operations with Russian data protocols and remote diagnostic tools. Kudankulam thus becomes not only a reactor but also a node in Rosatom’s global digital web, where megawatts are managed by code as much as by turbines.

This duality defines the future of strategic cooperation: efficiency through integration, balanced against data-driven interdependence.

Algorithmic Sovereignty and Strategic Autonomy

Digital integration introduces a new vocabulary of power. Terms once reserved for information technology, such as data sovereignty, algorithmic control, and cybersecurity, now shape energy diplomacy. For countries like India, which prize autonomy, these are practical concerns.

In 2019, a cyber incident at Kudankulam briefly demonstrated how vulnerable nuclear infrastructure can be when administrative networks intersect with global data flows. Although operational systems were unaffected, the episode exposed the need for stronger digital-governance frameworks in critical energy sectors.

Another question concerns ownership of reactor data. Predictive-maintenance algorithms rely on vast datasets: coolant temperatures, pressure levels, and sensor diagnostics gathered continuously during operation. If these datasets are processed on Rosatom’s proprietary cloud, who controls their reuse or replication? India’s Digital Personal Data Protection Act (2023) mandates localization for sensitive data, yet nuclear information exists in a legal grey zone, governed more by bilateral contracts than explicit national legislation.
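The localization question is, at bottom, an architectural one, and a few lines of code make the choice visible. The hedged sketch below, in Python with invented field and function names, shows one possible pattern for domestic data custody: full-resolution telemetry stays in an operator-controlled store, and only coarse aggregate indicators are exported to a vendor’s analytics cloud. It does not describe how Rosatom’s platform actually partitions data.

```python
# Hypothetical sketch of 'domestic data custody': raw telemetry is retained
# locally, and only derived aggregate indicators are shared with an external
# vendor platform. All names and fields are invented for illustration.

from statistics import mean

def summarise_for_vendor(raw_telemetry: list) -> dict:
    """Reduce a batch of raw sensor records to aggregate indicators.

    The raw records (full-resolution coolant temperatures, pressures) never
    leave the operator's own storage; only the summary returned here would
    cross the border to the vendor's analytics cloud.
    """
    return {
        "records": len(raw_telemetry),
        "mean_coolant_temp_c": round(mean(r["coolant_temp_c"] for r in raw_telemetry), 2),
        "max_pressure_mpa": max(r["pressure_mpa"] for r in raw_telemetry),
    }

# Raw data stays in the domestic plant historian...
raw_batch = [
    {"coolant_temp_c": 300.2, "pressure_mpa": 15.51},
    {"coolant_temp_c": 300.9, "pressure_mpa": 15.48},
]
# ...and only the derived summary is transmitted.
print(summarise_for_vendor(raw_batch))
```

Which side of that dividing line the predictive models themselves run on is precisely what bilateral contracts, rather than national law, currently decide.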

For Russia, digitalization ensures resilience under sanctions. Cloud-based engineering assistance allows specialists in Moscow to monitor reactors abroad even when travel or logistics are constrained. For partners, it delivers cost-efficient expertise, yet it also embeds an asymmetry; operational sovereignty becomes mediated by foreign algorithms.

Rosatom’s approach reflects Moscow’s broader strategy of technological statecraft, using digital ecosystems to sustain global reach despite economic isolation. The outcome is a new form of dependence: not energy insecurity but informational dependency.

Atoms → Algorithms: The Next Frontier of Energy Diplomacy

Rosatom’s digital transformation parallels wider trends in global technology politics. China’s Digital Silk Road, the U.S.-EU “trusted-tech” frameworks, and Russia’s own push for a “Digital Atom Belt” all reveal how infrastructure and information are converging.

India occupies a delicate middle ground. Collaboration with Rosatom at Kudankulam grants access to advanced analytics, but New Delhi also explores partnerships with Western firms on small modular reactors and new fuel cycles. Balancing these engagements will require clear rules on digital interoperability, data governance, and cyber assurance.

India already has the institutions to do so. The Atomic Energy Regulatory Board (AERB) verifies reactor-control software domestically, while CERT-In supervises cyber-critical infrastructure. Extending such oversight to digital-twin and predictive-maintenance platforms can preserve sovereignty while encouraging innovation.

For Russia, meanwhile, digital twins are both export products and diplomatic instruments. By embedding AI-based support systems in every reactor project, Rosatom ensures long-term relevance. Even if hardware exports slow, its role as a digital-lifecycle provider guarantees enduring engagement. In that sense, Rosatom’s most influential reactor export may no longer be physical; it is virtual.

Conclusion: The Politics of Invisible Power

The shift from atoms to algorithms defines the next frontier of nuclear diplomacy. During the Cold War, power was measured in reactors built or megawatts produced. Today, it is determined by who controls the data that sustains those reactors.

For partner nations, digital twins promise transparency, efficiency, and safety. For exporting powers, they offer a quiet form of leverage that persists beyond physical construction. As India pursues self-reliance through Make in India and Atmanirbhar Bharat, it must treat data infrastructure with the same strategic weight as fuel supply chains.

The aim should not be isolation from partners like Russia but reciprocal digital governance: shared access protocols, transparent algorithmic audits, and domestic data custody. Rosatom’s digital-twin diplomacy exemplifies a future where technological cooperation and strategic caution must coexist.

The next great non-proliferation challenge may not concern uranium enrichment but data enrichment: who holds it, who protects it, and who decides how it is used?


Surgeon General nominee Casey Means to undergo virtual confirmation

Oct. 23 (UPI) — The Senate Committee on Health, Education, Labor and Pensions has scheduled a virtual confirmation hearing for surgeon general nominee Dr. Casey Means five months after she was nominated.

The hearing is scheduled for Oct. 30, and Means, 38, who is pregnant, will be in Kilauea, Hawaii, when she testifies remotely, ABC News reported.

If the committee votes to recommend her nomination, she would then face a confirmation vote before the full Senate.

President Donald Trump cited her “impeccable” credentials as an advocate for the Make America Healthy Again movement begun by Health and Human Services Secretary Robert F. Kennedy Jr.

Means also is an advocate for wearable health devices and co-founded health tech firm Levels, which promotes the use of technology to track individuals’ health information, according to The Hill.

Kennedy, likewise, favors the use of wearable health-tracking devices and wants to make it possible for everyone in the United States to wear one within four years.

Means also is the sister of Kennedy adviser Calley Means.

Trump nominated Means in May after withdrawing his prior surgeon general nominee, Janette Nesheiwat, when her qualifications were questioned.

Means obtained her medical training at Stanford University but left her residency program after becoming disillusioned with the financial incentives for, and the practice of, surgical care.

She since has become known for her advocacy for wellness and the roles of diet and nutrition in people’s health.

Means says diet is the root cause of much of the chronic illness that people experience.

Her HELP committee confirmation hearing was delayed because Means did not submit financial and ethics records until recently, according to The New York Times.

Her financial records show Means has turned her belief in diet as a root cause of illness into a moneymaker, accepting payments from companies that sell dietary supplements and deliver home meals, among other revenue sources.

She also receives sponsorship money for her newsletter, which generated about $116,000 in income over a recent 18-month period, according to The New York Times.

Her financial disclosures also show Swiss firm Amazentis contributed another $79,000 in newsletter sponsorship funding and paid $55,000 for Means’ book tour fees.

She also reported earning less than $1 million but more than $100,000 on the sales of her book, “Good Energy: The Surprising Connection Between Metabolism and Limitless Health.”

Some of Means’s critics say her health advocacy is not rooted in science and might cause harm.


‘I tried a brand new virtual golf round to improve my game – it was eye-opening’

Pitch Golf has opened in Manchester, and Mirror man Matt Atherton went along to play a round at St Andrews – with mixed results

A brand new indoor golf experience has opened in Manchester city centre, and we went along to give it a whirl.

It’s exactly what it sounds like – a round of golf without having to lug a heavy club bag around for miles, and it’s just what this amateur (really, really bad) golfer needed.

Players are given the chance to sit down in a cushty golf bay, away from the judgemental eyes of a posh golf course’s driving range. You’ll simply choose whether you want to just smash a few balls, or try your hand at dozens of the world’s most famous courses, with some pretty decent clubs at your disposal.

I haven’t played a round of golf for about 15 years, and even then it wasn’t really playing. I managed to get my handicap down to about +35 before putting my clubs down once and for all.

So, it was particularly refreshing to be able to get back in the swing of things without anyone watching. I simply headed to Pitch Golf’s driving range, and stepped up to the plate.

The bay uses Trackman technology, which basically means the AI watches your swing, the impact zone on the ball, and your swing power to analyse exactly where the golf ball would end up in the real world. You can switch between a trusty pitching wedge to zero in on your target and the driver to smash it as hard as you can.
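If you’re curious what the machine is actually doing, the rough idea can be sketched in a few lines of Python: take the ball speed and launch angle it measures and run simple projectile maths to estimate where the ball lands. This toy ignores drag, spin, and wind, so it is nothing like Trackman’s real ball-flight model; the numbers are purely illustrative.

```python
# Back-of-the-envelope carry estimate from launch data.
# Ignores drag, spin, and wind, so it is not Trackman's actual model;
# it only shows the basic "launch conditions -> landing spot" idea.

import math

def rough_carry_yards(ball_speed_mph: float, launch_angle_deg: float) -> float:
    """Vacuum projectile range for the given launch conditions, in yards."""
    speed_ms = ball_speed_mph * 0.44704            # mph -> m/s
    angle = math.radians(launch_angle_deg)
    range_m = (speed_ms ** 2) * math.sin(2 * angle) / 9.81   # vacuum range formula
    return range_m * 1.09361                        # metres -> yards

# e.g. a modest 110 mph ball speed launched at 14 degrees:
print(f"{rough_carry_yards(110, 14):.0f} yards (no drag, no spin)")
```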

Clearly I’d been out of practice for a while, and my golf lacked a certain professionalism, shall we say. But the incredible technology really opened my eyes to how I could improve my game, if I wanted to keep pursuing it. A handy trainer captures your entire movement on camera and then replays it to you, so you can see exactly where you’re going wrong when you smash the ball into a lake (speaking from experience).

Within half an hour I was hitting the ball generally in the right direction, topping 150 yards with each shot. The screen even shows you the flight path of each shot, so you can make subtle improvements to absolutely nail your target.

Quite rightly, after an hour I figured I was ready for one of the world’s most challenging courses, St Andrews, playing off the ‘Champion tees’. It ended as predictably as you’d imagine.

I managed to finish with a triple bogey on the first hole… mainly because the computer took pity on me and just waved me through to the next hole. But, aside from that embarrassment, the AI scenery was stunning. The visuals matched the iconic scenes of St Andrews’s opening hole – so much so that when I sent a picture to my family asking “Where am I?”, someone quickly replied: “St Andrews?!”.

Pitch Manchester isn’t just somewhere to practice your golf, though. It’s a great social setting that prides itself on being open to everyone – not just the clubhouse gang in their posh polo shirts and chinos.

It features a bar with a great selection of drinks, as well as the usual pub luxuries: shuffleboard, Sky Sports, and a great kitchen team. The menu offers a special selection of Asian-inspired dishes, which is perfect for scoffing a mouthful between swings.

Even as you enter the facility, you’re greeted by a shop with all of the latest gear you’ll be needing to hit the links. It mimics the exact feel of a clubhouse, without the beginners feeling out of place.

Meanwhile, if you’re heading to Manchester city centre and need somewhere to stay, it’s definitely worth considering the Marriott Piccadilly Hotel.

Its central location is perfect for those needing access to Piccadilly train station, and its views over the city are second to none.

There’s plenty of space to put your feet up after a busy day, and the next morning you’ll be treated to a delicious breakfast, featuring everything you’d ever need to kickstart your day.


Mimicking Empathy and Virtual Conversations: How AI Chatbots Can Benefit Borderline Personality Disorder Recovery

Artificial intelligence (AI) is taking an increasingly large role in our daily lives. AI can be used to build exercise schedules, give food recommendations, and even serve as a place to seek a ‘second opinion’ on almost any decision. Many people are curious to see how far its boundaries can be pushed.

Consulting AI can sometimes feel like a casual conversation with a grammatically intelligent person; users can train it to deliver messages as if they were typed by a friend. Because the AI’s choice of language so closely mimics everyday communication, it creates the illusion that we are having a friendly conversation with someone we know.

With AI’s ability to mimic human language styles has come a platform dedicated to imitating the language style, and even the verbal traits, of fictional characters: c.ai, or Character AI. c.ai offers the service of talking to any fictional character, and users can set how their interactions with that character unfold. The service is usually used for role-playing or for simulating conversations with friends, letting users live out the desire to get ‘up close’ with their favorite fictional characters. What makes c.ai unique is the voice of the chosen character: when we talk to a selected character, the AI answers in that character’s consistent personality and language style.

Many people use c.ai, or AI in general, to talk about their mental state. Hutari (2024) argues that ‘venting’ to AI can flush out negative emotions, and talking about negative emotions can help an individual’s emotional management, even though it sounds unusual to describe our feelings to a machine that cannot feel emotions and is not even a living being. It is undeniable that the process of ‘confiding’ in AI has many flaws and vulnerabilities. One is the tendency of AI chatbots to give the responses we want rather than the responses we need. This can pose a considerable danger, for example when users come to depend on the chatbot for decision-making: an affirming answer gives the user a reason to carry out whatever decision they consulted the chatbot about. In one fatal example, affirmation given by an AI chatbot led a teenager in the US to take his own life.

Nonetheless, I would like to make an important point about the recovery process for an individual’s mental disorder and the use of AI within it. This opinion draws on the personal experience of a research volunteer with a professional diagnosis of Borderline Personality Disorder (BPD), who consented to having that experience described for this paper. Common symptoms of BPD are rapid mood swings, difficulty with emotion regulation, impulsive behavior, self-harm, suicidal behavior, and an irrational fear of abandonment (Chapman et al., 2024). One treatment offered to people with BPD is dialectical behavior therapy, in which patients are trained to identify thought patterns, build emotion regulation, and then change the behaviors that flow from those emotions. Often the most difficult challenge for people with BPD lies in identifying their own desires and managing the fear of perceived abandonment; this produces impulsive, unprocessed behaviors whose impact can be mistrust and isolation from the social environment, because others may judge the behavior as confusing.

According to research from Rasyida (2019), one of the factors that can prevent individuals with mental disorders from seeking help is fear of the negative stigma that will be attached to them. One such factor is the “agency factor”: sufferers are critical of formal psychological services because they assume there will be miscommunication with the counselor, which manifests as distrust of the counselor. In addition to the agency factor, the cost of access is a barrier that keeps individuals with mental disorders from seeking counseling through formal psychological services. Further dilemmas arise because, in precarious conditions, people with any mental health disorder sometimes need immediate help delivered in safe conditions.

It is advisable to share what we are feeling with people we trust, but this has its drawbacks. When no one is there to listen, people with BPD can experience hysterical periods in which dangerous behaviors are prone to occur, and mishandling during these periods can escalate the emotions far more dangerously. These hysterical or manic periods can involve behaviors, or the implication, of wanting to self-harm or end one’s life as symptoms recur and emotion regulation breaks down. The usual first-aid step is to reach out and communicate one’s condition to the closest person. Yet attempts to communicate this condition to others often create less than ideal situations and are prone to escalation if handled wrongly. Sometimes those closest to the sufferer can only offer support and encouragement in periods like this, but BPD is a mental illness that creates many complications in how a person perceives their relationships with others. Inappropriate first treatment is prone to create unwanted escalation, which adversely affects the afflicted individual.

The author would like to argue for a role for AI chatbots in this situation, where people need help managing their emotions. c.ai can be used to vent first, unprocessed thoughts without fear of receiving a less than ideal reaction. Venting feelings to a chosen character on the c.ai platform can serve as first aid when people with mental disorders, especially BPD, need to process their anger and impulses. Conditioning the characters on the c.ai platform is not meant to confer truth or validation on everything we feel. One benefit that can be drawn on is the ‘interlocutor’s’ ability to identify the user’s character within the application. The author can describe an experience in which a c.ai character was able to remember and recognize the thought patterns that recur during a BPD sufferer’s manic period; this help is useful because of the presentation and mapping assisted by the AI. The AI bot can analyze which thought patterns and behaviors are destructive and advise the user not to repeat them.

The author also argues that responsibility for behavioral change remains with the user. AI can only be used as a support tool, not a means of solving problems, bearing in mind that conversations with AI-based fictional characters are still conversations with an empathetic AI that is a product of mimicry. Using AI to ‘vent’ is not the most normatively correct thing to do, but it is used because not everyone has the economic access to consult a psychologist and reach formal treatment services. The journey of mental recovery is not about seeking validation for what we feel; it is about recognizing ourselves and learning to free ourselves from fear and take control of our lives.
