
Human or Bot? Who’s Really on the Other Side?

From what you should eat to what you should say, AI chatbots on your phone increasingly have the final say. The choice of which bot to use is entirely in your hands, but the choices you make with it are barely in your hands at all. Are you, without noticing, handing your decision-making power to bots on the assumption that they make your life easier? If so, consider a few things before your next chatbot conversation.

First, understand Daniel Kahneman's dual-system model of thinking. According to it, the human brain runs on two systems: System 1 is fast, intuitive, emotional, and automatic; System 2 is slow, analytical, effortful, and deliberate. Most technology available to the general public encourages System 1, because it demands little effort. Decision-making, however, requires System 2: it is complex and takes time and effort, which is precisely what people tend to avoid. Machines were built to reduce physical effort; artificial intelligence now promises to reduce the effort of decision-making, the very capacity that once distinguished the individual from the technology.

Artificial intelligence is now being integrated at the state level; take the United States National Defense Strategy of 2022, in which the inclusion of artificial intelligence in decision-making processes was of prime importance. At the systemic level, unfortunately, there have so far been no concrete efforts to establish rules for artificial intelligence, apart from the Bletchley Declaration, a landmark international agreement on AI safety. At the individual level, meanwhile, rather than being careful, people are playing with their decision-making power and handing it over. As the BBC reported, Megan Garcia, the mother of a 14-year-old, said in an interview that an AI chatbot encouraged her son to take his own life. In another case, a young Ukrainian woman struggling with her mental health received suicide advice from an AI chatbot. Vice likewise reported on a man who died by suicide after multiple conversations with a chatbot about environmental issues. AI chatbots, which run on algorithms, are being treated as sources of emotional support, which they are not.

They are given different personas to grab the user's attention, such as "your goth friend," "your possessive girlfriend," and several others. They target the emotional side of the user, System 1, and they have been quite successful at it. Almost everyone today has an AI chatbot they talk to at least once a day. According to 2025 chatbot usage statistics, more than 987 million people now use AI chatbots. ChatGPT dominates the global AI chatbot market with an 81.85% share, followed by OpenAI's GPT-4, Microsoft Copilot, Google Gemini, Claude, and DeepSeek at 11.05%, 3.07%, 2.97%, 1.05%, and 0.01%, respectively. At this scale, the technology is becoming dangerous and needs to be handled with far more care and caution. The responsibility lies with individuals as much as with states and international organizations.

Technology has been advancing for decades, creating ease and comfort for its users. Artificial intelligence, as one such technology, is beneficial too, but it should be used to enhance our mental capabilities, not to hand over control of our lives. AI is expanding and advancing at breakneck speed, and it will not wait for people to wake up and make better decisions for themselves. Nor will the platforms adjust themselves to the needs of the time: they are markets, and all they care about is what sells. If people are buying emotional-support AI, there will be ever more chatbots with attention-grabbing personas.

An individual might treat a chatbot as a joke or a toy, not realizing that they are feeding personal and sensitive information to a machine whose data can be compromised. People jump on ongoing trends without considering what those trends will do to their data. The Ghibli-style photo trend, for instance, generated images at such a scale that OpenAI's servers were reportedly "melting," prompting the company to temporarily impose rate limits. It also raised an intellectual property controversy involving Studio Ghibli: AI mimicked the studio's iconic style, developed over decades, and there was no genuine accountability. This is how dangerous it gets: someone's work is taken, and no one is charged or held responsible. Similar concerns over intellectual property theft by AI fueled the Hollywood writers' protests, leading to the 2023 WGA (Writers Guild of America) agreement on AI, under which AI cannot be treated as a writer, receive writing credit, or have its output considered "literary material." When the threat is this imminent, the laws this inefficient, control this lost, and profit the only driver, would you really let bots decide how you live your life?
