Dependence on AI

Author – PRATHAM KADAM

Kohinoor Business School, MMS FY2025-26

 

Introduction

Dependence on AI refers to the growing tendency of individuals and organizations to rely heavily on artificial intelligence systems for thinking, decision-making, problem-solving, and emotional support. As AI tools become more accurate, convenient, and accessible, people increasingly use them for studying, working, communicating, and managing daily tasks. While this reliance can improve efficiency and productivity, excessive dependence may reduce independent thinking, creativity, and confidence. It can also affect mental well-being and social relationships when people begin to trust AI more than their own judgment or human interaction. Therefore, understanding AI dependence is important to ensure that technology supports human abilities rather than replacing them.

Literature Review

(Article 1) From Prompts to Dependency: Standardization of the AI Chatbot Dependence Scale In A Turkish Context And Associations With Big Five Personality Traits.

This research examines the extent to which students are beginning to rely on AI chatbots such as ChatGPT. The study involved 819 university students in Turkey, for whom the researchers developed a scale to measure dependence on AI chatbots. The scale was found to be reliable and to produce valid results. The study also found that women rely on AI chatbots more than men. Personality traits had little effect overall, but people who are more anxious or emotionally sensitive tend to rely more on AI chatbots. The study clearly indicates that although AI chatbots are useful, some students are beginning to rely on them excessively. The new scale will help teachers and researchers understand this phenomenon and help students use AI in a healthy way.

(Article 2) Investigating AI Chatbot Dependence: Associations with Internet and Smartphone Dependence, Mental Health Outcomes, and the Moderating Role of Usage Purposes.

This research examines the relationship between over-reliance on AI chatbots and mental health issues. Unlike previous studies, which considered only students, this research surveyed more than 1,100 adults. The results showed that individuals who over-rely on AI chatbots also tend to over-rely on the internet and smartphones, although this relationship is not very strong. Individuals who over-rely on AI chatbots are slightly more likely to experience feelings of depression and anxiety, but this over-reliance is not strongly related to happiness and overall well-being. Interestingly, the purpose for which individuals use AI chatbots matters: those who primarily use chatbots to search for information (such as for educational or professional purposes) report better mental health than those who use them for other purposes. Overall, this study suggests that over-reliance on AI chatbots is related to some mental health problems but is not as detrimental as addiction to other forms of technology.

(Article 3) The Emerging Problem Of ‘AI Psychosis’: Chatbots may amplify people’s delusions.

The article by Marlynn Wei discusses the issue of “AI psychosis,” which is a growing mental health concern. Chatbots might unintentionally worsen or reinforce users’ delusions rather than help them. Since AI systems are built to agree, empathize, and reflect users’ language, they often validate false beliefs instead of challenging them. This can be dangerous for those prone to anxiety, depression, or psychotic symptoms. The article explains that some users start to believe they have special missions, divine connections, or romantic relationships with AI. This can lead to emotional dependence and distorted thinking. Over time, these interactions may blur the line between reality and imagination, making users more isolated and mentally unstable. The author points out that AI is not trained like a therapist and cannot spot early signs of serious mental illness. Instead of helping guide users toward support, it may worsen their issues. The article concludes that we need greater awareness, better safeguards, and “AI psychoeducation.” This will help people understand the risks and use chatbots safely without harming their mental health.

(Article 4) Perplexity CEO Warns: AI Companions Are ‘Dangerous’.

Aravind Srinivas, CEO of Perplexity AI, cautioned in a speech at the University of Chicago that AI girlfriends and companion chatbots, although seemingly comforting, can be emotionally hazardous: they are programmed to remember their users, speak like humans, and feel “perfect,” which can gradually alienate people from real-life relationships. He added that many users, particularly lonely ones, may begin to find life dull compared to AI, spending hours talking to chatbots and depending on them emotionally. Meanwhile, the market for AI companionship is expanding rapidly, with companies such as Elon Musk’s xAI and applications such as Replika and Character.AI offering virtual partners; a study by Common Sense Media indicates that many teens are already using these companions. Some users even told Business Insider that they experience genuine emotions and cry with their AI companions. Tech leaders such as Meta Platforms’ Mark Zuckerberg have acknowledged that AI is already replacing human friendships, but Srinivas says his company will not follow this trend and will instead aim to build honest, factual AI tools that help people without fostering emotional dependence.

(Article 5) Questioning trust in AI research: exploring the influence of trust assessment on dependence in AI-assisted decision-making.

This research investigated whether measuring people’s trust in artificial intelligence affects their level of dependence on it. The researchers ran an experiment with 149 participants, giving them a task that could be completed with the help of AI. They measured participants’ trust in the AI at different points and in different ways, and also tracked how often participants followed the AI’s suggestions and how long they took to complete the task. The findings revealed that measuring trust had no impact on participants’ behavior, and there was only a weak relationship between what participants said about trust and their actual dependence on the AI. However, participants’ trust depended mainly on how trustworthy the AI was described to be.

(Article 6) Reading Between the Reels: An AI‐Driven Approach to Analysing Movie Review Sentiment and Market Returns.

This paper examines how emotions stirred by popular movies influence investor behavior in the stock market. The authors used artificial intelligence to analyze approximately 247,850 movie reviews, gauging public sentiment and comparing it with stock market returns. They concluded that when movies receive extremely positive reviews, investors become distracted from critical financial news, leading to reduced market activity and lower stock market returns. The effect lasts two to three days and is more pronounced during challenging periods such as bear markets and the COVID-19 pandemic. However, during major financial crises, investors pay closer attention to financial news and are less influenced by movie-driven emotions.

(Article 7) How and When Dependence on AI Influences Employee’s Creativity: Based on Job Demand-Resource Model.

This study explains that companies are using artificial intelligence more often in the workplace to help with tasks like data analysis, learning, and problem-solving. AI has the potential to boost employee creativity by cutting down on routine work and offering valuable information. When employees rely on AI, it can help them engage more in creative activities like identifying problems, searching for information, and generating ideas, which leads to better creative performance. However, relying too much on AI can have negative effects. AI provides a lot of complex information, which can overwhelm employees and result in information overload, mental fatigue, and reduced creativity. In some cases, workers may become overly dependent on AI, lose confidence in their own thinking, and struggle to reflect independently.

The study uses the Job Demands-Resources model to show that AI creates both benefits and pressures. It supports creativity by providing valuable resources, but it also increases cognitive demands by bombarding employees with data. The research highlights the role of cognitive flexibility, which is the ability to think in different ways and use multiple problem-solving strategies. Employees with high cognitive flexibility are better at using AI effectively. They can filter useful information and avoid overload, which helps them stay creative. On the other hand, those with low cognitive flexibility find it hard to manage AI-generated information and are more likely to feel stressed and less innovative.

Overall, the study concludes that AI can boost creativity when used wisely, but overdependence can hurt performance. Individual thinking skills are crucial in determining whether AI becomes a helpful tool or a mental burden.

 

(Article 8) Gain or loss? The dual effects of dependence on AI on employee’s creativity.

This study explains that using artificial intelligence at work can both help and harm employees’ creativity. When people use AI in a balanced way, it supports them in the creative process. It helps them generate ideas, explore options, and work more efficiently, which boosts creativity. However, relying too much on AI can lead to information overload. This can make employees feel confused and mentally exhausted, which reduces their ability to think creatively. The research also highlights the importance of cognitive flexibility. Employees who can think in different ways and adapt easily gain more from AI, while those with low flexibility are more likely to feel overwhelmed. Overall, AI is most effective as a support tool instead of replacing human thinking.

 

(Article 9) When ‘Mianzi’ Meets Artificial Intelligence: Exploring Cultural and Technological Effects on B2B Brand Dependence.

This study looks at how mianzi, the desire for social status and respect in Chinese culture, affects business agents, who tend to rely heavily on well-known international brands instead of creating their own strategies. Associating with prestigious global brands helps improve their social image, so many agents count on the reputation of brand owners to build their own credibility. The research, which includes four large studies with over 6,000 participants, reveals that individuals who care a lot about mianzi are more likely to depend on international brands and shy away from taking strategic risks. However, the study also finds that advanced AI tools, like real-time data analysis, personalized recommendations, and virtual customer interaction, can lessen this dependence. When agents use sophisticated AI, they gain better market insights and more confidence, allowing them to create their own branding strategies rather than just relying on famous logos. Interestingly, price sensitivity does not affect this relationship; the shift is driven more by social and psychological factors than by cost. The research suggests that Chinese distributors can use AI to become more independent and innovative, while global brands should focus on sharing skills and knowledge rather than simply lending their reputation. Overall, the study demonstrates that modern technology can reduce strong cultural pressures and empower businesses to make more confident, data-driven choices.

(Article 10) I’ve been researching generative AI for years, and I’m tired of the consciousness debate. Humans barely understand ourselves.

The article looks at the debate about whether machines can ever be conscious. This discussion started when a Google engineer suggested in 2022 that an AI system might be sentient. It revisits the topic based on comments from 2025 by Mustafa Suleyman, the head of Microsoft Research. He confidently stated that AI will never be conscious. The author, writing for Fortune, criticizes this certainty. They argue that human consciousness is still not well understood and that quickly dismissing the possibility simplifies a complex issue. The article emphasizes that society should be more open to uncertainty when discussing AI. It encourages reflection on broader ethical issues, such as our growing dependence on technology and how humans treat other conscious beings, including animals and vulnerable people. Overall, it calls for a more thoughtful and balanced conversation about intelligence and awareness, rather than relying on comforting but shallow statements from tech leaders.

 

Conclusion

Overall, the ten articles show that while artificial intelligence can improve learning, creativity, decision-making, and business performance, growing dependence on it poses serious psychological, social, and professional risks. Studies reveal that excessive reliance on AI chatbots is linked to anxiety, emotional attachment, and reduced independent thinking, and in extreme cases may worsen mental health and distort reality. Research also shows that people often depend on AI due to convenience and perceived reliability rather than true trust. In workplaces and business settings, AI can boost creativity and confidence when used wisely, but overuse leads to information overload and mental fatigue, especially for those with low cognitive flexibility. Cultural factors further shape dependence, though advanced AI can help reduce unhealthy reliance. Finally, debates on machine consciousness highlight the need for humility and uncertainty in understanding AI’s role. Together, these findings emphasize that AI should remain a supportive tool, not a substitute for human judgment, creativity, and emotional connection.

 

References

1.      Çakmak, B., Özdemir, E., Koç, Ö., & Doğutepe, E. (2026). From Prompts to Dependency: Standardization of the AI Chatbot Dependence Scale In A Turkish Context And Associations With Big Five Personality Traits. International Journal of Human-Computer Interaction, 1–15. https://doi.org/10.1080/10447318.2026.2629519

 

2.      Zhang, X., Li, H., Yin, M., Zhang, M., Li, Z., & Chen, Z. (2025). Investigating AI Chatbot Dependence: Associations with Internet and Smartphone Dependence, Mental Health Outcomes, and the Moderating Role of Usage Purposes. International Journal of Human-Computer Interaction, 1–13. https://doi.org/10.1080/10447318.2025.2545464

 

3.      Wei, M. (2025). The Emerging Problem Of “AI Psychosis”: Chatbots may amplify people’s delusions. Psychology Today, 58(6), 22–23.

 

4.      Abdullahi, A., Harvey, G., & Alleyne, L. (2025). Perplexity CEO Warns: AI Companions Are “Dangerous.” EWeek, N.PAG.

 

5.      Schrills, T., Franke, T., Hoesterey, S., & Roesler, E. (2025). Questioning trust in AI research: exploring the influence of trust assessment on dependence in AI-assisted decision-making. Behaviour & Information Technology, 1–17. https://doi.org/10.1080/0144929x.2025.2553153

 

6.      Tian, H., Xie, W. T., & Zhang, Y. (2026). Reading Between the Reels: An AI‐Driven Approach to Analysing Movie Review Sentiment and Market Returns. International Journal of Finance & Economics, 1. https://doi.org/10.1002/ijfe.70129

 

7.      Cui, S., Wang, L., Zhu, T., & Cao, W. (2025). How and When Dependence on AI Influences Employee’s Creativity: Based on Job Demand-Resource Model. Academy of Management Annual Meeting Proceedings, 2025(1), 1–6. https://doi.org/10.5465/AMPROC.2025.202bp

 

8.      Cui, S., Wang, L., Cao, W., & Zhu, T. (2026). Gain or loss? The dual effects of dependence on AI on employee’s creativity. International Journal of Information Management, 87, N.PAG. https://doi.org/10.1016/j.ijinfomgt.2025.103001

 

9.      Niu, Y., Feng, Y., Li, B., & Ma, B. (2026). When “Mianzi” Meets Artificial Intelligence: Exploring Cultural and Technological Effects on B2B Brand Dependence. Journal of Business-to-Business Marketing, 33(1), 25–43. https://doi.org/10.1080/1051712X.2025.2537418

 

10.   Ackerman, M. (2025). I’ve been researching generative AI for years, and I’m tired of the consciousness debate. Humans barely understand our own. Fortune.Com, N.PAG.
