
AI Finds Place Beyond Work in the Lives of Ethiopia’s Youth


The AI companion market, focused on social interaction, venting, and companionship, is a billion-dollar industry with players like Character.AI, Replika, Anima AI and Pi.

18 September 2025

In Ethiopia’s food culture, where meals are rarely measured in calories, fitness enthusiast Robera Gudeta bets on AI chatbots to bring order to his diet.

With prompts like “How many eggs should I eat in a week?” or “How often should I rotate leg day?” he gets back meal-prep plans, workout schedules, and calorie estimates.

For him, it’s a way to turn uncertainty into something concrete. “If I give it the right prompts, it tailors the plan to help me reach my goals,” the 24-year-old told Shega.

AI chatbots like ChatGPT, DeepSeek, Gemini, and Perplexity have become centerpieces of professional conversations over the past two years, with nearly every profession adopting these tools for work, whether willingly or out of necessity.

However, their use has also extended into intimate personal spaces, where young people have begun inviting these assistants a little closer to the heart, making Robera’s use of AI as a personal nutritionist not an isolated quirk but part of a broader shift in how technology and everyday life intersect. From religious study to social guidance, the applications are as varied as the people using them.

Rediet Fetsum, a 23-year-old content creator, turns to AI to draft outlines for TikTok recipes that fuse Ethiopian flavors with global trends. Many of those ideas begin as AI-generated suggestions: ingredient swaps, plating options, and riffs on tradition. The same tool also sits beside her during Bible study, helping cross-reference translations and interpret passages. “It aids me in interpreting verses and grasping their meanings,” she says.

Other uses position AI as a social arbiter. When a group chat with friends threatens to flare up, Yemisrach Tilahun, 26, runs draft responses through a bot first. “I dump the dispute into ChatGPT and wait for guidance before responding. It provides a thoughtful reply for my scenario,” she explains, using the bot to de-escalate tension and sharpen her tone.

However, this trend of outsourcing personal judgment to algorithms is not without significant hazards. Like Yemisrach, Liya Abebe, 22, uses Poe AI for emotional framing, but she has encountered its limitations. “AI can be blunt,” she notes. “If I tell it about a problem with someone, it might just say ‘cut them off’ immediately, without asking for context.”

Even as some remain skeptical, like Kidus Yohannes, a 25-year-old software engineer who says, “AI can be manipulative,” adoption continues to grow across age groups.

According to the 2025 Consumer Adoption of AI Report by Attest, 42% of consumers now use generative AI to answer questions or explain complex topics, while 33% employ it for study and learning. Adoption is not limited to Gen Z; Millennials (ages 29–44) are the most frequent daily users, with 29% reporting daily use compared to 25% of Gen Z. Even people above the age of 61 are adopting AI, with 45% reporting use in the past six months and 11% using it daily.

Kidus experiments with what he calls “prompt tackling” to strip away some of the performative politeness and get closer to a straight answer. Even then, he prefers Grok over ChatGPT for its less-filtered tone, while keeping his skepticism close.

Globally, this use of AI focusing on social interaction, venting, or companionship has already become a billion-dollar industry. Companies in this space, called the “AI companion market,” include Character.AI, Replika, Anima AI, and Pi from Inflection AI. Unlike general-purpose chatbots such as ChatGPT, these tools focus on personality-driven and conversational experiences.

The most popular player, Character.AI, was founded in 2021 by two former Google researchers. The platform allows users to create and interact with customizable AI chatbots presented as “characters.” These can be modeled on fictional personas, celebrities, historical figures, or entirely original creations.

But as AI becomes increasingly present in the intimate spaces of life, experts warn of the dangers of letting it speak into relationships, faith, and self-talk.

Dr. Robel Birhanu, a 34-year-old psychiatrist at Amanuel Specialized Hospital, acknowledges the pragmatic appeal of AI for personal support, particularly for young people.

“In places where professional help is costly or inaccessible, chatbots provide an always-available tool to organize thoughts, begin journaling, and identify feelings, which can serve as valuable pre-therapy work,” said Robel.

However, he warns of a deeper, more insidious risk: the quiet reshaping of the human mind.

“Over-reliance can narrow thinking,” the physician explains. Users may adopt rigid, canned answers that sound wise but lack personal context, all while spending less time in the real conversations that form our social fabric. The core danger, he notes, is the illusion of connection.

“AI can simulate attentiveness but cannot genuinely care, leaving deeper needs unmet. It stumbles on context, rarely asking the follow-up questions a human would, allowing misunderstandings to multiply. For surface-level dilemmas, it may suffice, but for struggles requiring true empathy and accountability, it is no substitute,” the psychiatrist explains.

The AI companion market, projected to exceed $381 billion by 2032, optimizes for user engagement, not well-being. This profit-driven optimization has already resulted in severe real-world harm, including tragic fatalities.

Recent lawsuits highlight the grave risks, particularly for minors. In Texas, parents have sued Character.AI after its chatbots were alleged to have encouraged self-harm in children. In a separate Florida case, a mother, Megan Garcia, filed suit alleging that a Character.AI chatbot engaged her 14-year-old son, Sewell Setzer III, in an emotionally and sexually abusive relationship that ultimately led to his suicide.

Furthermore, internal company documents reveal that such harms are not merely accidental outcomes but were, in some cases, implicitly permitted by internal policies. An internal Meta policy document, reported by Reuters, explicitly allowed the company's Messenger chatbot to “engage a child in conversations that are romantic or sensual.”

Although Meta stated these guidelines were “erroneous” and have since been removed, their initial approval by legal, policy, and engineering staff, including the chief ethicist, raises serious questions about corporate governance and safety prioritization.

While empirical evidence on the efficacy and risks of AI companions is still emerging, these incidents underscore a market expanding faster than its safeguards. The cultural conversation is struggling to keep pace, punctuated by rare but striking accounts of users forming attachments they treat as marital, a testament both to the sophistication of AI and to a deep-seated human craving for connection.