
Digital Slavery or Freedom? — Has Turkey Long Since Awakened to the Reality of Artificial Intelligence?


Artificial intelligence (AI), with its existing capabilities and the new abilities it will acquire, is appearing in ever more application areas. It offers suggestions, provides feedback, and directly or indirectly prompts us to act in every aspect of our lives. From health diagnoses to financial decisions, from education to logistics systems, this technology solves business problems across a wide range of sectors, touches our lives, and undeniably offers revolutionary opportunities. For example, it gives a warehouse manager a summary of which operators are assigned which tasks and assists operators in executing daily work orders. For a retail company, it identifies which products sell best in which stores and can make automated replenishment decisions to prevent lost sales. In this way, it makes an organization more competitive within its system and facilitates efficient management. But what happens when this technology is left unchecked? Ethical collapses, data exploitation, social inequalities, manipulation, and much more…


Smart systems and invisible chains

Worldwide, AI regulations still lack clarity and consensus. Some countries encourage innovation, while others focus on the risks and adopt a restrictive approach. Essentially, all of this reflects countries' differing perspectives on AI. Considering Türkiye's situation in this context, I believe it is at a critical juncture. Given its cultural, economic, and institutional uniqueness, I am of the opinion that directly applying global models may not be the right approach.


This article aims to provide a cautionary resource for policymakers and organizations by defining Türkiye's unique problem in the context of AI regulation, based on a global overview. A lack of regulation will erode trust in AI, lead to data leaks and security issues, and delay its widespread adoption within organizations. Thus, AI could become a national threat rather than an opportunity, because we will not have fully benefited from the leverage of this technology and will not have received a sufficient share of the $15 trillion economy it will create.

Global AI Regulations: Lessons and Contradictions


The world is rapidly taking action in the race to dominate AI, but unfortunately, inconsistencies and uncertainties prevail. As far as I can tell, each country is building a different model according to its own economic, cultural, and political realities. To give some examples:

  • The European Union's AI Act is being implemented in stages after entering into force on August 1, 2024. Prohibited applications have been in effect since February 2025, and general-purpose models since August 2025. The full framework for high-risk systems will apply from August 2, 2026. The EU's approach to artificial intelligence is strict, risk-based, and prohibitive. Severe penalties are foreseen for non-compliance (35 million euros or 7% of global turnover, whichever is higher).

  • In the US, however, the lack of federal coordination continues. With the Trump-era trend of deregulation, states are leading the way: Colorado's comprehensive AI law will go into effect on February 1, 2026, while states like California will tighten AI transparency and safety regulations in 2026. The US approach exhibits a more liberal and fragmented structure.


  • China, however, is not relinquishing its state-centric control. Amendments to its Cybersecurity Law will take effect on January 1, 2026, increasing regulatory oversight while supporting AI innovation. An algorithm registration system is in place for generative AI, and all major models are subject to state oversight.

  • Looking at other important developments, while bill No. 2338/2023, approved by Brazil's Senate, awaits approval in Parliament, Singapore is proceeding with sectoral guidelines rather than binding legislation. The Monetary Authority of Singapore's (MAS) AI risk management guidelines will become mandatory in the financial sector in 2026.


Now, with so much progress in the field of AI worldwide, how should we summarize the general trend?


From what I've observed, everyone seems to be running in a different direction: the EU in one direction, the US in another, and China in yet another...

These models attempt to balance ethics, safety, and transparency while fostering innovation. While such large economies are moving forward in this way, the issue of accessibility still arises for low-income countries.


Türkiye's Unique Problem: Cultural and Institutional Differentiation


Now I will share my ideas on how Türkiye can differentiate itself with its own unique national perspective. Naturally, Türkiye carefully follows and draws inspiration from global trends in AI regulation. But there is one point we must be careful about: local realities! I believe this requires a fundamental differentiation.


The "Law on Encouraging AI Development and Building Trust" is planned to come into effect in January 2026. As a continuation of the national strategy 2021–2025, the 2026 program views AI as a tool for state capacity building and strategic autonomy.


While AI adoption among the general public in Turkey has reached 60%, corporate adoption is only 7.5%.

This information highlights the need for government support among organizations. Overall, I believe this will unfortunately cause companies to fall behind in the competitive landscape and hinder the development of a skilled workforce.

I can say that Turkey needs a specific AI regulation, one stemming from our country's unique cultural and behavioral layers. What do I mean? On the customer side, consumer-rights awareness is relatively low, and mechanisms for seeking redress are slow and bureaucratic. Especially among older people and groups with low AI and digital literacy, AI-driven manipulation (e.g., personalized pricing) can be accepted without being noticed. Organizations' use of AI is shaped by a "launch first, then adapt" mentality, yet they remain hesitant about widespread adoption. Businesses, SMEs, and startups focused on rapid growth in this area may push ethical boundaries under performance pressure. Legal and compliance departments are inadequate due to a lack of technological literacy; in most firms they are non-existent, and even where they exist, AI is not on their agenda! It is also clearly evident that digital maturity and data infrastructure in organizations are quite weak. This situation unfortunately limits AI's use and leaves us unable to benefit from it fully.

I have emphasized the need for regulation in the AI field in Türkiye for reasons I have identified as requiring a thorough evaluation. Now, I will explain why a copy-paste EU regulation will not work in Türkiye, based on my findings. I will detail three key points of differentiation.

1. Asymmetry in Organizational Maturity


Businesses in the EU have been working with data for many years and have developed deep knowledge and a data-management culture around how to use and protect it. In Germany, for example, a typical organization is over 90% compliant with GDPR. It's clear that companies have grown up with a data protection culture for decades, and compliance departments have over 15 years of experience. This is why organizations see and embrace regulations not as a "cost" but as a matter of "customer trust."


In contrast, the situation in Turkey is quite different. The level of compliance and awareness regarding the Personal Data Protection Law (KVKK) among institutions is generally considered low. As highlighted in the KVKK's 2024 Annual Activity Information Note, even with intensive awareness and information campaigns, the current level of awareness remains insufficient. Furthermore, we know that while corporate AI usage is 7.5%, individual usage is 60%. It's also clear that most businesses operate on the principle of "market first, then compliance." Unfortunately, the fact that compliance departments, where they exist at all, are young and under-resourced indicates that this culture has not yet matured.

Considering these two facts, I can emphasize the criticality of the issue. Why? The EU's threat of a €300,000 fine deters German companies because they have the compliance infrastructure. Turkish businesses, however, might think, "The risk of being caught is low, and the fine will be eroded by inflation, so I don't need to comply!" and ignore it. What I mean is: a purely penalty-focused model… I don't know, maybe it works partially, but it won't be enough. It seems to me that this approach alone won't work in Turkey. The example I gave about penalties could be multiplied across different areas, but essentially, this needs to be viewed as a system: regulations should combine a trio of incentives, training, and strict supervision. In other words, regulations should be designed as a whole set of actions that will create a culture and an ecosystem.


2. Consumer Behavior Dynamics


In EU countries such as France, consumers quickly report GDPR violations to the CNIL (Commission Nationale de l'Informatique et des Libertés). The culture of seeking redress is strong, digital literacy is relatively high, and processes are relatively fast. This allows for early detection of AI-driven manipulation and swift action with deterrent penalties.


On the other hand, consumer complaint mechanisms in Turkey are bureaucratic and slow, and a significant portion of the elderly population has low digital literacy. According to the We Are Social Digital 2025 Report, 78.1% of the population lives in urban areas and 21.9% in rural areas, and approximately 10 million people in rural areas are still offline due to weak infrastructure, economic conditions, and a lack of digital literacy.

These differences mean that while the same AI technologies are balanced by rapid oversight in the EU, in Turkey they can lead to manipulation that goes unnoticed, especially among the elderly, those with low levels of education, and those in rural areas. Therefore, if regulatory design ignores these cultural and structural realities, it may face effectiveness problems. As all this information makes clear, the EU assumes "the consumer protects themselves," but that assumption seems invalid in Turkey. The state must provide proactive protection, especially for the vulnerable groups I have described. Let's not forget that situations like this are possible: elderly customers automatically receiving steep bill increases from AI-based systems. To prevent this, human consent should be mandatory, and so on. I could give many more examples, but I think the point is clear.

3. Economic Dependence and Data Sovereignty

In the EU, global tech giants like Google, Meta, and OpenAI are opening data centers or offering data residency options to comply with GDPR. Examples include OpenAI's data storage in Europe and Google Cloud's EU regions. Given the enormous size of the EU market— with a population of approximately 450 million and a GDP of around $19–20 trillion —companies are willing to accept the high compliance costs and avoid abandoning the market.


The situation in Turkey is quite different. While the Turkish AI market is estimated at around $2-3 billion in 2025, the global market is estimated at $244-757 billion. This corresponds to a 1-2% share. Companies like OpenAI, Google, or Anthropic do not open dedicated data centers in Turkey; all AI services come from overseas cloud providers. Domestic large language model (LLM) development is limited, and data-processing services are largely imported. This leads to the accumulation of Turkish citizens' data on foreign servers and jeopardizes data sovereignty.

The EU's GDPR restrictions on data transfer and pressure for data localization are effective because companies cannot abandon this huge market. Turkey, however, does not have the same "threat" power due to its market size. Strict restrictions that can be imposed through regulation may limit or cut off the services of institutions. Thus, the economy may suffer. Therefore, an incentive-based approach... I'm not entirely sure, but I think this direction would be wiser. At least for a start. I believe that we need to direct institutions towards local solutions, protect data sovereignty, and encourage innovation through mechanisms such as tax breaks for companies using on-premise (closed-loop) AI, and priority in public tenders, all while encouraging domestic LLM investments.

I believe these three fundamental differences will render a "copy-paste" implementation of the EU AI Act ineffective in Turkey. While the EU's strict regulations are built on strong consumer awareness and enormous market power, Turkey's realities do not support this model. In the next section, let's examine the potential impacts with examples I've designed.

Warning Examples: From Theory to Reality

To ensure these risks don't remain abstract, I will provide illustrative examples as much as possible. This will allow us to concretize how unregulated AI could exploit our cultural and institutional weaknesses in Turkey. As an example, I will consider AI-powered sales systems (in sectors such as telecommunications, e-commerce/retail, and healthcare) creating manipulation through real-time customer analysis. Because performance pressure is high, compliance culture is weak, and consumer-rights awareness is low in Turkey, these systems could lead to unethical manipulation. In each scenario I present, I will examine how leaving AI unregulated can lead to individual harm, loss of data sovereignty, and societal inequality. Then I will examine which regulatory interventions could be effective in these scenarios.


Example 1: Silent Packet Enslavement (Telecommunications Sector)

Let's imagine 72-year-old retired Uncle Mehmet, living alone in Konya. He calls his telecommunications company to complain about slow internet. The cloud-based AI sales-support system in the customer representative's headset (a service offered by a US company) instantly analyzes Uncle Mehmet's profile:


“AI analyzes the following data: Age 72, low digital literacy (only uses WhatsApp), stable income (pension), past behavior (same package for 8 years, never complained).”

Imagine the AI suggests the following information to the representative conducting the interview:


“The customer doesn’t understand the technology and doesn’t object. Upgrade their current 100 TL package to a 400 TL premium fiber + digital TV + international calling package. Emphasize that it’s ‘fast and new technology,’ and get them to commit to a 24-month contract.”

The representative, following an AI scenario, can sell Uncle Mehmet a package full of unnecessary services, even though he only uses WhatsApp. While his previous bill was 100 TL/month, the new bill becomes 400 TL/month. He doesn't use any of the extra services, yet the sale is made at four times the current price!


From another perspective, let's imagine the same AI company also provides services to the three major operators in Turkey. Data from these operators could be used to create a "manipulation map of elderly Turkish consumers." For example: in Anatolia, the 65+ segment is 87% sensitive to the words "fiber" and "safe," only 23% notice bill increases, and only 12% complain. This information could be combined across operators to create a collective behavioral map. Six months later, Uncle Mehmet might be unhappy with his high bill and consider switching operators, but the new operator's AI already knows his profile, because it is served by the same company. The cycle continues. The extent of the individual economic damage is clear!


Example 2: Hidden Pricing Discrimination (E-commerce/Retail Sector)

Let's consider a profile like Zeynep, a 28-year-old software engineer. She goes to a technology store in Istanbul. The store's cloud-based AI system instantly creates a profile. Using facial recognition, it recognizes Zeynep from a previous visit, identifies eight visits to competing stores in the last three months using mobile location data (purchased from a third-party data broker), performs social media analysis (an Instagram post saying "I want to buy a new laptop"), and uses cross-platform data (she searched for a laptop on an e-commerce site, added it to her cart, but didn't buy it).


The system creates the following profile for Zeynep: age 28, high income (software engineer), high purchase urgency, compares competitors, moderate price sensitivity. Let's assume the system defines its manipulation strategy as applying pressure through "limited stock."

Zeynep is looking at MacBooks. The store's digital price tags show different prices for each customer. For Zeynep, the MacBook Pro M3 is 45,000 TL (actual price 42,000 TL), with the message, "Last 2! 10% discount today only: 40,500 TL." For Ahmet, the student next to her, the same product is 38,000 TL, with the message, "Student discount: 35,000 TL." Zeynep buys it for 40,500 TL and is happy, thinking she got a discount. In reality, she paid 5,500 TL more than Ahmet.


Let's assume that the AI company serves not only this store but also other firms in this sector. All of Zeynep's shopping behaviors are combined. The system creates the following profile:


"Apple ecosystem users are premium product-focused, have low price sensitivity, and make impulsive purchases."

A week later, Zeynep goes to another store. The AI at the new store (provided by the same company) recognizes Zeynep from the previous store and makes the following suggestion:


“Customer Zeynep bought a MacBook last week. She added AirPods to her cart but didn't pick them up. Today, recommend AirPods Pro Max. Emphasize that they are 'compatible with MacBook'."

Three months later, the AI company's servers are hacked. Among the data sold on the dark web is Zeynep's profile: full name, age, profession, income estimate, address, shopping habits, and social media account. The scammers create a fake credit card. Zeynep is then presented with a fake shopping invoice for a large sum of money.

If the use of biometric data is unregulated and cross-platform data sharing is free… (I could list them here, but I'm tired, I think the point is clear). Encountering these situations is inevitable!


Example 3: Overtreatment Recommendation (Health Consulting / Private Hospitals)


Let's imagine 55-year-old Aunt Ayşe consults a private hospital's online appointment system regarding kidney stones. The cloud-based AI system uses past medical data, demographic information, and voice tone analysis to make the following recommendation to the doctor:


“The patient doesn't understand the technical details and is less likely to complain. Instead of simple lithotripsy, suggest expensive laser surgery + a premium package (check-up, dietitian). Emphasize that it's a 'more definitive solution, appropriate for your age.'"

Aunt Ayşe is being directed to an unnecessary surgery package for a problem that could actually be solved with medication and a simple procedure. When data from other private hospital chains using the same artificial intelligence is combined, patterns of "fear-based sales" in the middle-aged female segment can be learned. Post-operative complications develop. Aunt Ayşe goes into debt, but due to the complexity of complaint mechanisms, she cannot seek redress. Unfortunately, the pressure to increase turnover in the private sector and the tendency of the elderly/rural population to trust these practices facilitate this manipulation.


These examples demonstrate how leaving artificial intelligence unchecked leads to individual (unnecessary debt, health risks, overpayments), societal (increased inequality), and national (loss of data sovereignty) harms. Now, I would like to detail the comprehensive regulatory points that could be effective in these scenarios. I would like to emphasize that these are interventions inspired by the EU AI Act but designed with a specific approach to Türkiye (cultural context, incentive-based).


Türkiye-Specific Regulatory Framework: A Balanced Approach

I believe the three examples I gave demonstrate the multifaceted nature of AI risks. Much more complex situations may arise. I want to emphasize that I've approached this simply. Türkiye needs a balanced framework, not a one-dimensional one. The regulatory interventions I proposed were shaped by drawing inspiration from the EU AI Act and adapting it to Turkish realities. Of course, this is just the beginning. There are many more points. Writing them all down would fill a book, but I believe I've provided the basic framework.

1. On-Premise Imperative and Data Sovereignty

Layer 1: Prohibited Cloud Use (Critical Sectors): Cloud-based AI should be completely prohibited for operations involving sensitive and personal data. For example, on-premises deployment (servers on the organization's own premises) should be mandatory in the banking, investment, insurance, public, and healthcare sectors.

Layer 2: Conditional Cloud (Low Risk): Conditional cloud usage may be permitted in specified sectors such as retail and telecommunications, through data anonymization and a contract with the provider (conditions such as x-day retention and non-use in model training).

Layer 3: On-Premise Incentives: For closed-loop models running on servers in Turkey, incentives such as an X% tax reduction, R&D grants, and public-procurement points should be available in all sectors. National large language models (LLMs) should be developed under the coordination of TÜBİTAK (such as FinansGPT-TR and SağlıkGPT-TR). Hosting open-source models in Turkey should be encouraged.

2. Protection of Vulnerable Groups and the Human-in-the-Loop Requirement

For vulnerable customer groups, if, for example, a price increase exceeds a certain level, the AI recommendation should not be applied automatically; human approval should be required. To give an example: if a 72-year-old man named Mehmet is offered a 400 TL package, another authorized person should call him and ask, "Are you sure you want to switch from 100 TL to 400 TL?" If Mehmet says no, the transaction should be cancelled.
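The human-in-the-loop rule described above can be sketched as a simple gate. This is a minimal illustration, not an existing system: the `Customer` type, the 65-year age cutoff, and the 20% increase threshold are all assumptions chosen for the example.

```python
from dataclasses import dataclass

# Assumed policy values for illustration only:
VULNERABLE_AGE = 65        # age at which a customer counts as vulnerable
MAX_AUTO_INCREASE = 0.20   # price increases above 20% need human approval

@dataclass
class Customer:
    age: int
    current_price: float   # current monthly bill, TL

def requires_human_approval(customer: Customer, offered_price: float) -> bool:
    """Return True if an AI-suggested offer must be confirmed by a human."""
    increase = (offered_price - customer.current_price) / customer.current_price
    return customer.age >= VULNERABLE_AGE and increase > MAX_AUTO_INCREASE

# Mehmet's case from the text: 100 TL -> 400 TL is a 300% increase,
# so the system must route the offer to a human instead of applying it.
print(requires_human_approval(Customer(age=72, current_price=100.0), 400.0))
```

In a real system the gate would sit between the recommendation engine and the billing system; the point of the sketch is only that the check is mechanically trivial to enforce once the thresholds are set by regulation.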

3. Restrictions on Biometric Data and Cross-Platform Data Sharing

Facial Recognition Prohibition (Without Explicit Consent): The following information should be provided at the store/hospital entrance: “We use facial recognition technology in our store to provide you with better service. Do you agree? You can say no.” If consent is not given, the system should be disabled. The customer should be treated anonymously.

Cross-Platform Data Sharing Prohibition: Each institution must use its own on-premise system. For example, in the telecommunications sector, institution A must use its own AI (on Turkish servers), institution B its own, and institution C its own. This prevents data from merging and maintains data silos. It also prevents the creation of a "collective profile of Turkish consumers" from data sent to the same foreign provider.

4. Monitoring Manipulation Tactics and Prohibition of Price Discrimination

Prohibited tactics such as false urgency, fear-based selling, or misleading discounts should be defined in a list, and when the AI suggests these tactics, the system should be automatically blocked and alerts should be sent to the relevant government regulatory agencies.
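A tactic blocklist like the one just described could start as a simple phrase screen over AI-generated suggestions. The tactic categories, example phrases, and blocking behavior below are illustrative assumptions; a real deployment would need far more robust detection than keyword matching.

```python
# Illustrative blocklist: categories and phrases are assumptions drawn from
# the article's example scenarios, not a real regulatory list.
BANNED_TACTICS = {
    "false_urgency": ["last 2", "today only", "limited stock"],
    "fear_based": ["more definitive solution", "appropriate for your age"],
}

def screen_suggestion(text: str) -> list[str]:
    """Return the banned tactic categories detected in an AI suggestion."""
    lowered = text.lower()
    return [
        tactic
        for tactic, phrases in BANNED_TACTICS.items()
        if any(phrase in lowered for phrase in phrases)
    ]

# The pricing message shown to Zeynep in Example 2 would be caught:
suggestion = "Last 2! 10% discount today only: 40,500 TL."
flags = screen_suggestion(suggestion)
if flags:
    # A real system would block the suggestion here and notify the regulator.
    print("BLOCKED:", flags)
```

The design point is that screening happens before the suggestion reaches the sales representative, so the block and the regulator alert can be automatic, as the text proposes.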

5. Prompt Data Minimization and Transparency

Automatic Filtering is Mandatory: Personal identifiers must be automatically removed from prompts sent to the AI. For example, instead of "Ahmet Yılmaz, TC: 12345678901, Ankara Çankaya, salary 45,000 TL…", anonymized data such as "Customer: 40-45 years old, metropolitan area, income segment 3…" should be used.
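The filtering rule above can be sketched in a few lines. Everything here is an assumption for illustration: the 5-year age bucketing, the income-segmentation formula, and the regex for 11-digit national IDs. A production filter would need named-entity recognition and far stricter rules.

```python
import re

def age_bucket(age: int) -> str:
    """Map an exact age to a coarse 5-year bucket (assumed bucketing rule)."""
    low = (age // 5) * 5
    return f"{low}-{low + 5} years old"

def anonymize_prompt(name: str, national_id: str, city: str,
                     age: int, salary: int) -> str:
    """Build an anonymized prompt; name, national_id, and city are
    deliberately discarded so they never reach the model."""
    income_segment = min(salary // 20000 + 1, 5)  # assumed segmentation rule
    return f"Customer: {age_bucket(age)}, metropolitan area, income segment {income_segment}"

def strip_national_id(text: str) -> str:
    """Redact anything shaped like an 11-digit Turkish national ID (TC)."""
    return re.sub(r"\b\d{11}\b", "[REDACTED-ID]", text)

# The article's example: the raw identifiers are dropped, only coarse
# segments survive in the outgoing prompt.
print(anonymize_prompt("Ahmet Yılmaz", "12345678901", "Ankara", 43, 45000))
```

With these assumed rules, the example customer (age 43, salary 45,000 TL) maps to "40-45 years old, income segment 3", matching the anonymized form given in the text.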

User Information Required: At the beginning of each interaction, the following information must be provided: “This consultation is being conducted using an AI-powered assistant. You can choose to continue only with a human representative. Do you agree?” If the customer says “no,” the AI must be deactivated.

6. Periodic Re-certification and Continuous Auditing

Model Drift Risk: AI systems are constantly learning. What is ethical today may learn to be harmful six months later.

Mandatory Re-certification: Independent audits, model bias testing, vulnerable segment exploitation control, and price increase analyses must be conducted every 6 months.

Emergency Response Authority: Regulatory bodies should be able to immediately shut down the system if they detect harmful behavior.
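The six-month re-certification audit could include a simple drift check, sketched below as a comparison of the share of aggressive price-increase offers between two audit periods. The 20% flag threshold, the 5-point tolerance, and the data shape are all assumed values for illustration.

```python
def high_increase_rate(offers: list[tuple[float, float]]) -> float:
    """Fraction of (old_price, new_price) offers where the new price
    exceeds the old by more than 20% (assumed flag threshold)."""
    flagged = sum(1 for old, new in offers if new > old * 1.2)
    return flagged / len(offers)

def drift_alert(baseline: list[tuple[float, float]],
                current: list[tuple[float, float]],
                tolerance: float = 0.05) -> bool:
    """True if the flagged-offer rate grew beyond tolerance since the
    last certification, suggesting the model has drifted toward
    more aggressive pricing and needs re-auditing."""
    return high_increase_rate(current) - high_increase_rate(baseline) > tolerance

# Toy data: offers at certification time vs. six months later.
baseline = [(100, 110), (200, 210), (100, 105), (150, 160)]
current  = [(100, 400), (200, 300), (100, 105), (150, 400)]
print(drift_alert(baseline, current))  # the model now pushes far larger increases
```

A real audit would of course look at many more signals (bias tests, vulnerable-segment targeting), but the sketch shows that drift toward harmful behavior can be detected from logged offer data alone.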

7. Indemnity Fund and Data Breach Insurance

Mandatory Insurance: Every company using AI must carry "data breach insurance." Minimum coverage should be determined per customer.

Automatic Compensation: In the event of manipulation/data breach, the customer should be able to receive compensation quickly without going through a lengthy legal process.

I believe that these seven intervention points, if integrated with regulation, would largely prevent the damage in the example scenarios. I would like to emphasize that these regulations can be improved to include other dimensions such as environmental and energy needs (I haven't even touched on that yet, perhaps in another article).

Conclusion: Before the Window of Opportunity Closes

Let me be clear, Türkiye is currently facing a choice. In the 19th century, countries that failed to industrialize became colonies. In the 20th century, those who didn't invest in technology remained dependent. Now the question is: who will become a digital colony in the 21st century?

The answer is simple: Those who cannot protect their own data. Those who cannot build their own systems. Those who cannot set their own rules.

Right now, your data and mine are accumulating on servers in other countries, and we have no control over them. Simply copying the EU model won't work. We have to play a different game: incentivize first, establish a local system, then regulate. Otherwise, it will be regulation on paper only.

"In case of emergency, break glass" is useless after a fire has started. Right now there's smoke. No fire yet! But it's approaching. As Atatürk said:

"The power we need is present in the noble blood flowing in our veins."

We have power. We have knowledge. We have manpower. So let's put our will into it and move forward.

The clock is ticking.

Dr. Şükrü İmre

© 2026 Şükrü Orcun İmre. All rights reserved.