AI Map — Part 1: The Key Is Not Yours: Closed-Source AI Models

 

This article is the first in a three-part series. The second part will cover open-source server-based models, while the third part will address models that you can download directly to your device and use without an internet connection.

Introduction: Artificial Intelligence Is No Longer an Option

Until a few years ago, artificial intelligence was a concept followed only by technology enthusiasts. Today, the picture is strikingly different: more than 1.35 billion people worldwide actively use artificial intelligence tools, which corresponds to approximately 16% of the global population. So, which artificial intelligence tools are all these people using, what do they need in order to use them, and how do the tools differ from one another?
This guide answers precisely that question.
When people hear “artificial intelligence,” many think only of ChatGPT. However, that is like judging the entire animal kingdom by a single elephant. Today, there are dozens of different artificial intelligence models; each serves a different purpose, is developed by a different company, and can be used under different conditions. Some work only with an internet connection, some can be downloaded and run directly on your phone, and some can be reached through different interfaces with an API key.
In this guide, we explore the artificial intelligence ecosystem in three main branches: closed-source online models, open-source server-based models, and open-source local models. Each branch is further divided based on models, usage purposes, and access methods. My goal is not to lose you in this forest, but rather to clearly show you which tree bears which fruit.

What Are Closed-Source Online Artificial Intelligences?

Closed-source means that how the model works, what data it was trained on, and its internal architecture belong solely to the company that developed it. You only interact with the results. No setup is required to access these models; a web browser or mobile app is sufficient.
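Beyond the web and mobile interfaces, most of these providers also expose their models through an HTTP API, which is what powers the third-party apps mentioned earlier. As a rough sketch, not a definitive recipe, the request usually looks the same everywhere: a bearer token (your API key) in the headers and a JSON body naming a model and a list of messages. The key, model name, and endpoint below are illustrative placeholders; the exact details vary by provider, so check their documentation.

```python
import json

# A placeholder key; real keys are issued by the provider and should never be shared.
API_KEY = "sk-your-key-here"

# The bearer-token header pattern is common to OpenAI, Mistral, DeepSeek, and xAI APIs.
headers = {
    "Authorization": f"Bearer {API_KEY}",
    "Content-Type": "application/json",
}

# The body names a model and a conversation; which model names exist
# is entirely up to the provider.
payload = {
    "model": "gpt-4o",
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize this article in one sentence."},
    ],
}

# Serialized, this is what would be POSTed to the provider's endpoint,
# e.g. https://api.openai.com/v1/chat/completions in OpenAI's case.
body = json.dumps(payload)
print(body[:60])
```

Notably, several of the companies covered below (Mistral, DeepSeek, xAI) advertise OpenAI-compatible APIs, so this same request shape works across them with only the endpoint, key, and model name swapped out.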

1. OpenAI / ChatGPT Family

ChatGPT is the first name that comes to mind for many people when they hear “artificial intelligence.” However, ChatGPT is not just one thing; behind the scenes, there are multiple models serving different purposes.
GPT-5 is the newest and most powerful member of the family. Designed for general-purpose use, it can process text, audio, and visual inputs simultaneously.
GPT-4o remains one of the core options for paid subscribers. It combines text, visual, and audio input under one roof.
o3 and o4-mini were designed with a different mindset. o4-mini excels particularly in math, coding, and visual tasks; it operates with much higher usage limits compared to o3. These models “think” before responding. If you're dealing with a complex math problem or a layered coding question, it makes sense to turn to this series.
GPT-4.1 excels at coding-focused tasks. It outperforms other models, particularly in precise instruction following and web development tasks.
Sora is a model developed for video production that can create realistic and creative video scenes from text descriptions.
GPT Image sets a new standard in visual production. It replaced DALL-E 3 at the beginning of 2025; one of the most notable improvements is its ability to generate text within images much more successfully.
The models released by OpenAI as open source under the name GPT-OSS are discussed in the second article of this series.

2. Google Gemini Family

Google's artificial intelligence ecosystem is much broader than it appears from the outside. “Gemini” is not a single model; it is a family designed for different speeds, capacities, and purposes.
Gemini Pro is the most powerful member of the family; it is designed for complex analytical tasks, deep reasoning, and multi-step problem solving. It stands out from its competitors, particularly in visual understanding and Google search integration.
Gemini Flash is for those who want to work with a focus on speed. It stands out with its built-in tool use, multimodal generation, and 1 million token context window (roughly 750,000 words). Simply put: it can process a large amount of text, audio files, or images in a single request.
NotebookLM is Gemini's document analysis-focused specialized tool. You can ask questions about uploaded documents, extract summaries, and even listen to summaries in audio format.
Imagen 4 is Google's flagship model for image generation. It is available for general use in Ultra, Standard, and Fast versions.
Veo 3 is Google's answer in video production. It was released as the latest version capable of producing video with audio.
Google's Gemma model family is open-source and is covered in the second article of this series.

3. Anthropic / Claude Family

Anthropic stands out as an AI security-focused research company, and Claude is a product of this approach. Unlike ChatGPT, Claude is not presented as a single model but as a family designed for three different levels of use.
Claude Opus is the most powerful member of the family. Designed for complex reasoning, deep analysis, and long-term tasks, it excels particularly in coding and multi-step problem solving.
Claude Sonnet is a model that strives to strike a balance between speed and power. It produces remarkable results, especially in interface development and coding tasks.
Claude Haiku is the fast and economical member of the family. It is preferred for tasks requiring instant responses and high-volume usage scenarios.
Although Anthropic's models are closed-source, the company stands out in the industry by transparently sharing its security assessments with the public.

4. Microsoft Copilot

Microsoft's AI initiative is not limited to a single product; however, the Copilot brand is at the center of all these products. The most fundamental feature that distinguishes Copilot from other models is its deep integration with Microsoft's own ecosystem: You can find Copilot in Word, Excel, Teams, Outlook, and more.
Copilot runs on GPT-5 infrastructure behind the scenes: a fast chat model handles simple questions, while a deep reasoning model takes on complex, multi-step tasks. This routing means users get the best results without noticing which model is being used. In addition, Microsoft has added Anthropic's Claude models to Copilot; users can now choose between OpenAI and Anthropic models for complex research tasks.
Microsoft's Phi model family is open-source and is covered in the second article of this series.

5. xAI / Grok Family

Grok, a product of Elon Musk's xAI company, stands out from its competitors in several ways. The most notable of these is its integration with X: Since Grok can directly access X's social media data stream, it can access current information much faster than its competitors. If you're tracking social media, researching current events, or analyzing trends, this feature is a significant advantage.
Grok 3 was trained on xAI's supercomputer, called Colossus, with ten times more computing power than the previous generation; it produces remarkable results in reasoning, math, and coding tasks. Grok 4, the newest member of the family, has been introduced as xAI's most powerful model to date.
Aurora comes into play for visual generation. Alongside Aurora, which excels at producing photorealistic images and accurately interpreting text prompts, the Grok Imagine feature can also create short animated clips from text input.
You can access Grok via the web interface at grok.com, iOS and Android apps, or with an X Premium subscription.

6. Mistral / Le Chat

Mistral is a company founded in Paris in 2023 that stands out as a “European alternative” in the world of artificial intelligence. With its privacy-focused design and open-source models, it offers a different voice among the American giants. The company's user-facing interface is called Le Chat, which means “cat” in French. Accessible via the web and iOS/Android apps, Le Chat offers most of its features—image uploading, document analysis, web search, and code execution—for free. It stands out from other chat assistants with its response speed of approximately 1,000 words per second.
In the background, Mistral Large handles general chat, Medium ensures balanced performance, and Small enables efficient use. Additionally, task-specific models like Codestral for coding, Voxtral for speech recognition, and Devstral for software engineering are available.
On the privacy front, conversations are not included in model training by default. This is a significant advantage for users in Europe who are sensitive about data sovereignty.
Mistral's models, such as Mistral 7B, Mixtral, Mistral Small, and Devstral, are released as open source under the Apache 2.0 license. This is covered in the second article of this series.

7. DeepSeek

DeepSeek is a company founded in Hangzhou, China in 2023 that made a highly impressive debut in the world of artificial intelligence. In January 2025, its R1 model surpassed ChatGPT in the US iOS App Store to become the most downloaded free app. What makes this even more remarkable is the cost: the company announced that it trained its V3 model for only $6 million, a figure that demonstrates impressive efficiency when compared to the training costs of large American models.
Two main models are offered to users via chat.deepseek.com. DeepSeek-V3 is a general-purpose model designed for everyday use; it is ideal for writing, content creation, and quick questions. DeepSeek-R1, on the other hand, is a model that “thinks” before responding. It outperforms V3 in tasks requiring complex mathematics, deep reasoning, and research synthesis.
However, an important caveat is necessary: DeepSeek has been documented to return responses that comply with censorship policies, and this has drawn regulatory scrutiny in multiple countries. If you are sensitive about privacy, keep this in mind.
The DeepSeek-R1 and V3 models are also published as open source under the MIT license. So, when you use chat.deepseek.com, you are interacting with a web application; however, the models behind this application are also publicly available for download. This is covered in the second article of this series.

8. Meta AI

Meta AI stands out from other AI assistants in a fundamental way: while competitors require you to download a new app to access them, Meta AI is already waiting for you within the apps you already use. It is integrated into WhatsApp, Instagram, Facebook, and Messenger, and can also be used with Ray-Ban Meta glasses.
The standalone Meta AI app, announced in 2025, gave the assistant its own platform. Text and voice commands, visual generation, and web search are all offered under one roof. One of its standout features is the “Discover” feed: you can see content your friends have created with Meta AI, like it, and build new content on top of it. The Llama 4 model runs in the background, offering better long-context reasoning, fast responses, and robust multilingual support.
It's important to note that Meta AI is closed-source. Although it uses open-source Llama models in the background, the application itself and its data infrastructure are entirely owned by Meta. In addition, it learns from Facebook and Instagram activities to provide personalized responses — a detail to consider for privacy-conscious users.
Meta's Llama model family is open-source. This is covered in the second article of this series.

Conclusion

Closed-source AI systems remain the most common choice in terms of ease of use and access speed. You open an app, type, and get a response. No setup, no technical knowledge required. However, this convenience comes at a cost: you have no say over where your data is processed, which server it is stored on, or how the model is trained.
As we have seen in this section, “artificial intelligence” is not a one-size-fits-all concept. OpenAI generates videos, Google integrates with its search engine, Anthropic prioritizes security, Mistral offers a privacy-focused alternative from Europe, DeepSeek emerges with surprising cost efficiency, and Meta embeds AI into the applications you already use. Each excels in different areas, and each comes at a different cost to you, whether financial or in terms of data.
What about open-source alternatives to these models? AI systems that run on your own server, allow you to access their code, and don't require an internet connection? The answer to this question is in the second part of this series.