Perplexity: Own LLM?
Perplexity relies on an in-house language model named Sonar. It is based on open base models from Meta (Llama 3.x) and has been fine-tuned for search: not a foundation model trained from scratch, but an internally trained derivative with its own infrastructure and search integration. In paid plans, Perplexity also offers third-party models such as GPT, Claude, and Gemini.
Perplexity Sonar
Perplexity unveiled its 'PPLX Online LLMs' at the end of 2023. These models combined open-source bases (Mistral-7B, Llama2-70B) with a proprietary search, indexing, and crawling stack to ground answers in current web sources (Source). In 2025, 'Sonar' followed as the new in-house model for the standard search mode. According to Perplexity, Sonar is based on Llama 3.3 70B and is optimized for factual accuracy and readability (Source). In collaboration with Cerebras, Sonar runs on specialized inference infrastructure, with reported throughputs of up to 1,200 tokens per second (Source).
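The reported throughput makes the latency benefit easy to quantify. A back-of-the-envelope sketch: the 1,200 tok/s figure is the one cited above, while the answer length and the baseline throughput are illustrative assumptions, not Perplexity figures.

```python
# Rough generation-time estimate from throughput figures.
# sonar_tps is the reported Cerebras/Sonar throughput cited above;
# answer_tokens and baseline_tps are illustrative assumptions.

def generation_time(num_tokens: int, tokens_per_second: float) -> float:
    """Seconds needed to stream num_tokens at a given throughput."""
    return num_tokens / tokens_per_second

answer_tokens = 500    # assumed length of a typical search answer
sonar_tps = 1200.0     # reported Sonar throughput on Cerebras
baseline_tps = 50.0    # assumed throughput of a conventional GPU stack

print(f"Sonar:    {generation_time(answer_tokens, sonar_tps):.2f} s")   # ~0.42 s
print(f"Baseline: {generation_time(answer_tokens, baseline_tps):.2f} s")  # 10.00 s
```

At these rates a full answer streams in well under a second, which is why throughput matters so much for an 'answer engine' product.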
Perplexity pursues several goals with an in-house model built on open source. First, an in-house fine-tune enables precise adaptation to search tasks, citation logic, and hallucination control (Source). Second, the dedicated inference infrastructure (Cerebras) optimizes throughput and latency, which is crucial for an 'answer engine' product (Source). Third, Perplexity can position its own fast, search-grounded answers against more expensive frontier models and let users weigh the two by use case (Source). A clear 'we have our own model' message strengthens the brand without sacrificing the flexibility to call on external top-tier models when needed (Source).
Source: YouTube
Model Ecosystem
The Perplexity platform offers a versatile model ecosystem. Sonar is described as an in-house, search-optimized model based on Llama 3.3 70B and further trained internally (Source). Historically, there were also the 'PPLX Online LLMs', which built on Mistral-7B and Llama2-70B and were coupled to Perplexity's own web retrieval (Source). In parallel, the paid plans also offer third-party models such as GPT, Claude, and Gemini, which can be selected in the interface (Source).

Source: dhruvirzala.com
The Perplexity Help Center labels Sonar as an 'in-house model' and lists the frontier models available in Pro (e.g., GPT, Claude, Gemini) that users can actively choose (Source, Source).
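Model selection is also visible programmatically: Perplexity exposes the Sonar family via an OpenAI-compatible API. A minimal sketch of what a request looks like; the endpoint and model names follow Perplexity's public API documentation, and the helper below only builds the request payload rather than sending it (no API key needed):

```python
import json

# Perplexity's OpenAI-compatible chat endpoint (per its public API docs).
PPLX_ENDPOINT = "https://api.perplexity.ai/chat/completions"

def build_request(model: str, question: str) -> dict:
    # The payload follows the OpenAI chat-completions convention that
    # Perplexity's API mirrors; no network call is made in this sketch.
    return {
        "model": model,  # e.g. "sonar" or "sonar-pro"
        "messages": [
            {"role": "user", "content": question},
        ],
    }

payload = build_request("sonar", "Who operates Sonar's inference hardware?")
print(json.dumps(payload, indent=2))
```

Note that the API serves the Sonar family, while the choice of third-party frontier models described above applies to the Pro interface.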
Facts & Claims
It is documented that Perplexity operates an in-house model, Sonar, for the standard search mode, based on Llama 3.3 70B and optimized for factual accuracy, readability, and high speed (Source, Source). Inference runs on Cerebras infrastructure with a reported throughput of up to 1,200 tokens/s (Source). Historically, the 'PPLX Online' models derived from Mistral-7B and Llama2-70B and were coupled to Perplexity's own retrieval (Source). Pro subscriptions allow users to choose between Sonar and third-party models such as GPT, Claude, or Gemini (Source, Source).
Perplexity cites agreements under which third-party providers will not use Perplexity data for training (Source).
It remains unclear whether Perplexity will ever train a fully independent foundation model (without an open-source base); there are no credible announcements. The claim that 'Perplexity only uses GPT/Claude and has no model of its own' is refuted by the Sonar disclosures (Source, Source).
Data Privacy & Use
Regarding data usage, Perplexity emphasizes contractual assurances from third-party providers: data from Perplexity is not to be used to train external models. There are also opt-out rules for training use (Source, Source, Source).

Source: perplexity.ai
Impact & Open Questions
For research, news, and quick orientation, Sonar is usually the pragmatic default: fast, search-grounded, and with source citations (Source). If long-chain reasoning, code assistance, or the tool-calling strengths of a particular frontier family are needed, manually switching to GPT, Claude, or Gemini in Pro is worthwhile (Source). For data-privacy questions, a look at the policy, opt-out, and enterprise information is advisable; relevant here is the assurance that third-party models do not use Perplexity data for training (Source, Source, Source).
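The model-choice advice above can be condensed into a simple routing heuristic. The task categories and the mapping are illustrative assumptions for this sketch, not an official Perplexity routing policy:

```python
# Illustrative routing heuristic for the trade-off described above.
# Categories and mappings are assumptions, not a Perplexity policy.

def pick_model(task: str) -> str:
    search_tasks = {"news", "research", "quick lookup"}
    reasoning_tasks = {"long reasoning", "code", "tool calls"}
    if task in search_tasks:
        return "sonar"           # fast, search-grounded, cited answers
    if task in reasoning_tasks:
        return "frontier model"  # e.g. GPT, Claude, or Gemini in Pro
    return "sonar"               # sensible default per the text above

print(pick_model("news"))  # sonar
```

The point is less the code than the decision rule: default to the fast, cited in-house model and escalate to a frontier model only when the task demands it.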
Source: YouTube
Open questions concern the further development of the model family: Will Sonar be expanded in the long term into a multi-tier line (e.g., Pro, Reasoning), and will credible, independent benchmarks with reproducible methodology be published (Source)? How stable are the commitments on data usage in future contracts with third-party providers, and are there external audits or trust-center reports with details (Source)? What role does 'Deep Research' play for longer, methodical projects, and which models are used there by default (Source)?
The question of the 'own' model can be answered clearly: with Sonar, Perplexity operates an in-house fine-tuned LLM on a Llama 3.x base, tailored for fast, cited search answers and backed by specialized inference hardware (Source, Source). At the same time, the platform remains open to frontier models that can be chosen depending on the task (Source). This calls for a conscious trade-off between speed, cost, depth, and data privacy, and a model choice made situationally.