Aravind Srinivas is the founder and CEO of Perplexity. We cover the advantages of an AI-powered answer engine over traditional search, the components needed to build one, and how Perplexity's business model can compete with the likes of Google and OpenAI.
Principles & Lessons:
1) An “answer engine” is replacing the hack of “10 blue links.” Aravind describes classical search as a compromise, stating “10 blue links was always a hack… we can more or less answer your question directly.” He believes that users want precise results, not exhaustive lists of websites, and envisions “insanely great” experiences where AI even refines the user’s question. “Everybody in the world is curious… not all curiosity can be precisely articulated,” he explains, so the product must do more than just serve links.
2) Speed and accuracy demand significant engineering. Aravind recalls that when Perplexity launched, “the latency for every query was seven seconds… we actually had to speed up the demo video.” Improving speed involved robust infrastructure, better orchestration of multiple APIs, and building out a retrieval pipeline with “a great index.” He’s proud that Perplexity is now “widely regarded as the fastest chatbot out there,” even though it integrates real-time web retrieval.
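The latency win Aravind describes comes largely from orchestration: fanning retrieval out to multiple sources in parallel so end-to-end time tracks the slowest source rather than the sum of all of them. A minimal sketch of that idea, with hypothetical stand-in fetchers (the function names and delays are illustrative, not Perplexity's actual pipeline):

```python
import asyncio

# Hypothetical stand-ins for real backends (an internal index, an external web API).
async def fetch_from_index(query: str) -> list[str]:
    await asyncio.sleep(0.05)  # simulate index lookup latency
    return [f"index-doc for {query}"]

async def fetch_from_web_api(query: str) -> list[str]:
    await asyncio.sleep(0.08)  # simulate external API latency
    return [f"web-doc for {query}"]

async def retrieve(query: str) -> list[str]:
    """Fan out to all sources concurrently; a failed source degrades
    the result set instead of failing the whole query."""
    results = await asyncio.gather(
        fetch_from_index(query),
        fetch_from_web_api(query),
        return_exceptions=True,
    )
    docs: list[str] = []
    for r in results:
        if isinstance(r, Exception):
            continue  # skip the broken source, keep the rest
        docs.extend(r)
    return docs

docs = asyncio.run(retrieve("what is an answer engine?"))
```

With sequential calls the two simulated sources would cost ~130 ms; gathered concurrently they cost ~80 ms, which is the shape of the speedup Aravind attributes to better orchestration.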
3) Citing reliable sources helps curb hallucination. Aravind notes that language models “are conditioned to always be helpful,” which leads to invented answers. Perplexity counters this with references, retrieval from quality domains, and reprogramming the model to say “I don’t know” when data is insufficient. “We keep expanding our index… we keep improving the quality,” he says, stressing that focusing on correctness over memorization is pivotal for user trust.
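The grounding pattern Aravind describes (answer only from retrieved evidence, attach citations, and refuse when evidence is missing) can be sketched in a few lines. The keyword-overlap relevance check below is a deliberately crude stand-in for a real ranker; all names are illustrative:

```python
def _overlap(question: str, text: str) -> bool:
    # Crude relevance proxy; a production system would use a trained ranker.
    q = {w.lower().strip("?.,") for w in question.split()}
    t = {w.lower().strip("?.,") for w in text.split()}
    return len(q & t) >= 2

def answer_with_citations(question: str, retrieved: list[tuple[str, str]]) -> str:
    """retrieved: list of (source_url, snippet) pairs from the index."""
    relevant = [(url, text) for url, text in retrieved if _overlap(question, text)]
    if not relevant:
        # Prefer an honest refusal over a fabricated answer.
        return "I don't know: no retrieved source covers this."
    lines = [f"{text} [{i + 1}]" for i, (_, text) in enumerate(relevant)]
    refs = [f"[{i + 1}] {url}" for i, (url, _) in enumerate(relevant)]
    return "\n".join(lines + refs)
```

The key design choice, per the episode, is the empty-evidence branch: the model is steered to say "I don't know" rather than being "conditioned to always be helpful."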
4) Building on external LLMs is only the start. Aravind initially wrapped Bing API and GPT-3.5 to validate demand for Perplexity, admitting “anybody could have done that.” The real challenge is the back-end orchestration so “you can handle any outage in any of these APIs” and scale seamlessly. He observes that even large model providers have separate “mode[s]” for building cost-effective infrastructure, emphasizing, “Serving your own AI models is itself a separate layer of moat.”
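The outage-handling Aravind credits as the real engineering work amounts to provider failover: try each upstream in order and absorb any single failure. A minimal sketch, with simulated providers standing in for real vendor SDK calls:

```python
class ProviderDown(Exception):
    """Raised when an upstream API is unavailable."""

# Hypothetical provider callables; a real system would wrap vendor SDKs here.
def primary_llm(prompt: str) -> str:
    raise ProviderDown("primary unavailable")  # simulate an outage

def fallback_llm(prompt: str) -> str:
    return f"fallback answer to: {prompt}"

def complete(prompt: str, providers) -> str:
    """Try providers in order; any single outage is absorbed by the next in line."""
    last_err = None
    for provider in providers:
        try:
            return provider(prompt)
        except ProviderDown as e:
            last_err = e  # in production: log the failure and move on
    raise RuntimeError("all providers down") from last_err

result = complete("hello", [primary_llm, fallback_llm])
```

Real deployments layer retries, timeouts, and health checks on top, but the ordered-fallback core is what lets the product "handle any outage in any of these APIs."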
5) Subscription is the first step, but monetization remains an open frontier. Early on, Aravind “had no idea of business models” and simply focused on adoption. A monthly plan tested whether Perplexity’s features, not just GPT-4, would compel users to pay. Surprising willingness to subscribe validated the product’s value, though he calls this only a beginning: “We will also work on… advertisement… how would you not compromise the quality of the answer?” He expects multiple revenue streams, including an API for real-time LLM answers.
6) Training small models to reason is a key bottleneck. When describing how to reduce costs, Aravind stresses that “if I can get a [smaller] model to be as good [as GPT-4 at avoiding] hallucinations,” the overall economics for search would improve dramatically. He wants models to “train on something else than just memorizing all the words… we want them to be intelligent… to be a good reasoning model.” That shift, combined with synthetic data generation for rapid improvements, is where he sees major gains.
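The synthetic-data route Aravind points to is essentially distillation: a large "teacher" model labels questions with reasoned answers, and the resulting pairs become fine-tuning data for a smaller "student." A minimal sketch of the data-generation half (function names are illustrative; `teacher_answer` stands in for a real large-model API call):

```python
def teacher_answer(question: str) -> str:
    # Stand-in for a call to a large model; in practice this would hit an API
    # and return a chain-of-thought style answer.
    return f"Step-by-step reasoning and answer for: {question}"

def build_synthetic_dataset(questions: list[str]) -> list[dict]:
    """Turn raw questions into (prompt, completion) pairs for fine-tuning."""
    return [{"prompt": q, "completion": teacher_answer(q)} for q in questions]

dataset = build_synthetic_dataset(["Why is the sky blue?", "What is 2+2?"])
# `dataset` would then feed a fine-tuning job for the smaller student model.
```

The economics Aravind describes follow from this loop: teacher calls are expensive but run once to build the dataset, while the cheap student serves every subsequent query.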
7) Focus and iterative execution trump obsessive strategizing. Aravind recounts constant suggestions to pivot: “You should build something like Character.AI… you should go enterprise… you should do so many things.” Yet he cites discipline, saying “I’m as proud of the things I said no to, as I am to the things I chose to do.” He emphasizes that “startups are all about finding a truth vector,” then scaling fast rather than spreading efforts too thin.
8) Even competitive threats shouldn’t deter you from building what users want. Observing the fear of “no moats,” Aravind notes, “If it is truth and it can generate value, there’s no reason the existing players don’t want to do the same thing.” He sees no easy alternative: “As a startup, your only job is to… deliver value… keep going.” He points out that big incumbents can indeed chase everything, so the path forward is product excellence and relentless execution, rather than overthinking their moves.
Transcript