How will OpenAI compete?

(ben-evans.com)

69 points | by iamskeole 5 hours ago

20 comments

  • shubhamjain 27 minutes ago
    Everyone is actually underestimating stickiness. The near-billion users OpenAI has are a real moat and might translate into a decent chunk of revenue.

    My wife, for example, uses ChatGPT on a daily basis, but has found no reason to try anything else. There are no network effects, for sure, but people have hundreds or thousands of conversations on these apps that can't easily be moved elsewhere. It's understandable that it would be hard to get the majority of these free users to pay for anything, and hence advertising seems a good bet. You couldn't think of a more contextual way of plugging a paid product.

    I think OpenAI has a better chance of winning on the consumer side than everyone else. Of course, whether that matches up against hundreds of billions of dollars in capex remains to be seen.

    • CharlesW 8 minutes ago
      > Everyone is actually underestimating stickiness.

      I think you're underestimating how fickle consumers are, and how much their choices are based on fashion and emotion. A couple more of these, and OpenAI will find itself relegated to the kids' table with Grok and Perplexity. https://www.technologyreview.com/2025/08/15/1121900/gpt4o-gr...

    • SecretDreams 1 minute ago
      How much is your wife paying for the privilege to use OAI presently?
    • pm90 21 minutes ago
      > My wife, for example, uses ChatGPT on a daily basis, but has found no reason to try anything else.

      Ads might change that. If we know anything, it's that nobody beats Google at ad-based monetization. OAI is absolutely correct to be scared.

    • foogazi 14 minutes ago
      My wife uses Google AI overview - as an extension of search - on a daily basis and then jumps to Gemini
    • morkalork 19 minutes ago
      I commute on the train, I see students studying with it. I go for brunch on the weekend, I see parents consulting it while at the table with their infants. I'm at work, colleagues are using it all day. I leave work and I overhear a random woman smoking in the alleyway, talking on her cellphone, saying "so I asked ChatGPT". It's mind-bogglingly pervasive; the last time something had such a seismic cultural impact was, I dunno, Facebook? And secondly, it's all one specific brand. I'm not encountering Copilot or Gemini in meat-space.
      • boxedemp 4 minutes ago
        My sister uses Gemini and calls it ChatGPT. It's genericide in action.
      • goolz 10 minutes ago
        How many of those people are paying? I think many say "use ChatGPT" to mean any LLM. As you noted, you just see ChatGPT in the wild, but that is anecdotal. It is certainly pervasive right now. But I know a lot of people currently switching to Gemini.

        I personally prefer Claude models for all my work. If I were them I would be very worried. They are never giving us AGI, and I am skeptical they are worth $0.5 trillion. Their cash burn is insane. Once ads and price hikes come, people will migrate to companies that can still afford to subsidize (like Google).

        Plus I heard they lowered projections recently? Sam honestly comes off as a grifter.

        • morkalork 0 minutes ago
          Is it anecdotal? The observation isn't _my_ experience using it, or of _my friends_. I have no influence over who I see in public using it. I know it's not exactly a scientific study but it's still pretty damn good as a random sample. If I went outside and saw the sky was dark, cloudy and my face got wet, would you tell me it was anecdotal evidence when I say it's raining out?
        • jen20 2 minutes ago
          I actually encountered this today - one member of a group I'm planning a trip with posted some of the breathless nonsense that ChatGPT produced ("you're not picking a hotel, you're picking a group dynamic..." and other such textual diarrhea).

          It turned out the only reason they used ChatGPT was that it is free at low enough usage volumes. My suggestion to see what Claude had to say instead was met with "huh, you have to pay for it?". It's not like these are people who can't afford $20 per month for a subscription, but it might be that these assistants aren't even worth that for typical "normie" use cases.

  • ryanlitalien 5 minutes ago
    I speak native English and barebones high school Spanish. I recently visited Costa Rica and almost every time there was a language barrier issue (unknown word or phrase), the local folks opened ChatGPT, said what they were trying to say in Spanish and then had ChatGPT convert it to English. It was everywhere.
    • kshacker 4 minutes ago
      I have done that at my home. My wife calls the maids; they were there, I needed to go to the restroom, and when I asked my wife to tell them, she was struggling to communicate. It took me 3 seconds to realize ChatGPT could help. And it did.
  • throwaway13337 6 minutes ago
    It seems likely that no one will win. And therefore everyone else will win.

    This is because LLMs are commodities. And they destroy the 'power' - as the article puts it - that platforms have over their prisoners.

    These sorts of doom articles are interesting in that they are from the perspective of tech company valuations. Why is this the important perspective?

    For the humanity perspective, this doom is very optimistic. It says that these LLMs currently disrupting the platforms cannot themselves be the next platforms.

    Maybe no one owns the next platforms.

    Sounds good to me.

  • modeless 10 minutes ago
    > The models have a very large user base, but very narrow engagement and stickiness, and no network effect or any other winner-takes-all effect so far that provides a clear path to turning that user base into something broader and durable.

    I think this is clearly wrong. Users provide lots of data useful for making the models better and that is already being leveraged today. It seems like network effects are likely in the future too. And they have several ways to get stickiness including memory.

  • gradus_ad 58 minutes ago
    These very valid points apply to all companies trying to make money off of proprietary models, which means margins are going to collapse in a vicious price war that will make Uber vs Lyft seem tame.

    As margins collapse, capex will collapse. Unfortunately, valuations have become so tied to AI hype that any reduction in capex would signal the hype has gotten ahead of itself, and therefore that valuations have gotten ahead of themselves. So capex keeps escalating.

    None of this takes into account the hoarding effects at play with regards to GPU acquisition. It's really a dangerous situation the industry is caught in.

    • wombatpm 42 minutes ago
      Couple of observations:

      Companies used to hoard talent. Now they are hoarding compute, RAM, and GPUs.

      DeepSeek showed that there are possibly less expensive ways to train, meaning the future eye-watering expenses may not happen.

      Bigger models may not scale. The future may be federations of smaller expert models. ChatGPT-X doesn't need to know everything about mental health; it just needs to recognize that the Sigmund von Shrink mental health model should answer some of my questions.

      • chipgap98 31 minutes ago
        DeepSeek showed that distillation is possible. Their results aren't possible without someone else doing the leading-edge training.
  • theptip 32 minutes ago
    I think this take underestimates a couple points:

    1) the opportunities for vertical integration are huge. Anthropic originally said they didn’t want to build IDEs, then realized the pivot to Claude Code was available to them. Likewise when one of these companies can gobble up Legal, Medical, etc why would they let companies like Harvey capture the margins?

    2) OSS models are 6-12 months behind the frontier because of distillation. If labs close their models, the gap will widen. Once vertical integration kicks off, the cost of allowing distillation becomes higher, and the benefit of opening up generic APIs becomes lower.

    I can imagine worlds where things don’t turn out this way, but I think folks are generally underrating the possibilities here.

    • arctic-true 4 minutes ago
      To go vertical they’d need to illustrate the value-add, a problem that the vertical competitors already have. Why use Claude for Accountants at $300/month when regular Claude will do the same thing for much less? The stock answer is that Claude for Accountants keeps your data more secure and doesn’t train on it. But a) I think the enterprise consumer is much less likely to trust a model creator not to stick its hand in the cookie jar than a middleman who needs the trust to survive, and b) the vertical competitors typically don’t use the absolute most up-to-date models in their products anyway, so why not just go open-source and run everything in-house? 6 months is a long time in tech, but it’s the blink of an eye in most white-collar professions.
  • rafaelmn 37 minutes ago
    I keep hearing that app integrations will be where the AI value is, and then I see the actual app integrations, and they range from useless to mildly helpful.

    From what I can see Anthropic's big bet is that they will solve computer use and be able to act as an autonomous agent. Not so sure how fast they will progress on that. OpenAI on the other hand - I have no idea what they are planning - all I'm reading is AI porn and ads.

    Google seems to be lackluster at executing with Gemini, but they are in the best position to win this whole thing - they have so much data (index of the web, YouTube, Maps) and so many ways to capitalize on the models. It's honestly shocking how bad they are at creating/monetizing AI products.

    • edgyquant 22 minutes ago
      AI porn and ads may be a bigger market than Anthropic's B2B.
      • jjmarr 10 minutes ago
        OpenRouter top apps are 50/50 between AI girlfriends and coding assistants as a general rule.
  • com2kid 54 minutes ago
    Sometimes I like to imagine what this would be like if the technology had appeared 25 years ago.

    First off, none of this open publishing stuff. Everything would have been trade secrets.

    Next, no interoperable JSON APIs; instead, binary APIs that are hard to integrate with and therefore sticky. Once you'd spent 3 or 4 months getting your MCP server set up, no way would you ever try to change to a different vendor!

    The number of investors was much smaller, so odds are you wouldn't have seen these crazy high salaries, and you wouldn't have people running off to different companies left and right. (I know, .com boom, but the .com boom never saw $500k cash salaries...)

    Imagine if Google hadn't published any papers about transformers or the attention paper had been an internal memo or heck just word2vec was only an internal library.

    It has all been a net good for technological progress but not that good for the companies involved.

    • deepfriedbits 30 minutes ago
      Could they have even trained the models 25 years ago? Wikipedia was nothing close to what it is today and I know folks here like to mourn the fall of the open web, but it's still orders of magnitude larger today than it was in 2001. YouTube, so many information stores that simply didn't exist then.
  • Buttons840 24 minutes ago
    Tech companies are one of the jewels in America's (USA's) crown. If we build a bunch of huge AI companies, rivals will probably continue to release open AI models which undermine the US's influence in the world.
  • sinenomine 1 hour ago
    People underestimate the lead OAI has with their post-5.2 models. The author does not strike me as someone who closely follows the progress frontier labs are making in the US and around the world.
  • neom 1 hour ago
    Not many folks talking about this: https://www.tomshardware.com/tech-industry/artificial-intell...

    The WH has said it hasn't approved any sales, but it's not clear China is buying, and it seems they are making good progress on their Huawei Ascend chips. If China is basically at parity on the full stack (silicon, framework, training, model), and it starts open-weighting frontier models at $0.xx/M tokens, then yeah, moat issues all around, one would imagine? Not surprised to see Anthropic complaining like this: https://www.anthropic.com/news/detecting-and-preventing-dist... - but I don't know how you go back from it at this point?

    • danpalmer 1 hour ago
      Not surprising, Nvidia's margin was just a huge incentive for companies/countries to develop their own solutions. You don't have to be 100% as good if you're 80% cheaper. It's unsurprising that this is being driven by Chinese companies/labs who often have a lot less funding than the US, and the big tech companies (Google, Microsoft, Amazon) who will benefit the most from having their own compute.

      I've never believed in Nvidia's moat, and it seems OpenAI's moat (research) is gone and, surprisingly, no longer a priority for them.

      • cosmic_cheese 52 minutes ago
        It seems like it’s really only China that’s pursuing the route of doing more with smaller/cheaper models, too, which also has a lot of potential to give the whole bubble a good shake.

        To me it seems like the most obvious thing to do. More efficient models both make up for whatever you lost by using cheaper hardware and let you do more with the hardware you have than the competition can. By comparison the ever-growing-model strategy is a dead end.

      • neom 1 hour ago
        Feels a bit crazy saying this but I can imagine a weird future where we have some outlawed Chinese tokens situation under some national security guise. No clue how that would work but nothing surprises me anymore.
        • samrus 53 minutes ago
          Shit is about to get a lot more cyberpunk than we're used to, that's for sure
    • nsoonhui 1 hour ago

      > it seems they are making good progress on their Huawei Ascend chips

      This is interesting to me. I thought that the reason for the DeepSeek delay was the politicians' insistence on using Huawei chips [0]. But that was last August.

      Has anything changed in between?

      [0]: https://www.reuters.com/world/china/deepseeks-launch-new-ai-...

    • re-thc 1 hour ago
      China doesn't need to buy it. They can continue their policy and look good.

      They've already found a better route: buy it elsewhere, e.g. in Singapore, and train their models there on Nvidia hardware.

      Then ship the result and fine-tune back in China.

      So "China" is, and has always been, buying it. No difference. The politics can keep raging.

  • re-thc 31 minutes ago
    Wasn't OpenAI's moat buying up all the RAM or Nvidia cards?
  • johnfn 1 hour ago
    This article is significantly better written than most anti-OpenAI/AI articles, and for that I am really grateful. I am generally an AI booster (lol), so I am happy to read well-considered thought pieces from people who disagree with me.

    That being said...

    > The one place where OpenAI does have a clear lead today is in the user base: it has 8-900m users. The trouble is, these are only 'weekly active' users: the vast majority even of people who already know what this is and know how to use it have not made it a daily habit. Only 5% of ChatGPT users are paying, and even US teens are much more likely to use this a few times a week or less than they are to use it multiple times a day.

    This really props up the whole argument, because the author goes on to say that OpenAI's users are not really engaged. But is "only" 5% of an 8-900M user base paying really so inconsequential? What percentage of Meta's users are paying? Google's? I would be curious to see the author dig deeper here, because I am skeptical that this is really as bad as the author suggests.

    Moving on to another section:

    > If the next step is those new experiences, who does that, and why would it be OpenAI? The entire tech industry is trying to invent the second step of generative AI experiences - how can you plan for it to be you? How do you compete with this chart - with every entrepreneur in Silicon Valley?

    Er, are any of these startups training foundation models? No? Then maybe that is how you compete? I suppose the author would say that the foundation model isn't doing much for OpenAI's engagement metrics (and therefore revenue), but I am not sure I agree there.

    Still, really good article. I think it crystallizes the anti-OpenAI argument and gives me a lot of interesting things to think about.

    • dijksterhuis 56 minutes ago
      > What percentage of Meta's users are paying? Google's?

      The advertiser-based business model of those companies makes your question/thought process here problematic for me. Historically speaking, Google and "Meta" (Facebook) were primarily advertising companies. They provided billboards (space and time on the web page in front of an end-user) to anyone willing to buy that space and time. The "free access" end-users would always end up seeing said billboards, which is how they ended up "paying" for the service.

      So most Meta/Google end-users were "paying" users: they were subsidised by the advertising customers paying for the end-users (who were forced to view adverts). The end-users paid with interruptions to the service by adverts. [0]

      In that context it feels a little like you're comparing apples to Dave's left foot, as OpenAI hasn't had that with advertising… historically [1].

      --

      [0]: yes ad-blockers, yes more diverse revenue income streams over the years like with phones, yes this is simplified yadayada

      [1]: excluding government etc. ~bailouts~ investments as not the same as advertising subsidies, but you could argue it's doing the same thing

      • johnfn 38 minutes ago
        Yes -- but neither Google nor Meta started off as an advertising company - they started off providing a service a lot of people liked, and then eventually added ads to it. My assumption (somewhat implicit, admittedly) is that there's no reason OpenAI couldn't do the same. I can understand why that might be controversial, though.

        But honestly, if OpenAI can't figure out ads given all their data and ability, they deserve to fail. :P

        • chipgap98 27 minutes ago
          But OpenAI has more serious competition than those others did when they were coming up. That puts pressure on them to figure out ads, and they dragged their feet getting started.
    • wesammikhail 1 hour ago
      > But is "only" 5% of users paying of a 8-900M user base really so inconsequential? What percentage of Meta's users are paying? Google's? I would be curious to see the author dig deeper here, because I am skeptical that this is really as bad as the author suggests.

      The difference is in the unit economics. OpenAI has to spend massively on every free user it serves. The others you mentioned have SaaS economics, where the marginal cost of onboarding and serving each non-paying user is essentially zero, while they also make money from those free users via advertising. Hence, the free users are actually a net positive rather than an endless money sink.

      Keep also in mind that AI has always been, and will always be, a commodity. The moment you start forcing people to convert into paying customers is the moment they jump ship at scale.

      Just something to keep in mind.

  • system2 22 minutes ago
    This is confirmation bias. HN and other tech people are focusing on the programming aspect of AI more than anything else. The average user does not use it for that, and they don't care. ChatGPT became something like Kleenex.
    • kdheiwns 10 minutes ago
      Kleenex was exactly what I had in mind when reading other comments. And just like Kleenex, where people use whatever tissue they find and forget the word "tissue" even exists, ChatGPT seems to be becoming a genericized term that just means "AI chatbot."
  • d--b 29 minutes ago
    Worth noting that it’s not a winner-takes-all situation. There’s definitely space for differentiation.

    Anthropic is in favor with developers and generally tech people, while OpenAI / Gemini are more commonly used by regular folks. And Grok, well, you know…

    We have yet to see who’s winning in the “creative space”, probably OpenAI.

    As these positionings crystallize, each company is likely going to double down on its users’ communities, like Apple did when specifically targeting creative/artsy people, instead of cranking out general models that aren’t significantly better at anything.

    • system2 20 minutes ago
      I categorize it like this:

      Claude: Programmers

      ChatGPT: LGBTQ/Liberals, with a lot of censorship

      Grok: Joe Rogan

      • ftchd 12 minutes ago
        DeepSeek: Jìan-Yáng
  • paradox_hash 2 minutes ago
    [dead]
  • boxingdog 1 hour ago
    The main problem with OpenAI/Anthropic is that their only moat is their models, and it has been proven that you can clone a model through distillation. Although the performance is not exactly the same, it gets very close to the original.
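    For anyone curious what "cloning through distillation" means mechanically: the student model is trained to match the teacher's temperature-softened output distribution. A minimal numpy sketch of the core objective, with made-up logits and function names for illustration, not any lab's actual pipeline:

    ```python
    import numpy as np

    def softmax(logits, T=1.0):
        """Temperature-scaled softmax; higher T flattens the distribution."""
        z = np.asarray(logits, dtype=float) / T
        z = z - z.max()  # subtract max for numerical stability
        e = np.exp(z)
        return e / e.sum()

    def distill_loss(teacher_logits, student_logits, T=2.0):
        """KL(teacher || student) on softened distributions -- the core
        objective a student minimizes during knowledge distillation."""
        p = softmax(teacher_logits, T)   # soft targets from the big model
        q = softmax(student_logits, T)   # the student's predictions
        return float(np.sum(p * (np.log(p) - np.log(q))))
    ```

    The loss is zero when the student reproduces the teacher's distribution exactly and grows as the two diverge, which is why access to a frontier model's outputs is enough to get "very close to the original" without redoing the frontier training.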
  • darig 54 minutes ago
    [dead]