• Media type: E-Book; Report
  • Title: What should be done about Google's quasi-monopoly in search? Mandatory data sharing versus AI-driven technological competition
  • Contributor: Martens, Bertin [Author]
  • Imprint: Brussels: Bruegel, 2023
  • Language: English
  • Keywords: K21 ; digital competition policy ; D43 ; generative AI ; search engines ; answer engines ; access to data ; ChatGPT ; economies of scale and scope in data aggregation ; data governance ; large language models ; chatbots ; D23
  • Footnote: This data source also contains holdings records that do not lead to a full text.
  • Description: The first part of this paper focuses on competition between search engines that match user queries with webpages. User welfare, as measured by click-through rates on top-ranked pages, increases when network effects attract more users and generate economies of scale in data aggregation. However, network effects trigger welfare concerns when a search engine reaches a dominant market position. The EU Digital Markets Act (DMA) imposes asymmetric data-sharing obligations on very large search engines to facilitate competition from smaller rivals. We conclude from the available empirical literature on search-engine efficiency that asymmetric data sharing may increase competition but may also reduce scale and user welfare, depending on the slope of the search-data learning curve. We propose policy recommendations to reduce the tension between competition and welfare, including (a) symmetric data sharing between all search engines irrespective of size, and (b) facilitating real-time portability of user search histories and profile data to competing search engines. The second part of the paper focuses on the impact of recent generative AI models, such as Large Language Models (LLMs), chatbots and answer engines, on competition in search markets. LLMs are pre-trained on very large text datasets prior to use and do not depend on user-driven network effects, which avoids winner-takes-all markets. However, high fixed algorithmic learning costs and input-market bottlenecks (webpage indexes, copyright-protected data and hyperscale cloud infrastructure) make entry more difficult. LLMs produce semantic responses (rather than web pages) to a query. That reduces cognitive processing costs for users but may also increase ex-post uncertainty about the quality of the output. User responses to this trade-off will determine the degree of substitution or complementarity between search engines and chatbots. We conclude that, under certain conditions, a competitive chatbot market could crowd out a monopolistic search engine market and ...
  • Access State: Open Access