Productivity Gains and Persistent Limitations

by Jonas Bordo

In the fall of 2022, OpenAI released ChatGPT and made the power and promise of artificial intelligence (AI) a tangible reality. To date, the impact has been most profound in the area of human productivity. For example, here at Dwellsy, we precisely map every SFR and apartment address that comes into our system (hundreds of thousands per month), and what was once a labor-intensive process is dramatically faster and more accurate with help from AI. Code debugging — one of the most painstaking and unpleasant tasks for our developers — is now (usually) a breeze. Processing emails is dramatically faster with AI reviews and drafts. It is almost as if we have doubled our team size — for only $20 per month per team member in most cases.

What AI makes possible with scaled data is nothing short of miraculous. A human can tell in a heartbeat whether a single-family home has a garage. But to look at tens of thousands of them each day and do the same? No one human is capable of, or willing to, deliver at that scale, but Large Language Models (LLMs) are happy to take on the task — and they can handle it in minutes and without complaint.

LLMs have had a similarly profound impact on our analytical ability. We can feed financial statements into LLMs like ChatGPT, Claude or Gemini and query the financials to get insights in seconds. We can give an LLM an enormous data set and ask for any kind of analysis. LLMs are not always right (like the humans they are built to emulate), but they make invaluable thought partners in our work.

While AI is becoming indispensable, it has serious limitations that are not going away anytime soon. Let's break them down.

Without Good Data, AI is Pointless (or Worse)

AI is only as good as the data it uses — the challenge of "garbage in, garbage out" has not changed.
Poor-quality data, incomplete datasets, or outdated information can lead to inaccurate predictions and flawed decisions. And SFR is rife with data challenges. Here are some common issues that AI runs into:

» Offline Properties // In SFR, many properties exist solely offline — rented via a yard sign and managed in someone's notebook or an Excel sheet. As a result, AI will miss many reference properties that could be invaluable for analysis.

» Data Fragmentation // Even when owners and operators digitize, many do so on in-house platforms that are not shared, so much of the data sits behind enterprise firewalls, inaccessible to AI.

» Old Data // SFR evolves rapidly due to factors like new developments, economic shifts, or regulatory changes. Too much of the available data is historical, and AI models may rely on it without factoring in real-time updates.

» Bias in Data // The data sets used to train AI models are often not statistically representative. The issue can be as simple as dramatically better data being available in one neighborhood or from one provider, causing that data to overwhelm other, potentially more valuable data in the AI's analysis.

» Incomplete Data // I have yet to see a property that is fully digitized. This is doubly true for SFR properties, which are small in individual scale and highly varied. At best, the core characteristics are captured in the data, but there is always more missing than present in the digital record.

Without extensive, representative, timely, and high-quality data inputs, AI is always going to struggle. So as users of these tools, we need to make sure we feed them the right data if we want to be able to depend on the outcomes.

Missing Character, Intangibility, and Nuance

I was first attracted to real estate by its very "real" character. Unlike most financial assets, real estate is a living, breathing thing with character and life all its own.
This fact always hits home when I am touring properties, a feeling that dates back to one of my first. I still remember walking into a decrepit property on the Northwest Side of Chicago and seeing nothing but potential in the well-aged bones of an unusual property in an edgy but up-and-coming neighborhood. That very potential — wrapped up in very human concepts like character — is extremely difficult to digitize and, as a result, remains beyond the reach of AI in this space. Here are some of the most challenging gaps for AI in understanding character:

» Neighborhood Sentiment and Future Growth // AI can analyze current demographic and economic data, but it may struggle to capture the subtle, on-the-ground shifts that can indicate future neighborhood growth. Factors like new businesses, planned infrastructure projects, or changes in community dynamics are much more visible to humans through local knowledge and experience than through data.

» Property Condition and Renovation Quality // While AI can estimate the value of renovations or upgrades, it cannot fully evaluate the quality of craftsmanship, the durability of materials, or the aesthetic appeal of the property. Human judgment is crucial in evaluating whether improvements will attract residents or increase the property's long-term value.

» Local Market Nuances // Some SFR markets have hyper-local characteristics that may not be fully captured by data. For example, two neighborhoods within the same city could have vastly different demand characteristics due to local attractions, schools, or even intangible qualities like "curb appeal." AI models tend to overlook these nuances, relying instead on broad averages.

Over-Reliance on Historical Data

AI models often depend heavily on historical data to make predictions about future performance.
This reliance can be problematic in several ways:

» Failure to Account for Disruptions // AI models may not be equipped to predict sudden changes in the real estate market, such as economic downturns, natural disasters, or major regulatory shifts. For example, during the COVID-19 pandemic, could AI models have predicted the spike in demand