Workshop #4: Give AI the CRE Intelligence It Needs
On a recent live follow-up workshop, we took the Multiplier Framework out of theory and into one of the most important (and most unsolved) problems in CRE AI: data.
Your AI is only as good as what you give it. Without access to real data, it sounds confident, produces something that looks professional, and is often wrong in ways that are hard to catch.
The goal was simple: show exactly what happens when Claude operates with no data connections, fix it step by step using real comp databases and live data feeds, then demonstrate the difference side by side.
If you missed the live session, or want a tighter walkthrough, this post gives you:
- The full workshop replay
- A clear breakdown of the Optimal Output Framework: instructions, tools, and data
- A live demo of what raw AI does with a real OM (and why you can’t trust it)
- A walkthrough of building an Airtable comp database and connecting it to Claude via MCP
- An introduction to the A.CRE Intelligence Hub: live rates, demographics, employment, and skills
- A side-by-side comparison of raw AI output versus data-connected AI output
- A link to the Multiplier Framework app (free) and the framework overview
- Pointers to AI.Edge and CRE Agents if you want help learning or building
- You might also enjoy: The previous workshop in this series, covering how to build and share AI skills for CRE: The Multiplier Framework Workshop #3: AI Skills
Watch the Workshop Replay
The Real Problem Isn’t the Prompt
Four years into the AI era, most CRE professionals have gotten reasonably good at prompt engineering. They know how to write a specific instruction, how to give context, how to ask for the format they want. And that is all well and good.
But the prompt is only one ingredient. In Workshop #4, Spencer introduced the Optimal Output Framework, the same framework taught in the opening lesson of AI.Edge Pro: every AI output is a product of three things. Instructions (the prompt, specific to the task at hand). Tools (the AI’s ability to take action, like writing code, running web search, or editing a file). And knowledge, which splits into two subcategories: skills (stable methodology the AI follows every time) and data (live, deal-specific information that changes constantly).
Workshop #3 tackled skills. This one tackles data, and Spencer made the case directly: right now, data is the single biggest weakness in AI for commercial real estate. The models are powerful. The prompts are getting better. But if the AI does not have access to real, current, structured data, it will confidently produce an output that looks right and is not.
What Raw AI Does With a Real OM
To make the point concrete, Spencer built a hypothetical offering memorandum for Kent Valley Logistics Center, a fictional industrial property in the Kent Valley submarket of King County, Washington. The OM was well-constructed, with a realistic deal profile: a 2019-vintage warehouse, triple net lease, 36-foot clear heights, a 3PL tenant with Home Depot and Lowe’s as end clients.
He then asked Claude, with all connectors disabled, to produce a one-page investment proposal. The prompt was specific: validate the rent and cap rate with comps, analyze 10-mile radius demographics and employment, recommend floating versus fixed rate debt.
What followed is something most CRE professionals will recognize. Claude declared it a great OM, attempted a web search for comps, and surfaced market-level reports from Cushman and CBRE, a cap rate survey, and listings from LoopNet and CommercialCafe. It found spot SOFR rates but could not access Chatham's full forward curve, which sits behind a login wall. For demographics, it scraped what it could find, including results from King County government job boards and ZipRecruiter, which are not radius demographic sources. It noted a 10-mile radius analysis had been completed. That was a hallucination.
The output read well. It included rent validation, cap rate commentary, a debt recommendation, and key risks. A first-day analyst might find it reassuring. An experienced underwriter would immediately notice that the rent was flagged as approximately 5% above market, the cap rate was declared in-line, and the demographics section was populated with city-level data dressed up as a radius analysis. None of it was sourced from actual comps or actual data.
The problem is not that Claude failed. The problem is that it did not fail loudly. It hedged with ranges, it sourced from whatever it could reach, and it produced something that looks like analysis. That is what Spencer means when he says AI without data sounds confident and is often wrong.
- You might also enjoy: How the Multiplier Framework applies to underwriting specifically: The Multiplier Framework Workshop #2: Double Underwriting Speed with AI
Fix #1: Build a Comp Database and Connect It via MCP
The first fix was a comp database. Spencer walked through building one in Airtable, a cloud-based database tool that is accessible to non-developers and connects directly to Claude via MCP (Model Context Protocol).
The schema was designed with Claude's help: Spencer asked Claude what fields an industrial sale comp database should have, refined the schema in a back-and-forth, then handed the final prompt to Airtable's built-in AI (called Omni) to auto-generate all the columns. The database included fields for address, submarket, building size, year built, clear height, column spacing, sale price per square foot, cap rate, NOI, and buyer and seller identities, among others. Spencer populated it with King County sales data sourced from property tax records (Washington is a full-disclosure state, meaning sale prices are public and updated near-daily) and invented NOI figures for the exercise.
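To make the schema concrete, here is a minimal sketch of the comp record described above, expressed as a Python dataclass. The field names and the sample record are illustrative assumptions; the actual Airtable base from the workshop may name and type its columns differently.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class IndustrialSaleComp:
    """One industrial sale comp, mirroring the fields named in the workshop."""
    address: str
    submarket: str
    building_size_sf: int
    year_built: int
    clear_height_ft: Optional[float]   # may be unpopulated, as it was in the session
    column_spacing_ft: Optional[str]
    sale_price_psf: float
    cap_rate_pct: float
    noi: float
    buyer: str
    seller: str

# Hypothetical record for illustration only
comp = IndustrialSaleComp(
    address="123 Example Way, Kent, WA",
    submarket="Kent Valley",
    building_size_sf=150_000,
    year_built=2019,
    clear_height_ft=None,              # left missing rather than invented
    column_spacing_ft=None,
    sale_price_psf=185.0,
    cap_rate_pct=5.73,
    noi=1_590_000,
    buyer="Buyer LLC",
    seller="Seller LLC",
)
print(comp.submarket)
```

The point of typing the schema out, even informally, is that it forces the "what fields do we actually need" conversation before any data entry happens.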
With the Airtable MCP connector enabled, Claude could now read, query, and write to the database directly from a conversation. No copying and pasting, no tab-switching. Spencer described what MCP is in plain terms: if you are familiar with an API, it is the same idea, a connection to an external tool, but built on a protocol that is simpler for AI to speak to natively. Airtable has MCP connectors for Claude, ChatGPT, and Manus. OpenClaw users can connect to Airtable the same way.
He then ran the same prompt with Airtable enabled. Claude found five Kent Valley comps, pulled them into a structured table, and came back with a materially different answer. The $13.50/SF asking rent was not 5% above market. It was 19% above the comp average and 12.5% above even the highest-rent comp in the set. The 5.73% cap rate was supported but described as tight. Price per square foot was a premium to every comparable trade. None of that signal existed in the raw AI output.
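The comp math itself is simple. The $13.50/SF asking rent and the 19% and 12.5% premiums come from the session; the comp average (~$11.34/SF) and highest comp (~$12.00/SF) below are back-solved assumptions used only to illustrate the calculation.

```python
def pct_above(subject: float, benchmark: float) -> float:
    """Percent by which subject exceeds benchmark."""
    return (subject / benchmark - 1) * 100

asking_rent = 13.50    # subject property's asking rent, $/SF (from the workshop)
comp_average = 11.34   # hypothetical, implied by the stated 19% premium
highest_comp = 12.00   # hypothetical, implied by the stated 12.5% premium

print(f"{pct_above(asking_rent, comp_average):.0f}% above comp average")
print(f"{pct_above(asking_rent, highest_comp):.1f}% above highest comp")
```

A ~19% premium and a ~5% premium are not the same conversation with an investment committee, which is exactly why the comp layer matters.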
Spencer also noted something worth paying attention to: because Claude was working from real data, it also knew what was missing. Clear height data had not been populated for the comps, so it flagged that rather than inventing a comparison. That is a useful behavior. AI working from training data or web scraping cannot tell you what it does not have. AI working from a structured database can.
One more feature worth highlighting: Claude can write back to the database. At the end of the session, Spencer asked it to save the subject property as a new rent comp, and it did. That bidirectional flow, pulling comps in and saving new observations back out, is how a comp database grows over time without manual data entry.
Fix #2: The A.CRE Intelligence Hub
The second fix addressed what Airtable cannot provide on its own: market-level data. Demographics, employment, rates, and capital markets are not things most CRE professionals have sitting in their own databases. They are external, constantly updating, and expensive to access through traditional data subscriptions.
Spencer introduced the A.CRE Intelligence Hub, a proprietary MCP server built by the A.CRE and CRE Agents teams. The tech stack is Claude Code for development, MongoDB for data storage, and AWS for hosting. It currently serves four live data feeds:
- Rates and Capital Markets: A 121-period SOFR forward curve, swap rates, Treasury rates, corporate spreads, and loan proceeds modeling.
- Census and Demographics: Radius-based population, income, home value, and rent data at 1, 3, 5, and 10-mile rings, with percentile ranks comparing any given radius to every equivalent radius in the country. The radius calculation is the hard part: census data is reported at the block level, not the radius level, so producing a true radius requires weighting across all census blocks within the ring. That calculation is built in.
- Employment and Labor: Employment data by submarket and radius, including a momentum index and a resilience index.
- Residential Permits: Permit data for demand-side analysis.
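The rates feed above is what makes a floating-versus-fixed comparison possible in-conversation. A minimal sketch of that comparison, with an entirely made-up (and truncated) forward curve, spread, and fixed quote:

```python
# Hypothetical inputs: a short slice of a SOFR forward curve, a floating
# spread, and a fixed-rate quote. None of these values are from the workshop.
sofr_forward_curve = [0.0430, 0.0415, 0.0400, 0.0390, 0.0385]  # per-period forward rates
floating_spread = 0.0250   # lender spread over SOFR (assumed)
fixed_rate = 0.0625        # fixed-rate quote (assumed)

# Average all-in floating cost over the curve vs. the fixed quote
avg_floating = sum(sofr_forward_curve) / len(sofr_forward_curve) + floating_spread
print(f"avg all-in floating: {avg_floating:.2%} vs fixed: {fixed_rate:.2%}")
```

A real comparison would use the full 121-period curve and period-by-period debt service rather than a simple average, but the structure is the same: without the forward curve as data, the AI has nothing to average.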
The hub also delivers skills built for the A.CRE Accelerator curriculum. This is the part of the workshop that connects directly to A.CRE's longer-term vision for the Accelerator. Spencer described it using the long division analogy: every student should learn to build a direct cap valuation, a DCF model, a waterfall, and a rent roll by hand before using AI to do the same. That foundational knowledge is what lets you validate whether the AI is right. Once you have it, the calculator should follow. What is coming to the Accelerator is exactly that: after completing a lesson, members will be able to perform the same task using AI, governed by a skill written for that exact methodology. The Intelligence Hub is how those skills get delivered.
For AI.Edge Pro members and Accelerator members, access to the hub is available. For those who want to build something similar on their own, Spencer noted that Claude Code and MongoDB are sufficient to replicate the architecture, and anyone on the call could do it.
With the Intelligence Hub enabled alongside Airtable, Spencer ran the same prompt a third time. The output now included three-mile, five-mile, and ten-mile radius demographics with percentile rankings, 11-year population and income growth trends, the full 121-period SOFR forward curve, loan proceeds comparisons for floating versus fixed, and employment data with momentum and resilience indicators. Claude also flagged one gap: retail spending data was not in the hub, so it fell back to web search for that piece and the output was weaker. Spencer called it out directly as the next dataset to add.
Get the A.CRE Intelligence Hub
If this workshop made one thing clear, it is that the data layer is where most CRE AI workflows break down. The A.CRE Intelligence Hub is built specifically to fix that: it gives your AI access to four live primary-source data feeds (rates and capital markets, radius-level demographics, employment, and residential permits) plus 19 expert-written skills that teach it how to do real CRE analysis, from acquisition underwriting to debt modeling to income statement analysis. It connects directly to Claude, ChatGPT, and Manus in about two minutes, and is included with an AI.Edge Pro membership. If you want outputs you can actually send to a client, this is the missing piece.
The Side by Side
The clearest part of the workshop was the comparison. Same OM. Same prompt. Three outputs.
- Rent: raw AI said approximately 5% above market. With real comp data: 19% above the comp average.
- Price per square foot: raw AI had no comp-level data. With Airtable: 14% above every comparable trade in the set.
- Demographics: raw AI used city-level Kent data. With the hub: true radius data at three rings with percentile ranks against the national distribution.
- Rates: raw AI found a 2028 SOFR forward rate from a web scrape. With the hub: a 121-point forward curve suitable for structured loan modeling.
Spencer had Claude score both versions of the investment proposal. The raw AI version received a 6.5 out of 10. The data-connected version received an 8.5. The remaining 1.5 points, Claude noted, would require a full DCF with exit scenarios. That is a reasonable answer. The point is not that 8.5 is perfect. It is that 6.5 and 8.5 are different documents for making a real decision.
What This Means for How You Build
The workshop closed with Q&A, and a few answers are worth capturing here.
On proprietary data sources: connecting to a paid data provider requires either an API or an MCP server. Spencer noted that CoStar is unlikely to open either, but providers like Hello Data have open APIs that subscribers can connect directly to their AI. The data landscape in CRE is changing, and the firms and professionals who own their own structured data will have a meaningful advantage.
On building your own intelligence hub: it is more accessible than it sounds. Claude Code can help you scaffold the architecture. MongoDB is the right data store. AWS handles hosting. If you have a use case and want a technical conversation, the CRE Agents team is the right starting point.
On becoming AI-native if you are early in your career: tinker. Pick a tool, build something, check the output, improve it. Spencer gave his new intern a Replit account and a Claude account and said build something, I will look at it tomorrow. That is the fastest path to fluency.
On data governance at scale: Spencer deferred to Ann Hollander as the expert, but the short answer is that AI has lowered the barrier to institutional-grade data architecture so significantly that if your endgame requires it, you might as well start there rather than building a simpler MVP first.
The takeaway from Workshop #4 is direct: the prompt matters, the skill matters, but if the data is not there, you cannot trust the output. Fixing the data problem is not a technical project for the IT team. It is a practical workflow decision that any CRE professional can start on today with a free Airtable account, an MCP connector, and the comps you already have sitting in an Excel file.
- You might also enjoy: The Multiplier Framework overview and free app: Double Your Output with AI in 2026: The Multiplier Framework for CRE
Frequently Asked Questions about The Multiplier Framework Workshop #4: Give AI the CRE Intelligence It Needs
What is the Optimal Output Framework?
The Optimal Output Framework breaks every AI output into three ingredients: instructions (the prompt, specific to the task), tools (the AI’s ability to act on external things like web search or code execution), and knowledge (which includes both skills, stable methodology the AI follows every time, and data, live deal-specific information). Most people focus entirely on the prompt. This workshop focused on data, which Spencer argues is the single biggest gap in CRE AI today.
Why can't I just use raw Claude or ChatGPT for CRE analysis?
Raw AI has two sources of data: its training corpus (which includes Reddit, market reports, and web content of varying quality) and live web search. Neither gives it access to your comps, your actual submarket transactions, radius-level demographics, or a full SOFR forward curve. It will produce outputs that look like analysis and are often directionally wrong in ways that are hard to detect without the real data to compare against.
What is Airtable and why did Spencer use it for comps?
Airtable is a cloud-based database platform that makes production-grade databases accessible to non-developers. Unlike Excel, it can store large amounts of data and query it near-instantly. Spencer chose it for this workshop because it has a native MCP connector to Claude, ChatGPT, and Manus, meaning AI can read, query, and write to the database directly from a conversation without any custom code.
What is MCP and how is it different from an API?
MCP stands for Model Context Protocol. Like an API, it connects AI to an external tool or data source. The difference is that MCP is a protocol specifically designed for AI to speak to tools more easily than a standard API integration. Many platforms, including Airtable, now publish MCP servers that let you connect Claude, ChatGPT, or OpenClaw to your data with minimal setup.
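Under the hood, MCP messages are JSON-RPC 2.0 requests. The envelope shape below follows the Model Context Protocol specification; the tool name and arguments are hypothetical stand-ins for what an Airtable-style comp query might look like.

```python
import json

# A hypothetical MCP tool invocation as it would appear on the wire.
# "tools/call" is the real MCP method; the tool name and arguments are invented.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "search_records",                              # hypothetical tool
        "arguments": {"table": "Sale Comps", "filter": "Kent Valley"},
    },
}
print(json.dumps(request, indent=2))
```

In practice you never write these payloads yourself; the AI client generates them. That is the point of the protocol: the model discovers available tools and calls them natively, with no custom integration code on your side.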
What is the A.CRE Intelligence Hub?
The A.CRE Intelligence Hub is a proprietary MCP server built by A.CRE and CRE Agents. It delivers four live data feeds (rates and capital markets, radius-level census demographics, employment and labor data, and residential permits) plus skills built for the A.CRE Accelerator curriculum. It is available to AI.Edge Pro members and Accelerator members and connects directly to Claude through the connectors menu.
How does the Intelligence Hub produce radius-level demographics?
Census data is reported at the census block level, not the radius level. Producing a true radius requires pulling all census blocks within a given ring and applying a weighting calculation to estimate the value for that area. That process is built into the hub, which is why raw web search cannot replicate it. The hub also returns percentile ranks comparing any given radius to every equivalent radius in the country.
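The weighting idea can be sketched in a few lines. Each block contributes in proportion to the share of it that falls inside the ring; here that share is supplied directly as `overlap`, whereas a real implementation would compute it from block geometry. All numbers below are hypothetical.

```python
def radius_aggregate(blocks):
    """Population total and population-weighted average household income
    for the portions of census blocks that fall inside a radius ring."""
    pop = sum(b["population"] * b["overlap"] for b in blocks)
    income = sum(b["population"] * b["overlap"] * b["med_hh_income"] for b in blocks)
    return pop, (income / pop if pop else 0.0)

# Hypothetical blocks: one fully inside the ring, one straddling it, one outside
blocks = [
    {"population": 1200, "med_hh_income": 95_000,  "overlap": 1.0},
    {"population": 800,  "med_hh_income": 110_000, "overlap": 0.4},
    {"population": 500,  "med_hh_income": 80_000,  "overlap": 0.0},
]

pop, avg_income = radius_aggregate(blocks)
print(round(pop), round(avg_income))
```

A web search cannot do this because it never sees block-level geometry, only whatever pre-aggregated city or county figures happen to be published.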
How do I build my own comp database?
Start in Claude. Ask it what fields your comp database should include for your asset type, refine the schema in a back-and-forth, then hand the final prompt to Airtable’s built-in AI (called Omni) to auto-generate the columns. Populate it with your existing Excel comps, data from public records in full-disclosure states, or OMs you have received. Once built, enable the Airtable MCP connector in Claude so AI can query and update it from any conversation.
Can AI write back to my comp database, or only read from it?
Both. With the Airtable MCP connector enabled, Claude can query existing records, analyze them, and save new records back to the database. In the workshop, Spencer asked Claude to save the subject property as a new rent comp after completing the analysis, and it did. This means your database grows automatically as you run deals through it.
What data providers in CRE have open APIs I can connect to AI?
Spencer mentioned Hello Data as a provider with an open API that subscribers can connect directly to their AI. CoStar is unlikely to open its data in this way. For proprietary or paid sources, they need to expose either an API or an MCP server for the connection to work. The data landscape is shifting, and more providers are opening up as AI adoption accelerates.
Where can I go to learn more or get help implementing this?
For community learning and hands-on AI training for CRE professionals, join AI.Edge, the largest community of CRE professionals building with AI. For custom implementation, including connecting AI to your existing data sources and workflows, reach out to the CRE Agents team.