Your 30-Day AI Challenge – From Curious to Capable | S6E5
In this episode of the Adventures in CRE Audio Series, the team issues a practical challenge: become AI capable in just 30 days. Sam sets the stage for listeners who feel AI is “too techy” or overwhelming, and Spencer outlines a step-by-step roadmap that anyone—regardless of background—can follow to build real AI fluency. From understanding core concepts like context windows and tokens to experimenting with tools like Vercel’s V0 and custom GPTs, the challenge is all about moving from passive curiosity to confident action.
The 30-day framework includes building for yourself, building for others, and teaching what you’ve learned—unlocking not just proficiency, but also influence. If you’ve been waiting for the right moment to start, this is it.
Watch, listen, or read this episode to get a preview of what’s ahead in Season 6 of the A.CRE Audio Series!
Your 30-Day AI Challenge – From Curious to Capable
Or Listen to this Episode
Resources from this Episode
- A.CRE Real Estate Financial Modeling Career Accelerator
- Education in Real Estate
- A.CRE Audio Series
- Defend your relevance and expand your advantage in CRE. Thrive in the era of AI with AI.Edge: https://theCREedge.AI
Episode Transcript
Sam Carlson (00:08):
All right. I am going to start this one out a little bit different. I don’t know if we’ve ever done this before in any episode, at least any episode I’ve been a part of. Okay. I’m going to issue a challenge, but I’m going to provide a backdrop first.
(00:22):
There is this concept—whenever something is techy or whatever, I feel like tech is the biggest barrier, the thing that makes people throw their hands up and say, oh, I can’t do that, you can’t teach an old dog new tricks. I’m going to throw that out as the backdrop to this challenge: many people in the audience listening right now have listened to this, and they agree, yes, AI is important, but it’s too techy. It’s too techy for me. Even you guys talking about all this stuff, it just feels too techy for me. So I’m going to place a 30-day timeframe on this episode, and I want the collective group of us to map out what every person can in fact do over a 30-day timeframe to become—I don’t know if you’d call it AI proficient, and it wouldn’t necessarily be AI native—
(01:23):
That’s going to take more—but AI capable, in a 30-day timeframe.
Spencer Burton (01:28):
What’s cool about this is that the bar is not particularly high yet. Especially in our industry, if you sat 10 people down in a room and those who are listening did a handful of things, you would be in the top 10% of AI competence—let’s use that word for this conversation—in that room. Okay? So the bar today—now this isn’t always going to be the case—but today the bar is relatively low,
Sam Carlson (01:59):
And so if you want to look like the fancy guy or gal in your office,
Spencer Burton (02:03):
It’s an opportunity for sure,
Sam Carlson (02:05):
And
Spencer Burton (02:05):
It’s not just looking like it. You can take your output and, just simply with the tools that exist today, at least 10x it, and at some point in the very near future you’re going to be able to 100x or even 1,000x your output, which is massive
Michael Belasco (02:21):
To say in 30 days you can 10x—I mean, it’s real.
Sam Carlson (02:25):
So I’ve placed the challenge, what are the ingredients and steps that people need to look at in order to accomplish this?
Spencer Burton (02:35):
Well, so this is one of the challenges—there’s no accelerator program for this, and
Sam Carlson (02:44):
There really can’t be because it’s going to change
Spencer Burton (02:47):
Constantly,
Sam Carlson (02:48):
Constantly.
Spencer Burton (02:48):
It’s constantly changing. So you can’t have a static A.CRE accelerator program for this sort of thing, although there can and should and will be a program to deliver this sort of capability. And when that program happens, the bar gets raised a bit, because the complexity of being that top 10% in the room increases—there are resources for it then. Today, there are not really resources. So if you’re listening to this and you have an interest in being that one out of 10 people in the room who is the most AI competent in your group—let’s, again, let’s further define AI competent. We’re talking about commercial real estate professionals. We’re not talking about technical people. So this is not becoming a machine learning expert. This is not understanding transformers. This is not understanding the underlying models themselves. You’re not pre-training large language models. What you are doing is understanding the core components of the technology, in the same way that none of us in this room really understands electricity—let’s be honest—but I think we all consider ourselves experts. We’re highly competent at using electricity
Michael Belasco (04:02):
With the guardrails in place,
Spencer Burton (04:04):
With the guardrails in place, and we know what not to do and what to do, and we know how to leverage electricity. In fact, Andrew Ng—the famous, I think he’s a Stanford professor—made the point that AI will effectively be the new electricity, a utility, and I think that’s absolutely true. You don’t have to know how electricity exactly
Sam Carlson (04:29):
Works. You just need to know that you plug the thing in there and then this works
Spencer Burton (04:32):
To be highly competent, to be that one out of 10 people in the room, you ought to have some tricks up your sleeve. You ought to know how to use electricity in special ways that others don’t. Or you’re the person who, when the electricity goes out, knows where the breaker box is, knows how to flip it. You’re either the electrician or
Michael Belasco (04:51):
You can flip the breaker, which is magic in my house. I get out there—
Spencer Burton (04:55):
I’m the man. We all laugh—it’s true. And again, the bar has been sufficiently raised that to be highly competent in electricity, it takes more than just knowing how to flip a breaker. But you get the analogy. Okay, so at this point in time, June 2025, how do you become that one out of 10? The bar is relatively low. Here’s what you need to do. This assumes that you already have your large language model of choice. So you’re using a ChatGPT or a Claude or a Gemini, or you’ve locally installed a Llama instance and you’re chatting with it, okay? So you have that baseline. You generally understand how to prompt. But now you need to get into—so first there’s a knowledge piece here. You need to understand a context window, okay? All right. You need to understand tokens. You need to know the difference between 10,000 tokens and a hundred thousand tokens, and what 10,000 tokens is. Okay? So you need to know that. That’s just some basic knowledge. It’s like, okay, I need to understand AC versus DC and 120 versus 240. Okay, whoa, slow down. Electric 220 equals a hot tub or a washer and dryer. Yeah,
Sam Carlson (06:20):
I don’t really know a hot tub.
Spencer Burton (06:23):
Okay, so that’s the first step. It’s like learn some of these basics. Okay,
Sam Carlson (06:29):
List them real quick—say them again real fast.
Spencer Burton (06:31):
Well: context window, tokens, you need to understand the different types of generative models. Actually, you need to understand what artificial intelligence is. Artificial intelligence is, in essence, machines doing things that people do. And so we use “AI,” but AI doesn’t mean much on its own—in most conversations in 2025, AI refers to generative AI and perhaps physical AI. We talk like that.
Michael Belasco (06:55):
It’s like machine learning. Now it’s moving into this generative AI,
Spencer Burton (07:00):
But it’s understanding what AI is, the different types of artificial intelligence.
Sam Carlson (07:06):
So when AI came around—just to, I wanted to get your take on this. I know the answer to “what are context windows” and all this is: Google it, figure it out. I know that. But artificial intelligence, the way it was explained to me when it first came around, is that it was predicting language. And so we all know if I say peanut butter and..., jelly—that is a pattern that is repeated out there often enough that if you start to model all of human language, those patterns already exist. And if you put enough compute behind it, it can pretty much take every article written about a particular subject, assemble it, and then write it in different types of language. And that’s kind of where it
Spencer Burton (07:54):
Started, and it’s seen enough scenarios—billions and billions of scenarios—that it knows that jelly comes after peanut butter. That’s right. For this exercise—to be that one in 10, one out of 10 in the room in 2025—we’re talking about large language models; in that context, words. There are other types of generative models. You have text-to-image, you’ve got multimodal where you can do text-to-anything, and depending on the context of the prompt, it will produce a video, it’ll produce an image, it’ll produce audio, it’ll produce words. When we say AI, we’re really talking about generative, and generative is predictive. It’s: what should the next pixel be, if it’s producing an image? Or what’s the next frame—what should the next series of pixels that make up a frame be, in the case of video? In words, what is the next letter,
Michael Belasco (08:48):
Right?
Spencer Burton (08:49):
Well, what’s the next four letters—a token is approximately four letters of a word. So, some of these basic things. Now, the reason why the baseline is that you need to have a large language model is that you don’t need to go ask anyone this, because you have your own incredibly smart partner here that you can ask these questions to. So I’m now going to give an increasing list of things, and you’re going to go, well, how do I do that? You know how to use ChatGPT, okay? So you don’t ask me. So once I have a baseline of knowledge—and that’s a couple hours of conversations with ChatGPT. IBM Technology has this incredible series of videos on YouTube that are meant for professionals like us. They’re short and they’re really well done. So you find your source that will get you to that baseline of knowledge. That would be day one.
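To make the token numbers above concrete, here is a minimal sketch—assuming the open-source tiktoken library and an OpenAI-style encoding—of how you could count tokens for yourself:

```python
# A minimal sketch of counting tokens, assuming the open-source tiktoken library.
# Token counts vary by model; this uses a common OpenAI-style encoding for illustration.
import tiktoken

encoding = tiktoken.get_encoding("cl100k_base")

text = "Net operating income divided by the cap rate gives you value."
tokens = encoding.encode(text)

print(f"Characters: {len(text)}")
print(f"Tokens: {len(tokens)}")  # roughly four characters, or about 3/4 of a word, per token
print(f"Approx. words in 10,000 tokens: {10_000 * 0.75:,.0f}")
```

At roughly three-quarters of a word per token, a 10,000-token context window holds on the order of 7,500 words, while a 100,000-token window holds a short book.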
(09:41):
Now you’ve got 29 days left in this. Then it’s about tinkering. In June 2025, it’s about tinkering: A, because the courses largely don’t exist out there, and B, the courses that do exist become obsolete in a matter of months. It reminds me of when I was a kid, when you needed a new computer about every six months because the technology was moving so fast. I thankfully had a father who was very much into, hey, let’s make sure we have the latest thing. And so every six to 12 months—and we were not a rich family, my dad worked for the government—but he’d find a way to get us a new personal computer, and we constantly tinkered. I just grew up tinkering. Me and my brothers, we just tinkered. And so that’s why this AI thing is so exciting. You need to develop a skill to tinker. Tinkering means: okay, there’s this whole universe of possible AI things—what are they? Google it, okay? This universe of AI things—start tinkering with them. Tinkering means: I wonder if it could do something that would be of value to me.
(10:49):
So everyone listening to this should know what V0 is. Okay? You go to v0.dev, launched by a firm called Vercel. Vercel created something called Next.js, which is essentially the engine that powers much of what we view on the web, for lack of a better term. It’s an open-source project. And so Vercel—or at least its founder—creates this Next.js thing and then launches this tool called V0. And what V0 is, is a large language model that produces code, one that has been tuned to produce code specifically for the front end—the visual development. And so anyone listening to this can go to v0.dev and type in: I want a website that calculates a value by taking net operating income and dividing it by the cap rate, and I want it to look like this other website, or I want it to look like this, or, hey, here’s a picture of what I want it to look like.
(12:05):
Or, hey, I created a Figma design, I want it to look like that. Whatever the instructions are, and in a matter of minutes you can watch it as it writes all the code, and then up will come your website. Okay? That’s an example of a tool, and there are dozens and dozens and dozens of these sorts of tools. And you go, well, what do I use that for? That’s not my job, that’s yours. Okay? You know what you do on a day-to-day basis better than I do, but you have things that you do every day—how might that be useful to you? In a real estate context there are things, but it doesn’t even necessarily have to be real estate, because it’s about tinkering at this stage. And so what do you find fun? It’s like: I’ve got a fantasy football league, and what would be cool is to display the results every week on our own custom web app. You can build that right now. Someone listening to this, on day three of this 30-day thing, using V0 plus a Supabase backend—you can build that, okay?
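For reference, the calculation behind the website Spencer describes is simple direct capitalization; a minimal sketch of the math that a tool like V0 would wrap in a front end might look like this (the function name and example figures are illustrative):

```python
# Direct capitalization: value = net operating income / cap rate.
def value_from_cap_rate(noi: float, cap_rate: float) -> float:
    """Return estimated property value given annual NOI and a cap rate (as a decimal)."""
    if cap_rate <= 0:
        raise ValueError("Cap rate must be positive")
    return noi / cap_rate

# Example: $500,000 of NOI at a 5% cap rate implies a $10,000,000 value.
print(f"${value_from_cap_rate(500_000, 0.05):,.0f}")
```

The point of the V0 exercise isn’t the math—it’s that the model writes the entire web app around logic this simple from a plain-language prompt.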
(13:04):
You can play with Lovable. Lovable is a competitor to V0. Its capabilities are slightly different, but effectively it’s a competitor. You can build web apps with Lovable. Bolt.new is another one where you can do it. There are others that are popping up. Now, most of the website builders, in order to compete, are having to come up with their own AI builders. So Framer now has its own AI wireframer—Framer is a tool to build really cool websites—and you can now build a website with Framer’s AI tool. So you no longer have to know how to use something like Framer; you use their AI tool. So the next step is: scour the world of AI tools, find five that look interesting to you, begin tinkering with them, and create something that is valuable to you and only to you. Don’t worry about anyone else—just you. Okay? So call that day two to day 15: you’re tinkering. Now, the last 15 days are about building tools that are valuable to others. And why this is important is that now you’re thinking beyond your specific use cases. You’re putting yourself in the mind of others, which requires a different level of thinking. You’re also putting yourself in the role of a teacher, because from day 15 to day 25 you’re building for other people, and from day 26 to day 30 you’re teaching others how to use it.
Michael Belasco (14:31):
Can I back up and ask you a question about tinkering? All the things you mentioned—it sounds like the recommendations thus far are to build apps, it feels like. So what is that like? You go to Lovable, you go to—I mean, these are all things which, by the way, we could never do before at the touch of
Spencer Burton (14:52):
A product. Yeah. Well, let me give you a few other examples. So n8n is a workflow builder, okay? Now, it’s not exactly generative AI, but you can now integrate AI into it, and Zapier now has the ability to integrate AI into a workflow. Why I like the coding AIs is because you’re truly, simply prompting an output, whereas with something like an n8n or a Gumloop or a Lindy you’re creating automations—and they’re valuable, and I’m not saying don’t do that, but it’s not AI in the way most of us think of it. Most of it’s not generative; most of it is very linear: this happens, then that happens. It’s more like building an Excel model, which is good, but that’s not the next step, the next level of where you’re going. So I would recommend more of a generative experience.
Sam Carlson (15:49):
Can I ask you a question?
Spencer Burton (15:50):
Yeah, sure.
Sam Carlson (15:53):
By the way, I think all the recommendations are great. I’m wondering, in your opinion, before jumping into a Lovable or into v0.dev, does it make more sense to start out trying to make a GPT or a Gem or a
Spencer Burton (16:13):
Project or something like that? Well, yeah—that’s why I say the universe of things that you could tinker with right now feels endless. And those are the ones I like to tinker with because you see really amazing things at the end of it. But yeah, you could absolutely build a custom GPT or a Claude Project.
Sam Carlson (16:32):
The benefit of building one of those things is, if you go through and you’re having conversations with ChatGPT and you say, hey, I want to make a GPT, walk me through the process, it’s going to teach you: okay, first we need to assemble a knowledge base for this one thing, then we need to define what it is so we can create the instructions—and that’s the prompt. So, okay, every GPT, if it’s a good one, for the most part has a knowledge base and has a prompt. I put those together, and then once I click create, I can do things with it. I did that, for example—I was tired of writing emails for our sales team, so I said, okay, I’ll create one. It’s called Write Like Sam, and I gave it a Google Drive folder of, I don’t know, maybe 20 or 30 emails I had written. I had it define my style, I had to do all these different things, and then boom—now I don’t write emails anymore.
Spencer Burton (17:36):
So that is valuable—that is valuable, and I think that would be a worthwhile tool. My recommendation, though, if you want to be that one of the 10: what you’re doing when you build a custom GPT is you’re essentially building an agent. You’re providing it unique instructions—we call that the system prompt. You’re giving it some unique knowledge; in the case of a custom GPT, you can upload up to 20 files. And you perhaps give it some of what, in a custom GPT, they call actions—Claude doesn’t have this, as far as I’m aware—where you can connect it to outside resources. In the context of the agentic world, that’s called a tool.
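As an illustration of the point that a custom GPT’s instructions are really a system prompt, here is a minimal sketch of the same pattern expressed through the OpenAI Python SDK; the model name, prompt text, and question are placeholders, not something prescribed in the episode:

```python
# A minimal sketch of the custom GPT pattern outside the GPT builder:
# a system prompt (the "instructions") plus a user message, via the openai Python SDK.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

SYSTEM_PROMPT = (
    "You are an underwriting assistant for a commercial real estate team. "
    "Answer only from the knowledge you are given, and show your cap rate math."
)

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative model choice
    messages=[
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": "What's the value of $500,000 NOI at a 5% cap rate?"},
    ],
)
print(response.choices[0].message.content)
```

Swapping in different instructions, attaching knowledge, or wiring in tool calls is what turns this bare call into the kind of agent Spencer describes next.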
Sam Carlson (18:09):
When you say that—GPT can do that, but through the API?
Spencer Burton (18:13):
You can give your custom GPT access to third party resources
Sam Carlson (18:18):
You’re talking about,
Spencer Burton (18:19):
They’re called actions in a custom GPT.
Sam Carlson (18:21):
Give me an example. Like Google Drive or
Spencer Burton (18:23):
Yeah, let’s say that you want your custom GPT to put the output of its results into a Google sheet.
Sam Carlson (18:31):
Got it. Okay.
Spencer Burton (18:32):
So you could connect it to a Google Sheet using an action. In an agentic context, those are called tools.
Sam Carlson (18:37):
Got
Spencer Burton (18:37):
It. Okay. And so that would absolutely be worthwhile. What I would recommend is that you go get a DigitalOcean account—it’d actually be cheaper, believe it or not. Go get a DigitalOcean account and create your first AI agent. It’s the same thing, but it’s a bit more powerful, and the tinkering in that requires you to maneuver with a bit more nuance. And in that nuance, you’re going to learn a lot. You’re going to learn more, because you’re in such a guarded environment with a custom GPT that you often lose sight of things—you don’t get that the instructions are actually feeding a system prompt. When you build an agent, say with DigitalOcean, you control the entire system prompt. So you need to do things like: I’m going to put jailbreak protection in, because I don’t want people to access my knowledge base; I’m going to put some moderation in—OpenAI does that for you.
Sam Carlson (19:31):
And that’s what you can do—and the way that you figure that out is you have a conversation over here with ChatGPT and say, I am doing this thing over here—
Spencer Burton (19:39):
Walk me through—Spencer says go create an AI agent with DigitalOcean. You don’t need to come and ask me how to do that. You’ve got ChatGPT. Go ask it.
Sam Carlson (19:47):
Say,
Spencer Burton (19:47):
Hey, I want to create an AI agent with DigitalOcean—and turn on web search, because these things are happening so quickly. You want ChatGPT to access the web to answer that question, not its own knowledge base, which is outdated. But yeah, you create your first AI agent. It’s incredibly simple. When you do it the first time you’re like, there has to be more to it. No, no, there’s not more to it. But then it’s like, okay, well, I want it to have some special knowledge. Well, okay, if I want some special knowledge, then I need to give it a knowledge base. In order to give it a knowledge base, I need to parse the knowledge—put that knowledge in a format that my agent can fully understand. Well, I then go to ChatGPT: hey, what are my options for this? Well, I could just simply upload it into a knowledge base with DigitalOcean. It’ll index that, but it’s going to index whatever I give it. What if I could parse it in a way that would be more readable? And so then all of a sudden I am over at LlamaIndex uploading a PDF file and parsing it in a way to make my output slightly better. And you say, what does parse mean? And I say, organizing it, again, in a way for—I thought you were going to say go ask ChatGPT.
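As one way to picture the parsing step Spencer mentions, here is a minimal sketch using LlamaIndex’s LlamaParse service; it assumes the llama-parse package is installed, a LLAMA_CLOUD_API_KEY is set in the environment, and the PDF file name is hypothetical:

```python
# A minimal sketch of parsing a PDF into cleaner, agent-readable text with LlamaParse.
# Assumes the llama-parse package is installed and LLAMA_CLOUD_API_KEY is set;
# "offering_memorandum.pdf" is a hypothetical file name.
from llama_parse import LlamaParse

parser = LlamaParse(result_type="markdown")  # markdown keeps tables and headings readable
documents = parser.load_data("offering_memorandum.pdf")

# Each parsed document's text can then be dropped into an agent's knowledge base.
for doc in documents:
    print(doc.text[:500])  # preview the first 500 characters of parsed text
```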
Michael Belasco (20:54):
And say—but that’s the thing. So there’s a prompting to it too, which is: if you’re building this on DigitalOcean and you need access to various things, that’s a prompt to ChatGPT—I need to know what’s the best LLM, this is what I want to do. You’re saying go to LlamaIndex and have it—it’s,
Spencer Burton (21:14):
Yeah. It’s like, okay. And so you’ll get better and better at using ChatGPT or any of these large language models for these sorts of exercises simply by doing it. I will say, just for all of
Sam Carlson (21:27):
The can’t-teach-an-old-dog-new-tricks people: my first things that I created were not good. They were fine, they were impressive—I was like, oh wow, that’s fancy, that’s really good—but none of them do I use today. But now anything that I create is really good, and it all came because you’re
Michael Belasco (21:58):
On your way to your 10,000 hours. Yeah. You just keep tinkering.
Sam Carlson (22:00):
Yeah, 10,000 hours. And I think it’s funny, because a lot of these solutions that you’re talking about—becoming AI capable, or what are we calling it? Is it capable?
Spencer Burton (22:13):
Well, AI native is what we
Sam Carlson (22:15):
Aim for, but we’re talking about the next
Spencer Burton (22:17):
30 days. Yeah. The one out of the 10 right now is going to be, in most groups of 10 in commercial real estate, the one out of 10 that’s AI capable.
Sam Carlson (22:25):
You’re going to find out that AI is a lot more intuitive than it is complex, right? Meaning, Hey, I can ask this thing anything. Oh, I’m going to ask it how to do this
Spencer Burton (22:37):
Over. And I assume if they’re listening to or watching this, they’re at that level—you are absolutely at that level already, and many may already be tinkering. This tutorial, for lack of a better term, is for those who have an interest in becoming the one out of 10 in the room, and if there were a title for this episode, it’d be “Tinkering Your Way to AI Mastery” or something like that. Because right now we are in a place where that’s how you get there. By the way, when you look at any wave of technology, the people who win are those who were there at the very beginning.
Sam Carlson (23:17):
Yeah.
Spencer Burton (23:17):
First—before there were courses, before there were degrees, before there were experts—and they became experts through tinkering. I love the story of Bill Gates. The reason he became what he was is that he was at one of the only high schools that had a computer.
Michael Belasco (23:36):
No, he went to—I think one of his parents was a professor, forgive me if I’m getting this wrong, at the University of Washington. And he would go, him and his friends, every day after school, I believe.
Spencer Burton (23:47):
So I may be getting it wrong, but the point is, he had access to a computer—other kids in his peer group had access to a computer too—but he chose to go and tinker every single day after school. He’d go and he’d tinker. And so we are at a moment right now where those who have the interest in tinkering have a massive advantage.
Michael Belasco (24:08):
Alright, so I’m here. I’m listening to this episode. I got my notebook, my pencil. I’m trying to get through my one through 30 days. Let’s give a recap if we can, of where we started on our journey.
Sam Carlson (24:23):
Wherever you’re listening, or if you are watching this, there will be a written recap of it so you can get the list. But go ahead.
Spencer Burton (24:29):
No, and look, this isn’t like a recipe where if you put in a cup of sugar instead of a half cup, it turns out wrong. But your first day or two is just: get a baseline on the tech, the key components
Sam Carlson (24:45):
And get a paid account. Get a paid ChatGPT account.
Spencer Burton (24:48):
Well, yeah, that’s kind of obvious. Then, call it day three through five is getting an understanding of what tools are out there, beginning to just tinker, and finding out which ones are most approachable or intuitive for me. The next 10 days are a tinkering phase where nothing works, but you’re trying to build something for yourself. And at the end of that tenth day, you’ve built something for yourself that’s cool. It may not be useful to anyone else—probably isn’t—but you’ve built something for you that is helpful, and hopefully you’ve built several things in different tools so you have some breadth. The next 10 days—that gets you through day 25—are building for others. So that’s now me understanding Michael’s requirements. And because I now have a sense of what tools are out there, I can take Michael’s requirements, I can pick a tool, I can try to build with it, it won’t work, I’ll try a different tool, maybe it will work a little bit, I’ll play around. And I’ve also gone to Sam, and Sam has a different use case because he comes from a different place, so I’m building tools for a couple different people. And then at the end of day 25, now it’s my job to come back to you guys and teach you how to build a tool. The last days—the last five—are teaching. And at the end of that 30 days, I’ve tinkered, I’ve built something that’s useful for myself, I’ve built something useful for you, and then I’ve taught you how to build it yourself.
Sam Carlson (26:21):
There is something pretty magical that happens when you teach something. That’s something that not many people talk about, but as soon as you put it out in words for whatever reason, it just connects a lot of dots.
Spencer Burton (26:33):
Well, in that room of 10 people, by the way, first you’ve built a competence, but also, by the fact that you taught other people in the room how to do it, you are now, in their eyes, the expert.
Sam Carlson (26:45):
Yeah, you’re the pro. There you go—there’s your 30-day challenge. I think it’s—well, first off, this is a fun episode, because if you’re pulled over on the side of the road taking notes, that’s totally awesome. I would encourage everybody to go do it after this.
Michael Belasco (27:01):
We’re going to run this transcript through ChatGPT and then have it produce the best. Yeah,
Sam Carlson (27:07):
That’s right. It’s funny—just as a final little thing, whenever I create a YouTube video, I have this little add-on, this plugin in my browser. I click the button, it gives me a summary of the entire episode or video I just shot, and it gives me timestamps. I just copy and paste, put it over in the detail section of the video, and boom, I have chapters. That used to take forever—you’d have to be like, okay, here I talk about this, there I talk about this, and put it in this
Michael Belasco (27:37):
Format. I don’t do that. I’ll copy and paste—boom, done. Remember the first time we did the A.CRE Audio Series, when we were recording, you were taking notes and you were
Sam Carlson (27:48):
As
Michael Belasco (27:48):
We were doing
Sam Carlson (27:48):
It. I was like, yeah, because I was like, we’re going to end up recording hours and hours and hours of this, and I don’t want to go back through it. And so I had a notepad, and I was like, okay, at minute 7:36, this thing.
Spencer Burton (28:04):
I do remember that. It’s funny—that was only five years ago. That was only five years ago. Crazy. So imagine where we’ll be five years from now. Incredible. All right, it’s a good one. See you on the next episode.
Announcer (28:16):
So thanks for tuning into this episode of the Adventures in CRE Audio Series. For show notes and additional resources, head over to www.adventuresincre.com/audioseries.