
Becoming AI Native – Mastery, Multipliers, and the Future of Work | S6E3

In this episode of the Adventures in CRE Audio Series, the team explores one of the most important traits for professionals in the AI era: becoming AI native. Spencer defines the term and outlines what sets apart those who will truly thrive—from instinctively using AI tools to knowing which model is best for the job and how to assemble custom solutions. Michael and Sam dig into the balance of art and science, the value of mastery, and how to multiply your output with the right tools and mindset.

Through powerful examples—from CRE software to building an AI tutor based on seven years of Accelerator Q&A—the trio paints a compelling picture of how AI fluency can unlock scale, creativity, and career-defining opportunities. Whether you’re just starting to tinker or already deep in the build phase, this episode is packed with practical takeaways.

Watch, listen, or read this episode to get a preview of what’s ahead in Season 6 of the A.CRE Audio Series!


Becoming AI Native – Mastery, Multipliers, and the Future of Work

Or Listen to this Episode



Resources from this Episode

Episode Transcript

Sam Carlson (00:08):

One of the fun, cool things that we get to do when we do these audio series is we get together in person and have a lot of banter, just kind of brainstorming and brain dumping about different things. And one of the things that I heard you say, I was actually listening to you talk to one of our interns on a phone call, was a term that I had heard but didn't really pay much attention to until I heard you share the context around it. The phrase is "AI native." And I was like, oh, that's really cool, because it describes what you need to aspire to be to win in this. I was also listening to the All-In podcast, great podcast; those guys have some pretty interesting takes on all this stuff, and they're all tech guys, so obviously they're talking about AI, and that term AI native came up, and how important it is to aspire to become that. So what does it mean to be AI native? Give me some context here so people that are listening can be like, oh, this is visually what I need to be looking at doing.

Spencer Burton (01:30):

Yeah. So let me present a thesis again if I could. This

Sam Carlson (01:34):

Is beautiful,

Spencer Burton (01:36):

And that gets to this concept of someone developing AI native traits. So the thesis is that fundamental change means certain people will win; in fact, there will be a 10 to 20% group of people in our industry who are really going to seize this and win. Okay? Those are the ones who will have the mindset, develop the right methods, and ultimately master these capabilities. I would describe those individuals as AI native. And the term AI native, to me, means that before I do something, it's not even a thought, it's instinctual: I go to do something, I grab my favorite AI tool for that specific thing, or I've already taught my favorite AI tool to do that thing. And my role is simply to ensure it's been done, and it's been done right.

(02:43):

That’s

(02:43):

AI native, it’s instinctive. It’s not a, oh, what tool am I supposed to use? It’s just you naturally either grab it and use it or it’s automated in a way that it’s doing it anyway.

Sam Carlson (02:59):

You know how to assemble solutions, right? Because if you're in a context of, hey, I want to build something, solve a problem, whatever it is, you identify what the situation is and you're like, okay, assemble this thing with this, put 'em together, and now I have a way to do this. Now I'll give an example of watching you, Spencer, do some pretty AI native stuff. So yesterday we were just tinkering, and you were using, I think, two LLMs and then a builder. You had the LLMs talking back and forth to each other, then you took their combined output and copied it into the builder. And I was like, I've never seen anybody put the two together like that: make this one talk this way, have that one do that, and then make them agree on an architecture that you could then take and put into this other thing.

Spencer Burton (03:54):

And it's an insufficient solution, but there's no better solution right now. So the task is, we were building something, and one aspect of building that thing had a certain tool that was better for it. And then at the same time, I have my unifying large language model that now has years of knowledge about me, and so there are certain things I want to use it for in order to merge these two things together. Which, by the way, is why I now have two computers. Sam has seen my office; I now have two computers.

Sam Carlson (04:34):

Yeah. I go, what’s this computer over here for? It’s like, oh, well, I’m working on this one. And that one’s working by itself.

Spencer Burton (04:41):

Are you serious? Well, I think many of the listeners might run into this: you do a deep research project, it takes what, three to five minutes, and at the same time you don't want it to clog up your desktop space. And sometimes if you're running a model locally, it's also going to take up memory. So having a second computer, and these things will improve, but right now I find it worthwhile to have two different computers running.
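For readers who want to picture the back-and-forth Sam describes, here is a minimal sketch of two models critiquing each other's proposal until they agree, with the result then handed to a builder. The `model_a` and `model_b` functions are hypothetical stand-ins; a real setup would call two different LLM APIs.

```python
# Sketch of the workflow described above: two models bounce a proposal back
# and forth until they produce the same answer, then the agreed architecture
# goes to the builder. Both model functions are illustrative stand-ins.

def model_a(prompt: str) -> str:
    # Stand-in: a real implementation would call an LLM API here.
    return "architecture: v2" if "v2" in prompt else "architecture: v1"

def model_b(prompt: str) -> str:
    # Stand-in: this model always pushes toward v2.
    return "architecture: v2"

def reach_consensus(task: str, max_rounds: int = 5) -> str:
    """Bounce a proposal between two models until they agree."""
    proposal = model_a(task)
    for _ in range(max_rounds):
        counter = model_b(f"Task: {task}\nCurrent proposal: {proposal}\nRevise or agree.")
        if counter == proposal:
            return proposal   # agreement reached; hand off to the builder
        proposal = model_a(f"Task: {task}\nCounter-proposal: {counter}\nRevise or agree.")
    return proposal           # best effort after max_rounds

print(reach_consensus("design a CRE data pipeline"))
```

With these stand-ins, the two models converge after one revision; with real LLMs the loop would usually need a judge or a round limit, as here.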

Michael Belasco (05:09):

Workstations. Yeah, workstations. Yeah. So that’s like the advanced mode of

Sam Carlson (05:15):

Honestly, I think maybe advanced, but honestly, once I saw him doing it, I’m like, oh, I could do that.

Michael Belasco (05:20):

Yeah, well, it's being exposed and then seeing that. But you could even take it further. I'm using words that are probably extreme, super advanced versus basic. Even understanding how to utilize the LLMs in the right way right now, that's basic, right? But that's not basic for a lot of people. Take deep research: maybe there's something you need to study and brush up on real quick, for students out there. You might not necessarily want to use ChatGPT's Deep Research to help you study directly, but you do want it to aggregate the information using Deep Research, and then you pull that into NotebookLM. That's a very basic, I would say rudimentary, form of utilizing this. But becoming native is something that

Spencer Burton (06:07):

It's fully understanding the strengths and weaknesses of this technology that is now your superpower. What's so cool about this is you all of a sudden have all of the latest tools. You're the super spy, and you have this lab that's creating superpower suits and gadgets you can use to 100x your capabilities. That is so real today. But the first step is understanding the strengths and weaknesses of the tools at your disposal. We have this AI and CRE course that we've been putting together for the last year or so, and every month we release a lesson. We can get into why we release a lesson every month; the main reason being that these things are changing so fast, it needs to be a living course. But several months ago we did a lesson on open source versus proprietary models.

(07:06):

Alright? And the purpose of the lesson is to understand the difference between open source and proprietary, but the case study that we used was: test four different models, two open source, two proprietary. I provided a matrix of speed, cost, and accuracy. You rated each one of these and chose the model that you'd use. And I had a very specific task, which was to build an automated engine to make a pass/fail decision: when an offering memorandum comes in, feed the offering memorandum to the model, have it give a pass/fail decision, and then rate each one of those models. Why this is relevant: that's cool, but your job as someone who is AI native is to actually understand those models and not have to go to someone else to find out which is the best model for the job. If you go to someone else, you're now just AI

(08:07):

Capable.

(08:09):

AI native means you are actually the one who’s discovering that for yourself.
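The lesson's exercise can be sketched as a simple scoring matrix. The model names, ratings, and weights below are invented for illustration; they are not the actual figures from the course.

```python
# Sketch of the exercise Spencer describes: rate candidate models on speed,
# cost, and accuracy, then pick the best fit for the pass/fail task.
# All names and numbers here are made up for illustration.

candidates = {
    # model: (speed, cost, accuracy), each scored 1 (worst) to 5 (best)
    "open-source-a":  (4, 5, 3),
    "open-source-b":  (5, 5, 2),
    "proprietary-a":  (3, 2, 5),
    "proprietary-b":  (2, 1, 5),
}

def score(ratings, weights=(0.2, 0.2, 0.6)):
    """Weighted score; accuracy dominates because a wrong pass/fail is costly."""
    return sum(r * w for r, w in zip(ratings, weights))

# Pick the model with the highest weighted score.
best = max(candidates, key=lambda m: score(candidates[m]))
print(best)
```

The point of doing this yourself, rather than asking someone else, is that the weights encode your own priorities for the task; change the weighting and a cheaper open-source model may win.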

Michael Belasco (08:15):

It's interesting to hear this and think about it. The open source and closed source models each probably had their own way of solving it, and each was somewhat sufficient, and then there was one way that was probably superior to the rest. I didn't go through the study with you, but it's akin to today. We were talking about superpowers, and you could think of the most powerful superhero. In my mind I was thinking of Rick and Morty, if you guys ever watched it, where Rick creates this robot whose whole purpose is to pass butter. "You pass butter." "Oh my god." But right now, and it's not overstated, humanity is at a moment where we are capable, and it's all democratized for the most part. We are capable of building ourselves and the extensions of ourselves into Superman, or you can use the tool just to pass butter. But it is all there. And where do you go to figure that out? Because you keep saying this, both of you guys keep talking about tinkering, and that's something I keep

Sam Carlson (09:25):

Well, so you talked about an LLM, the ChatGPT that you use, and that it knows you. One of the coolest things after you use it over time is that maybe you don't write a great prompt, you just do kind of a half-whatever prompt, and then it puts out this amazing thing. You're like, well, how did it do that? Because it knows you. And by the way, you also said something that I totally agree with. We were talking about the difference at that time. By the way, what month is this? This is the beginning of June 2025. Depending on when you listen to this, this could be different. Oh, it will be different. But we were talking about Anthropic's model in terms of writing, versus ChatGPT.

Spencer Burton (10:15):

Sam asked me, so he's just watching me, and he's like, well, why don't you use Claude?

Sam Carlson (10:19):

Yeah.

Spencer Burton (10:20):

And what was my answer?

Sam Carlson (10:22):

Your answer was totally right. It was because ChatGPT has my entire knowledge base and knows everything about me, and

Spencer Burton (10:32):

I have a Claude license, a Claude Pro subscription, and an API with Claude, and we use Claude, both the chat

Sam Carlson (10:39):

People are getting really annoyed, for certain, because of the em dashes in everything that it writes.

Spencer Burton (10:43):

I know. Right now, in June 2025, ChatGPT, well, GPT-4o, loves em dashes. People are so frustrated.

Michael Belasco (10:53):

Well, that's how you can immediately tell if somebody wrote to you with ChatGPT: if you see that em dash. But

Spencer Burton (10:58):

Today, look, Sonnet 4 and Opus 4, the Claude 4 models, are superior to really any other model for certain exercises. So for instance, at CRE Agents we just built what we call a capability, something we have taught this digital coworker, an agent. One of them is this mapping exercise where you map detailed line items to a chart of accounts, kind of a common thing that we do in real estate. And we were hitting about an 80% success rate, meaning for every line it would look at, it would be about 80% correct in its mapping, which is still great. It still saves a human an enormous amount of time, not to mention all the data cleanup and stuff that it does. But as soon as we replaced it, I think we were using GPT-4.1 when we initially built the capability, then Claude 4 came out, and we didn't want to use Opus, it's incredibly expensive, so we used Sonnet 4, and that increased it from 80% to 90%.

Sam Carlson (12:04):

Just by changing the LLM?

Spencer Burton (12:05):

Just by changing that. But first off, having all of them available, being able to power the brain with whatever large language model, I think is really important. And just knowing what capabilities are out there, what the strengths and weaknesses are, makes you better.
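The 80%-to-90% comparison implies an evaluation harness like the following sketch, which scores a model's line-item mappings against a human-verified chart of accounts. The line items and mappings here are invented for illustration.

```python
# Sketch of the evaluation behind the numbers Spencer cites: compare a
# model's chart-of-accounts mappings to ground truth and report the
# per-line success rate. All data below is made up.

ground_truth = {
    "Janitorial - Night Crew": "Cleaning",
    "Elevator Maint Contract": "Repairs & Maintenance",
    "Mgmt Fee - Base": "Management Fees",
    "Water/Sewer": "Utilities",
    "Lobby Plants": "Cleaning",
}

def success_rate(predicted: dict) -> float:
    """Fraction of line items mapped to the verified account."""
    hits = sum(predicted.get(item) == account
               for item, account in ground_truth.items())
    return hits / len(ground_truth)

# Hypothetical older-model output: one of five lines mapped wrong.
model_a_output = {
    "Janitorial - Night Crew": "Cleaning",
    "Elevator Maint Contract": "Repairs & Maintenance",
    "Mgmt Fee - Base": "Management Fees",
    "Water/Sewer": "Utilities",
    "Lobby Plants": "Repairs & Maintenance",  # miss
}

print(success_rate(model_a_output))  # 0.8
```

Re-running the same harness after swapping the underlying model is what makes an "80% to 90%" claim measurable rather than anecdotal.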

Sam Carlson (12:19):

That’s

Spencer Burton (12:20):

One aspect of having the trait of being AI native.

Sam Carlson (12:24):

And so the feature we keep going back to is the benefit: when you initially start tinkering around with these things and playing with them, you're learning how to prompt, right? You're learning how to say it in a way where you get a good output. Over time, you're going to get amazing outputs because your account is going to know who you are and what you do. In fact, my wife thinks it's really weird, but I do this when I'm driving in my car: I will put it on voice and say, this is what I want you to know about whatever, and I will talk to it for 30, 40 minutes and just tell it stuff, because I want it to know those things. And it doesn't even matter that sometimes I'm bouncing all over the place. It doesn't matter. It can create a customer profile, anything.

Michael Belasco (13:19):

But this time last year, that was not possible. That's how quickly the context windows of these LLMs have grown. That's what's so amazing about what's happening. You have a personal assistant that knows you intimately, arguably better than a human personal assistant would, because it has that massive context with it. You can sit there and talk to it, and anybody that's using it daily, whatever you're using, I've seen it. It's insane.

Sam Carlson (13:44):

It's happening. So I own a software company, and I want to go back to this AI native concept and maybe tailor it back to commercial real estate. I've got software developers, and when AI got good, it started being able to do really amazing things with code, and it didn't take very long. Now, I can't write code; I'm not a coder. But even as a founder in the company, I see what's happening. I started seeing a very quick divide in my coding team among the people who used AI: who the artists were, and who the regular developers were. Before, they kind of worked together as a team. As soon as we started seeing what they could do, I'm like, that person is an artist, a craftsman in software, and this thing has now made it so they can commit this many lines of code every single day. That other person is still doing a fraction, with the same tools, a fraction of the output. And that's because in coding they're at different levels of skill and understanding. So one is more AI native because of their skill set and the tools. So how does that work in real estate, when you have the difference between the art and the science?

Michael Belasco (15:28):

You may not have an answer to this, but is there a defining characteristic, something about one versus the other, that stood out to you? I'm just curious, something about the nuance. What do you mean by that? Elaborate.

Sam Carlson (15:41):

There are builders, and then there's building with a purpose. In the last six months, I have learned more about software than ever. I've owned OPEX for four or five years, and I do have partners that understand software, so I'm not some negligent software company owner; they understood it, they built that software for me. But I couldn't visually see what we were building before, and what got built after is a night-and-day difference. You just started seeing

(16:18):

Detail

(16:20):

Coming from the artists that made it is. So the difference between awesome software and then software that’s just built is detail and simplicity. There are people who understand it so well that they know what to remove and what to replace. And there are people who just build everything and they clog it all up and they’re like, well, you need this, you need that, you need this. And you look at the end product and you’re like, how did that person come up with such a more elegant outcome? And that one is, it seems almost Frankenstein.

Michael Belasco (16:55):

I love that you said there’s artists because every medium that you use is a form of art. It is

Sam Carlson (17:00):

Nuance

Michael Belasco (17:01):

Though. But is

Sam Carlson (17:04):

There's that phrase: read between the lines. It's the ones that know how to read between the lines that really make it.

Michael Belasco (17:13):

Or know how to prompt, which is something we’ll talk about maybe even in this episode.

Spencer Burton (17:18):

This whole idea of art is interesting. Let me ask a question. So there’s been this rule of 10,000 that as long as I can remember, people have used to describe, or

Sam Carlson (17:31):

At 10,000 hours,

Spencer Burton (17:32):

10,000 hours, the rule of 10,000 hours: you do anything for 10,000 hours, you become a master, Michael. That's Malcolm Gladwell, not Blink, but his other one. And so we've heard that over and over again. Okay, 10,000 hours, you become a master. But what happens when AI can do the same thing on someone's behalf in 10 seconds? And you're going to take a little offense to this, but Michael spent his entire life, he's a master guitar player, a musician, and I can go on Suno and create a four-minute song. It's not

Michael Belasco (18:14):

Like we did on our

Spencer Burton (18:17):

Retreat. The other about that.

Michael Belasco (18:18):

We’re not talking about that. We should release that to the,

Spencer Burton (18:25):

So I created an AI song. I have no musical, well, I have Suno musical capability, at my

Sam Carlson (18:29):

Expense.

Spencer Burton (18:30):

That's right. That sounds expensive. A lot of fun, totally. And that's maybe not a great example, but it's a parallel to this, which is: all of us have spent thousands, tens of thousands, well over 10,000 hours, in financial modeling. And at some point we'll have an AI assistant that will be able to do most of that for us. So does mastery no longer matter?

Michael Belasco (18:54):

I would say mastery will always matter; that's a drive of humanity. Before we had the printing press, book writing was an art, right? Creating books was an art, and that went away. What people master will shift. And this is actually part of the story that can be sad for some people, but it also recreates opportunity, because there are people that spent 10,000 hours becoming artists, and they got a job not for the sake of their art but to do things such as marketing or that type of creative work. That's an apparent one; I've seen AI take somebody's job there. There's just not a need for that anymore. So there's a piece of this that's like a sad legacy. But the 10,000 hours just shifts. It's a mindset: 10,000 hours will always matter. And you can look at it as, I have my opportunity window to do 10,000 hours on a nascent technology. Spencer, you're the perfect use case. When AI came out, you weren't like, ah, cool, I'm going to go sit on the beach. You were like, oh, I have a new opportunity.

Spencer Burton (20:10):

That’s a good point for

Michael Belasco (20:11):

10,000 hours.

Spencer Burton (20:12):

So you're describing passions. If the mundane, monotonous work can be freed up and pushed to these technologies, it allows us to spend those 10,000 hours on things that we genuinely enjoy.

Michael Belasco (20:26):

Well, things that weren't previously mundane will, just by nature, become mundane. And then, that's interesting.

Sam Carlson (20:34):

So let me make sure I understand the question. Is the question that, because AI in 10 seconds can achieve outputs that took a normal person 10,000 hours of acquired skill, is the 10,000 hours a necessity in the future? Is that what we're saying? That's what we're saying.

Michael Belasco (20:56):

Yeah.

Sam Carlson (20:56):

So it's interesting. I have two or three competitors in my software. They don't have the background that me and my team have, and when this AI capability came out, two things happened. One of them just, poof, they're gone. The other one started building, and they built the most complicated thing. We are taking their lunch at every corner, because we have spent 10,000 hours, way more than that, doing this thing, and we're building something that creates the actual outcomes, where these guys are just building for the sake of building. And so I think it comes down to taste, to nuance. Really, what happens is art turns into scale. Art has generally been something that you can't scale. That's the problem: the artist creates something, boom, and that took them forever. I mean, have you ever seen some of the drawings people do by pencil? There's this guy on social media who draws these things, and they look real in person. That's incredible. But imagine that capability if you were to inform an artificial intelligence on how to create outputs that an artist would do, but bespoke to that

(22:25):

Artist.

(22:26):

That’s the potential. You can scale art now,

Michael Belasco (22:28):

But art isn't done for the purpose of scalability or profit or whatever it is. Art is done for joy. So art and what we're talking about, there's a nuanced difference here, and we're talking about 10,000 hours. What you're talking about is that the moats evolve. If you're focusing your 10,000 hours on your subject matter expertise, that's always going to permeate, even if people have access to this other stuff. But art to me, and you see it in the most basic music, why is it that, and that's why there's a nuance here, maybe we're getting a little off track, but artists create for joy. When you hear a basic Bob Dylan song versus some of these guys playing really complex stuff, he spent 10,000 hours focusing on the message. It's kind of like you, right? These guys come out playing this insane, all-their-fingers-on-the-guitar stuff, and it's cool, it's nice, but it doesn't hit the soul the same way. And that's the art piece. But I think the point around the 10,000 hours is: what is it that you focus on now that we transition? I think effort significantly matters. We're doing a lot of the talking, but it pertains to this subject matter.

Sam Carlson (23:38):

Well, does it truncate the 10,000 hours? Is the

Spencer Burton (23:41):

No. Well, so this is my view on it, by the way. I think the 10,000 hours absolutely matters. And the point you're making, which is spot on: maybe you can produce something in 10 seconds, but the quality of what's produced in those 10 seconds still differs from one output to the next. And the AI that is paired with a human who's truly a master is going to produce an output in 10 seconds that far exceeds that of the person who didn't spend the 10,000 hours. And coming back to this whole concept of AI native, our CTO at CRE Agents, Gert, has this concept of an AI multiplier. Each one of us should ask ourselves: what's my AI multiplier? And so this whole concept of, I can do something in 10 seconds that used to

Sam Carlson (24:36):

Take me, what is my thing worth multiplying?

Spencer Burton (24:38):

No, no. What is my AI multiplier?

Michael Belasco (24:40):

Who am I and what can I be

Spencer Burton (24:43):

Multiplied? And so if I'm a master and I can produce the highest-quality output, but I can do it at a hundred times the speed I could before, I am now infinitely more valuable than I was before, and I can do so much more. That's an efficiency gain, and we could get into the economics and the benefit of that to me and to society. But just at this level, I still think mastery matters, if not more, because there's going to be so much junk that what's truly special is going to stand out.

Michael Belasco (25:18):

Well, the AI chatbot we introduced to the Accelerator, that is a multiplier of Spencer and myself and Arturo and everybody that's helped us along the way. That's a perfect example. We

Sam Carlson (25:32):

Talk about what went into it.

Michael Belasco (25:34):

You’re talking about

Sam Carlson (25:35):

Years and years. Oh

Spencer Burton (25:36):

Yeah. Thousands of hours, which other people can’t replicate. It’s like, yeah, they can put a bot together, but it doesn’t have that brain.

(25:45):

And

(25:45):

The only way you get that brain is by answering thousands of questions over years, with industry experts who have, combined, 20 years of experience in the business, or 40 years

Sam Carlson (25:55):

Experience in the business. I think one of the things that the listeners may not know, and this will tie into why this bot is so incredible, is,

Spencer Burton (26:04):

And "bot" diminishes what it really is. We call it an AI tutor because it's much more than a bot. But yes, my bad, I misspoke. I

Sam Carlson (26:12):

Was actually going to give you props, but maybe now I won't. No, I was just going to say: since 2019, every person who has come into the Accelerator has had access to you guys to ask questions. And you guys have given incredible responses that have been logged for, is that six years? Seven?

Spencer Burton (26:37):

We're on our seventh year.

Sam Carlson (26:38):

So for seven years you have built a knowledge base that until recently wasn't that useful. But now it has seven years of people asking specific real estate financial modeling questions, and it's curated via a, what did we call it? A tutor

Spencer Burton (26:56):

AI

Sam Carlson (26:56):

Tutor. An AI

Spencer Burton (26:57):

Tutor. Which by the way, you can do with any body of knowledge. Okay, so Sam gave me a great example a few months ago, which was your grandfather had journals.

Sam Carlson (27:11):

Yeah, my grandpa wrote a journal for every decade of his life, really.

Spencer Burton (27:15):

So Sam created a knowledge base, paired it with a large language model using retrieval-augmented generation, and gave it a system prompt to act somewhat like his grandfather. A way of thinking about it so it doesn't feel creepy: it's like an advanced search to access his grandfather's writings. And I think that's really cool. What a great use case for artificial intelligence, and it demonstrates, call it, an AI native mindset.

Sam Carlson (27:43):

I was just going to say, if you look at being AI native: we had the journal, and I was like, okay, the journal's cool, but where was he in these different locations? He was in Hawaii at one point in time. He was in Idaho when the Teton Dam broke. So what I did was take the journal, upload it as a PDF, and say: I want you to take the locations where he was and create a second knowledge base on the cultural things that were happening, World War II, different things like that. What were the sentiments of the time? And inform the dialogues that I'm going to have with this AI with those things, including his religious convictions, including all these different things and the personality that he expressed in words. But again, between the lines, what was going on contextually?
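The journal project is a retrieval-augmented generation pattern: chunk the writings, find the chunks most relevant to the question, and hand them to the model as context. As a toy illustration, assuming a word-overlap score in place of real embeddings, retrieval over journal chunks might look like this (the journal text is invented):

```python
# Minimal retrieval sketch in the spirit of the journal project: split a
# text into fixed-size chunks, score each chunk against a question by word
# overlap (a real system would use embeddings), and return the best chunk.

def chunks(text: str, size: int = 12) -> list[str]:
    """Split text into chunks of `size` words each."""
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def retrieve(question: str, passages: list[str]) -> str:
    """Return the passage sharing the most words with the question."""
    q = set(question.lower().split())
    return max(passages, key=lambda p: len(q & set(p.lower().split())))

# Invented stand-in for the journal text.
journal = (
    "In 1976 we were living in Idaho when the Teton Dam broke and the "
    "valley flooded. Years later we moved to Hawaii where I worked near "
    "the harbor."
)
passages = chunks(journal)
print(retrieve("Where was he when the Teton Dam broke?", passages))
```

The retrieved chunk would then be injected into the prompt alongside the persona system prompt, which is what makes the answers sound grounded in the writings rather than invented.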

Michael Belasco (28:40):

See, that's prompt mastery, to have that intuition. Basically: here are his journals, I want to learn about him, and what is the context around that? You have to go to that next level. Not just, I want to talk to your grandfather, but putting him in the context of World War II, the location, and

Sam Carlson (28:58):

I gave it to my mom and I said, what do you think? And after crying for a little bit, she was like, this sounds just like him.

Michael Belasco (29:07):

Was it just writing? And

Sam Carlson (29:09):

I don't have any audio; it's just writing. But the text that it would put out would sound like him. Wow, that's incredible. I think AI native, if we were to summarize: there's the art and science of everything. The science is becoming a multiplication formula like you have not ever seen, ever, and it will multiply the best artists. So if you're listening to this, hopefully throughout the season you'll see there are two opportunities: one of taking no action, and one of taking action. I'm on the glass-half-full side of AI across the board: increased productivity, increased everything. I think it will magnify and multiply every person who uses it. So hopefully you're seeing becoming AI native as, A, becoming native in the artificial intelligence that is available to you today, and depending on when you're listening to this, that will change; and B, becoming the best version of who you are and what your role is, so that your outputs become the best and your value proposition to your community becomes the best. That's a

Michael Belasco (30:27):

Good one. Yeah, that is awesome. We could end it here, but I have one more, because we're talking about how you prompted this thing, and there's this concept of a prompt engineer. Communication is the goal here now, which I think is critical: how you communicate with it so that you can get the result. And actually, I want to turn this over to you, Spencer, because there is this idea that there's probably a foundation of coding that's needed, but prompting, understanding that art, and investing time into it... maybe some advice, guidance, or just internal thoughts about this idea of a prompt engineer, what it means, and how that's something that didn't really exist before.

Spencer Burton (31:15):

So first off, everyone who uses a large language model, generative AI, is now a prompt engineer. Let's talk first about a software developer. What is a software developer? They write code. Why do they write code? Because they want a machine to do something

(31:34):

And a machine has a language, and that language is some coding language. So in order to instruct a machine to do something, it had been necessary to know how to speak this other language, that is, code. What this technology has unlocked is that it makes anyone capable of speaking to a machine and instructing it to do things using natural language. And so prompt engineering is effectively the same thing as software engineering. I know technically it's not, but effectively it is. You now can instruct a machine to do something on your behalf, and that's pretty cool, without having to know how to code. Yeah,

Michael Belasco (32:25):

It's new, but the way you do it, well, it's funny, because you had mentioned before that now you can give haphazard or not fully baked prompts, and the better it knows you, the better it will

Sam Carlson (32:35):

Spit out. Yeah, I mean, I was really good at prompting: give it context, give it a task, ask any clarifying questions. That was pretty much my prompting sequence for a long time. And then after a while I just started getting lazy. I'm like, you know what I want, you've talked to me, just do it. And it did; I didn't have to do the whole thing. I knew

Michael Belasco (32:55):

Do you guys remember in elementary school we did this exercise? I remember it. It was like, teach me how to make a peanut butter and jelly sandwich. The teacher was at the front of the room with her loaf of bread in its bag, a plate, a knife, the peanut butter, and the jelly. And she's like, alright, Jimmy, describe to me how to make this peanut butter and jelly sandwich. And the first kid's like, you put the peanut butter on the bread, so she'd scoop the peanut butter, maybe with her hands, and slap it on top of the bread, and the kids would go crazy: no, not like that! The whole purpose of the exercise was teaching kids to be specific, and this was less important back then, but

Spencer Burton (33:38):

It’s more important now.

Michael Belasco (33:39):

How do you prompt it? It’s like, okay, first undo the zip tie around the bag, put your hand inside the bag, pull out the bread, right? And in the early days of even the LLM stuff, the more specific you were, the better. I am curious how that

Spencer Burton (33:54):

Evolved. It’s still important, but Sam’s right, the output is a function of the quality of the input, and the input is your prompt plus all the context it has. And so many of us don’t realize that behind the scenes, there’s a lot of context. Context is just words, but in essence, instructions. And if you’re using ChatGPT, there’s going to be a proprietary system prompt that OpenAI has injected into it. If you have custom instructions, those are injected into that first prompt. If you have some memory, all of that is injected. And depending on what your prompt is, it might use retrieval-augmented generation to go out and pull in additional resources. Or if you upload files, all of that goes into the prompt. And still, the output is going to be a function of the quality of the input, the context that goes into the prompt. Sam was just describing this idea that the more it knows you, the more it has context.
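The layering Spencer describes can be sketched in code. The exact format providers use is proprietary and varies; this is a hypothetical illustration of the idea that the model’s input is the system prompt, custom instructions, memory, retrieved documents, and files stitched around the user’s message, not the message alone. All names and strings below are made up:

```python
def assemble_model_input(user_prompt, system_prompt, custom_instructions=None,
                         memory=None, retrieved_docs=None, file_contents=None):
    """Illustrative sketch: layer context around a user's prompt,
    the way a chat product does before the model ever sees it."""
    parts = [system_prompt]  # provider's proprietary system prompt comes first
    if custom_instructions:
        parts.append(f"User's custom instructions:\n{custom_instructions}")
    if memory:
        parts.append("Saved memory:\n" + "\n".join(f"- {m}" for m in memory))
    if retrieved_docs:  # retrieval-augmented generation results
        parts.append("Retrieved context:\n" + "\n\n".join(retrieved_docs))
    if file_contents:  # uploaded files
        parts.append("Uploaded files:\n" + "\n\n".join(file_contents))
    parts.append(f"User message:\n{user_prompt}")
    return "\n\n---\n\n".join(parts)

# Example with made-up values:
model_input = assemble_model_input(
    user_prompt="Summarize this lease abstract.",
    system_prompt="[provider system prompt]",
    custom_instructions="I work in commercial real estate; be concise.",
    memory=["Prefers bullet points"],
    retrieved_docs=["Lease term: 5 years, NNN."],
)
print(model_input)
```

Seen this way, “the output is a function of the quality of the input” means every layer here, not just the last one you typed.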

Sam Carlson (34:59):

Yeah. A lot of the things that I’m talking about are, because that’s my role. My role is advertising and business strategy, and so I have a knowledge base of conversations that are around that in some way. So over time, I just go straight to the output I want: hey,

Michael Belasco (35:19):

I’m looking to do this, blah, blah, blah, whatever. It’s funny, because I’m having this experience too. I utilize it for so much. It’s one tool, but it’s building a team of subject matter experts within it. Like I was sharing with you guys, my example of a subcontractor on our development project who came months after the fact with these unapproved change orders, and I had to go deep with my legal GPT to navigate that whole situation. But that context lives on, and so it’s weird, it’s fascinating. I’m just providing another anecdote to that.

Sam Carlson (36:00):

Let’s wrap that one. AI native, that’s the punchline, is become AI native. All right, sounds good. See you on the next one.

Announcer (36:10):

Thanks for tuning into this episode of the Adventures in CRE audio series. For show notes and additional resources, head over to www.adventuresandre.com/audio series.


Frequently Asked Questions about S6E3: Becoming AI Native – Mastery, Multipliers, and the Future of Work

What does it mean to be AI native?

According to Spencer, being AI native means that using AI tools becomes instinctual, not just intentional: “Before I do something… I grab my favorite AI tool for that specific thing, or I’ve already taught my favorite AI tool to do that thing.” It’s about having reflexive mastery of tools and knowing how to assemble custom solutions quickly.

How is being AI native different from being AI capable?

Spencer explains that if you rely on others to guide your AI tool usage, you’re merely AI capable. AI native professionals understand tools deeply, test them, and independently choose the right model for the job—similar to how expert software developers pick the right coding libraries.

How do AI native professionals work differently day to day?

AI native professionals use tools that augment their work and automate repetitive tasks. Spencer mentions using two laptops—one for him, one for autonomous processes—to run large models concurrently. This mindset allows you to “100x your capabilities” and unlock creative potential at scale.

What is an AI multiplier?

Spencer introduces the concept of an AI multiplier—a measure of how much more productive you become when paired with AI. “If I’m a master and can produce the highest quality output, but now I do it 100x faster, I am infinitely more valuable than I was before.”

Does mastery still matter in the AI era?

Absolutely. The hosts agree that mastery is more valuable than ever. AI levels the playing field, but those with deep subject matter expertise paired with AI produce superior outputs. “There’s going to be so much junk. What’s truly special is going to stand out,” says Spencer.

Are the tools alone enough?

Sam explains that tools alone aren’t enough—craftsmanship matters. He compares two developers: one creates elegant software through purposeful design, the other clutters it with unnecessary features. The difference? The first has artistic mastery and deep understanding of what’s essential.

Why does prompting matter so much?

Prompts are the new programming. Spencer says, “Prompt engineering is effectively the same thing as software engineering.” The better you communicate instructions to machines, the more precise and powerful the output. Your clarity and specificity determine success.

Can AI learn your preferences over time?

Yes. Sam shares how he’s trained AI to understand him deeply: “Sometimes I’ll talk to it for 30 or 40 minutes while driving… It can create a customer profile.” Over time, AI learns user preferences and improves performance through personalized context.

What is an example of an AI native build from the team?

Spencer and Michael built an AI tutor for the Accelerator using a knowledge base of seven years of Q&A. It answers student questions with expert-level insight. “Others can build bots,” Spencer says, “but they don’t have the brain.” The 10,000 hours of prior work cannot be faked or replicated easily.

How can someone start becoming AI native?

Start tinkering. Learn the tools, experiment with prompts, observe strengths and weaknesses, and build custom workflows. As Michael puts it: “We are capable of building the extension of ourselves into Superman… or just a butter-serving robot. The choice is yours.”