Google MUM Algorithm 2023 – The Organic Growth Ranch

Welcome to The Organic Growth Ranch season 1, episode 2

The most important takeaway 

This episode will help you find your footing in the current information chaos and learn how to stay relevant, SEO-wise, during the Google AI revolution.

The summary

Bartosz Goralewicz speaks with Cindy Krum, the founder of MobileMoxie. They discuss:

  1. The evolution of Google’s AI efforts, starting with the launch of mobile-first indexing in 2017, which paved the way for the development of AI.
  2. Google’s goal to understand and rank all the world’s information using AI.
  3. The significance of BERT (Bidirectional Encoder Representations from Transformers) and its ability to understand the context and meaning of words in queries.
  4. MUM (Multitask Unified Model), which is a thousand times more powerful than BERT and aims to understand various forms of media and anticipate user intent to provide a seamless journey.

Watch the video

Listen to the podcast

Our speakers 

Cindy Krum is the Founder & CEO of MobileMoxie.

>>X (Twitter) >>LinkedIn

Specialties: Mobile SEO, App Marketing & ASO, International ASO, Mobile SaaS Tools/Software, Deep Linking & PWAs

She has been bringing fresh and creative ideas about SEO & ASO to consulting clients and digital marketing stages around the world since 2005. 

In 2022, Cindy was named one of the top 10 most influential SEOs by USA Today.

Cindy’s leadership helped MobileMoxie launch the first mobile-focused SEO toolset, which helps SEOs see what actual mobile search results and pages look like from anywhere, and preview and analyze mobile landing pages from the perspective of both a user and a bot.

Now, free versions of both the Page-oscope and the SERPerator are also available to all digital marketers as two easy-to-use Chrome extensions.

Check the free Extensions:

Page-oscope SERPerator

Bartosz Goralewicz is the Founder & Head of SEO at Onely and the Co-founder of ZipTie.dev.

>>LinkedIn >>X (Twitter)

He has been a staple in the SEO industry over the last decade as both the co-founder of Elephate (2018’s “Best Small SEO Agency” in Europe) and a thought leader – especially when it comes to all things JavaScript SEO.

In 2019, Bartosz founded Onely – the one and only technical SEO House. Onely’s specialized team works with Fortune 100 companies and other major international brands while continuing to push the envelope in Technical SEO.

Onely has a unique approach to how it works with clients, which is reflected in its one-of-a-kind workflow, transparent price list, and highly detailed reports. On top of that, Onely makes serious efforts to share its knowledge with the industry as a whole, believing that a healthy and positive SEO industry makes for a better internet.

Full transcript

[00:00:02.560] – Bartosz Goralewicz

Hello, everyone. Welcome to Google AI SEO, part two. In this episode, we’re going to go a little bit geekier because I’ve got a very good friend of mine, someone I respect the most in this field, and someone I always go to to bounce a lot of my ideas off, as a guest. She’s one of the smartest SEOs I know. She’s an extreme nerd, just like myself. I apologize for that in advance. Without any further ado, Cindy Krum, from MobileMoxie.

 

[00:00:35.120] – Cindy Krum

Hi. Good to see you.

 

[00:00:36.580] – Bartosz Goralewicz

Hello, Cindy. Cindy is here to help us understand some of the, I think, quite exciting concepts in Google AI. Just to jump right in, I know we all hate intros. Cindy, the stage is yours. My first question, where I think we can just kick it off straight away: I think people know you well enough, so we don’t need an extensive intro, but you can find Cindy Krum online basically everywhere.

 

[00:01:05.820] – Bartosz Goralewicz

But what I wanted to touch on and discuss with you (maybe let’s switch to a wider view) is that Google has had a complex romance, or history, with AI since 2016, 2017, since the change in the leadership of search. I know you’ve been tracking that extensively. How do you think… Because I feel like a lot of people in the audience think, Okay, Google is launching this AI initiative, which couldn’t be more wrong. Can you give us a bit more of your history, your point of view on this big picture of Google AI?

 

[00:01:43.680] – Cindy Krum

I think ever since the launch of mobile-first indexing, which I renamed “entity-first indexing,” Google has been making a concerted effort to move towards AI. The first part of that was the beginning of entity understanding and the creation of the topic layer, which made things entities. It was the idea that a concept in one language is often very similar or exactly the same as that concept in a different language, and that we could learn a lot about the world by comparing notes through translation and understanding that this is the same as this all around the world, or this is mostly the same as this but slightly different around the world.

 

[00:02:41.800] – Cindy Krum

That’s really what it seemed like Google was trying to do, using their strength in English and then applying it to all the other languages that didn’t have as much machine learning capability. Because what you have to understand is that AI needs really strong data to be able to machine learn. The more data, the better, and the faster the machine can learn. What they did was originally, before mobile first indexing, all of the languages were learned and handled to some degree separately.

 

[00:03:18.840] – Cindy Krum

We would learn English and Polish and German, all separately. That was slow, especially in the smaller languages. When they added the topic layer and connected these concepts so that they become language agnostic, then there’s one big data stream where they can learn and feed their machine learning and their artificial intelligence all at once. So it can learn much faster connecting things through the topic layer rather than learning one language at a time. Does that make sense?

 

[00:03:52.620] – Bartosz Goralewicz

It does. Could you give us an example? Because the way I understood that, let me say it back to you, is what Google did is, okay, we want to learn all about different entities, what cat is, what dog is, what TV is, in all these different languages. We’re just going to centralize that and use machine learning to not only translate that, but basically, that one big graph of these different entities connected, like how they inter-connect and so on. Is that correct?

 

[00:04:23.820] – Cindy Krum

Yes. Understanding the entities based on their connections. For instance, the simplest example that I always use is “mother.” Even though you might use a different word in a different language to say mother, it means basically the same thing in all of the languages and has the same relationship to father and daughter and grandfather and grandmother. Those relationships are always the same, even though the language might be different. That’s what Google is trying to do: define things and then understand them in relation to other things, see if that cross-applies and works across all the different languages, and validate that. They’ve been really successful, I think.
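Cindy’s cross-language entity idea can be sketched as a toy graph: surface words in each language resolve to a shared entity ID, and relationships are stored once on the entities rather than per language. Everything below (the entity IDs, the `spouse_of` relation) is invented for illustration and is nothing like Google’s actual implementation.

```python
# Toy language-agnostic entity graph (invented names, illustrative only).

# Surface forms in several languages all resolve to the same entity ID.
surface_to_entity = {
    ("en", "mother"): "ENTITY_MOTHER",
    ("pl", "matka"): "ENTITY_MOTHER",
    ("de", "Mutter"): "ENTITY_MOTHER",
    ("en", "father"): "ENTITY_FATHER",
    ("pl", "ojciec"): "ENTITY_FATHER",
}

# Relationships live on the entities, so they hold in every language at once.
relations = {
    ("ENTITY_MOTHER", "spouse_of"): "ENTITY_FATHER",
}

def related(lang, word, relation):
    """Resolve a surface word to its entity, then follow a relationship."""
    entity = surface_to_entity[(lang, word)]
    return relations[(entity, relation)]

# "mother" in English and "matka" in Polish share the same neighbors,
# which is the point of learning one graph instead of one per language.
assert related("en", "mother", "spouse_of") == related("pl", "matka", "spouse_of")
```

The design point is that the relationship table has one row regardless of how many languages feed into it, which is why a shared graph learns faster than per-language models.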

 

[00:05:07.300] – Bartosz Goralewicz

Okay. Let’s put this on the timeline. Because I think I would like to get into your MUM research, how it starts with BERT, with RankBrain, and so on. When did this start? More or less, obviously.

 

[00:05:27.340] – Cindy Krum

Yeah. I think the launch of mobile first indexing was 2017. It probably started in 2015 or around there as they were preparing for mobile first indexing and trying to figure out what they needed and how they were going to position it and how it was all going to go. So I think it probably started in earnest in around 2015, but I think it’s been part of Google’s goal since the beginning. I saw something that Marie Haynes tweeted out today with a bunch of Google quotes from Sergey Brin and people like that, about AI. They were talking about AI as early as, I think, 2005. It’s always been the plan. It was just how did they get from where they were to this new… Creating the foundation for the AI.

 

[00:06:23.880] – Bartosz Goralewicz

I just wanted to add to that, because we spoke a bit about this in our private conversation before, but in 2017, I was speaking about RankBrain. When researching that extensively, it was a huge change at Google because in 2016, with, I think, John Giannandrea, I probably butchered his last name, he switched Google from algorithm-based to machine-learning-based search, which was a huge and controversial move back then. Back then, they said that by 2024, I had it on my old slide, we spoke about this, Google’s main product would be AI. This is a long initiative. This is not something that started with ChatGPT last year. This is something that’s been building for the past eight, ten years or so.

 

[00:07:18.240] – Cindy Krum

Yeah, absolutely. If you think about it, their original goal was to index and rank all the world’s information. Now they’re trying to AI understand and rank and evaluate and synthesize all the world’s information. It’s not a small task. Yes, there’s a lot of stuff hitting the news recently, but it has been ongoing for a very long time.

 

[00:07:47.220] – Bartosz Goralewicz

Just to put a pause on that for a second, because we know now that they’ve built entities. With a lot of their initiatives, they’ve built these entities, so they know that my name and surname is most likely connected to technical SEO, JavaScript SEO, things like that. Yours is probably very close on this graph to mobile SEO, I would assume. That’s amazing. Now they use that as the backend for this SGE layer to show results. There are many steps in between. Before we dive into SGE, I know that they’ve changed some things with RankBrain. Then there was BERT in 2018. But I think you’re the most passionate about MUM, M-U-M. I wanted to ask you about that as well. Because I think it shows a nice bump on this timeline.

 

[00:08:42.940] – Cindy Krum

MUM came after BERT, and MUM was a thousand times more powerful than BERT. What was powerful about BERT was the bidirectionality and the transformer capability. So it was saying, if this word is first and this word is second, does that change the meaning, and things like that. How are these two words related to each other? MUM is about, it stands for Multitask Unified Model, which doesn’t tell us a lot, but multitask seems to be about either multiple evaluations or multiple media evaluations that come into something.

 

[00:09:28.740] – Cindy Krum

When you think about MUM being a thousand times stronger and more powerful than BERT, what it does, BERT was focused mainly on language, whereas MUM is about everything else, including language, but beyond language. So understanding how images, videos, maps, data sets all relate to an entity. If you think about a lot of the examples that are coming out of Google’s new AI results, they are about this concept of a journey.

 

[00:10:13.580] – Cindy Krum

When you search for this, they’re trying to anticipate, what are you actually looking for, what is your intent. But then also trying to understand, what is the next most likely step in your journey so that they can serve you that prompt even before you’ve asked for it. That’s their goal, really, is to try and get you as much as they can to accommodate your journey on whatever it is. That’s not just text, it might be videos or it might be lots of other things.

 

[00:10:46.820] – Bartosz Goralewicz

Let me say this back to you because it’s a lot of data to digest. With BERT, what happened, and correct me if I’m wrong at any point, is Google is now able to understand queries they weren’t able to understand before. There were a number of conversations in the SEO space, things like “I got penalized by BERT,” which was a very controversial thing because Google was like, well, it’s impossible, we only changed how we understand queries. We didn’t change how we rank them. That was a huge conversation. I remember that it was a heated one. I was eating popcorn in front of Twitter on a daily basis.

 

[00:11:28.720] – Bartosz Goralewicz

But what BERT actually did, and this is maybe an oversimplification, is that with BERT, you can understand the difference between “Johnny bit the dog” and “the dog bit Johnny.” Which previously, maybe it’s an oversimplification, could be considered the same because these are the same words, but it’s a different experience, especially if you’re a dog or Johnny. Now with BERT, Google is okay with this, especially with a lot of, I’m assuming, drunk searches, let’s call them that.

 

[00:12:04.360] – Bartosz Goralewicz

We often search in a chaotic way. Now, it wasn’t the case in 2015. So that was partially better. But now MUM comes in and if you could, from a user’s perspective, try to explain how this has changed with Google having access to, I’m assuming, YouTube video, image, text, maybe even podcast, I hear something about that as well. I know your theories and research. If you could now say with MUM, how did it change Google experience for users?

 

[00:12:38.520] – Cindy Krum

Yeah.

 

[00:12:39.120] – Bartosz Goralewicz

[crosstalk 00:12:40]

 

[00:12:40.460] – Cindy Krum

Let me start with BERT because I like your example, but I think that there’s a different one that I use that might be a little bit more illustrative. Because with Johnny and the dog, those things stay the same. And the biting, who’s doing it, changes, but that’s it. But the example I like to give is red stoplight versus stoplight red. Stoplight Red, when you would search for it, brought up lots of nail polishes and lipsticks. Because “stoplight” was describing “red,” not the other way around. Versus “red stoplight,” you’re thinking of the thing that hangs in the road. It’s helping Google understand what do you really mean and what is modifying what in the query.
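Cindy’s word-order point can be illustrated with a deliberately tiny toy: in an English two-word noun phrase, the last word is the head and the first word modifies it, so reversing the order changes what the query is fundamentally about. This is not how BERT works internally (BERT uses learned contextual embeddings over the whole query); the category labels below are invented for illustration.

```python
# Toy head-modifier illustration of why "red stoplight" and "stoplight red"
# are different queries. Categories are invented, not real search verticals.

CATEGORY = {
    "red": "colors and shades (nail polish, lipstick)",
    "stoplight": "traffic signals",
}

def query_topic(query):
    """Return what the query is about: the category of its head (last) word."""
    head = query.split()[-1]
    return CATEGORY[head]

# Same two words, opposite order, completely different topic.
assert query_topic("red stoplight") == "traffic signals"
assert query_topic("stoplight red") == "colors and shades (nail polish, lipstick)"
```

A bag-of-words model would treat both queries identically; modeling order, even this crudely, is what separates the nail polish results from the traffic signal results.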

 

[00:13:35.460] – Cindy Krum

The biting example, you’re right, it’s a good example, but it’s a bit straightforward. This one gets really different results: nail polishes and lipsticks versus outdoor scenes with traffic signals. That was a big deal. But now, with MUM taking that way further, Google is trying to understand and discern your journey. And they call it a journey. Even in your history tab in Chrome, they show you groups of topics that they think are related to a journey.

 

[00:14:15.620] – Cindy Krum

The way it’s going to impact search results, I think, or especially the way it’s related to AI, is that Google is constantly ingesting more and more information. Even though we have new limitations coming out on cookies and more privacy protections, Google wants to be able to model journeys, different journeys related to the same entity. It’s like getting more granular. If we think about the entity, this is the example they use, the entity of Mount Fuji. It’s a place, it’s a mountain. But there are a lot of things that someone searching for Mount Fuji might want.

 

[00:14:58.340] – Cindy Krum

They might want, if they’re hiking Mount Fuji, they might want maps, weather conditions, and then eventually to buy plane tickets and shop for hiking boots. That’s one potential of journey. But if it’s just a kid writing a research paper on the history of Mount Fuji or the geology of Mount Fuji, that’s a totally different journey. That’s what, I think, Google is trying to do with MUM, is they’re trying to guess the next most likely step in your journey. So sometimes they have to understand more about your journey before they can do that.

 

[00:15:35.220] – Cindy Krum

I think a lot of the filters that we’ve seen added into search results at the top, they’re trying to disambiguate, not the concept of Mount Fuji, but the journey that’s going to be associated with Mount Fuji. Just like when we first started getting knowledge graph, Google was trying to disambiguate those entities that were related but different. Now, they think they understand that really well or well enough that they’re trying to disambiguate intent and journeys. If that makes sense.

 

[00:16:12.060] – Bartosz Goralewicz

Yeah, it does. In that department, I think it’s safe to assume that this was all part of Google’s journey to AI, because we have to state that the development of SGE didn’t start last year. It’s something that was built over the years. It’s a massive project to do. Now, I wanted to get for a second into this gray-zone territory of speculation, because from what I see, I think, first of all, ChatGPT and Bing got Google rushing a bit with a lot of development. I think a lot of the search queries we see, this is not something I was used to with a lot of Google’s results before.

 

[00:17:05.180] – Bartosz Goralewicz

As an example, we now research a lot of queries that our clients could be looking into, or that our clients could be affected by. There are a number of examples where Google is using SGE for Your Money or Your Life queries: health, money, finances, which is one thing. I think this is not really safe at this stage. But also, a lot of results are completely not matching the intent. To give you an example, I think I was searching for something along the lines of “mortgage calculator,” and instead of the calculator, which is the intent, SGE would generate all of the elements that would be included to calculate my mortgage.

 

[00:17:49.840] – Bartosz Goralewicz

It’s still early stages and now I wanted to ask you, looking at all of these aspects, how do you think they can adjust it to be as smooth of an improvement as BERT or MUM? Do they have to match SGE with what’s already happening? If you were to speculate, how do you think this is going to play out?

 

[00:18:11.900] – Cindy Krum

What they said at Google I/O was that they were slow to market on some of this stuff because they wanted to make sure it was really safe and secure. We know that some of the results coming out of ChatGPT and the other AI systems were scary or tended to go down rabbit holes or be less than ideal. Google was saying, “Oh, well, we’re late to the party because we were trying to make it safer.” They did say that they are likely going to have lots of controls over Your Money or Your Life queries and whether or not the AI result can show up for most users or when and where it’s appropriate.

 

[00:19:04.000] – Cindy Krum

I think what they’re probably going to try and do just to make all of this more scalable and profitable for them, is we’re going to start seeing different ways for them to limit what they have to crawl and index to get a good result. For instance, with YMYL data, Google already has a specific medical knowledge graph that has a bunch of vetted data in it that they use for showing answers. That didn’t come out right when Google started showing answers.

 

[00:19:37.980] – Cindy Krum

That was an addition. But since they have that as a learning set or a basic initial training set of data, that can make things easy where if they have a pre-vetted data set, then the learning is much easier. The problems come in when they try and use all the information that they have from around the world, which may or may not be valid, good, reliable, and try and disambiguate fact from truth. That’s tough. And so I think what we’re going to see is potentially new and different ways for Google to pre-vet the data that it has in a specific learning model.

 

[00:20:22.440] – Cindy Krum

So they can use the health knowledge graph for health kinds of questions and maybe use Merchant Center for e-commerce types of queries. And maybe there will be other things. Maybe we’ll see another pre-vetting for news, which used to exist and then went away or changed.

 

[00:20:41.960] – Cindy Krum

So I think that might be, that’s my speculation on what we’re going to see, is different kinds of pre-vetting systems and potentially APIs that you can be a part of, that you can submit your content through, like Google’s Indexing API was just focused on jobs.

 

[00:21:02.920] – Cindy Krum

Maybe things like that where they can limit the data set to something that’s pre-vetted so that they can make sure that it’s reasonably safe, even if it’s a silly answer. It can be silly, but it can’t be unsafe, or it can be low value potentially, but not unsafe.

 

[00:21:24.960] – Bartosz Goralewicz

I think we need to explain one more thing, because both of us were talking a lot about limiting the data set. I think most of the people I talk to, especially on the client side, who are not that deep into the SEO side, are imagining that Google is going to use the whole index to generate these answers. I think this couldn’t be more wrong because, first of all, it’s not physically possible with the computing power we have right now, and Google is already struggling with rendering JavaScript and a lot of the computing power aspects.

 

[00:22:02.040] – Bartosz Goralewicz

Now, launching a free layer, first of all, this is not ChatGPT, a free layer would be very costly, but they also have to limit the data set. When you search for what are the best clippers for a dog, and we both have this problem, I think, seeing both our dogs, they won’t go through every single pair of clippers and every single e-commerce store. They will go into this graph that we started our conversation from to fetch that info. Is that how you imagine that as well?

 

[00:22:33.999] – Cindy Krum

Yeah. They’ll find different ways to limit the data training set because otherwise it would just be way too costly. There’s just such a marginal benefit when they know who the authorities are, they can focus there and then not waste so much time and energy crawling stuff that is going to muck up the data set and make everything harder. I think that’s where the fine tuning is going to come in. And now is how do they get really good on certain topics and how do they limit the data set in a way that’s fair and not monopolistic, so that they meet all the EU guidelines.

 

[00:23:13.700] – Cindy Krum

So it will be interesting. But I think that a lot of people forget how resilient users are, and Google is, when they launch things that mess up. Think of the answers that came out when Google first started to try and use AI, about how many legs a horse has or how many legs a snake has. Those bad answers were live for at least a week or two, where horses had six legs and snakes had four, something like that. They were trying to use AI, but the AI wasn’t smart enough.

 

[00:23:54.600] – Cindy Krum

They were trying to derive answers. It didn’t work. They figured out a new way, which seemed like that was right about the time that they started probably working on passage indexing, what I used to call frags, where they were able to break pieces of a page off and not take one page all at once, but say this is about this, and this is about a different topic, so that they could find the value on a page instead of just looking at a page and getting the broad value.
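The passage idea Cindy describes, scoring pieces of a page independently instead of the page as a whole, can be sketched roughly like this; the naive word-overlap scoring below is a stand-in for whatever learned relevance models a real system would use.

```python
import re

# Minimal sketch of passage-level scoring: split a page into paragraph-sized
# passages and score each against the query independently, so a page about
# many pets can still surface just its snake paragraph.

def tokens(text):
    """Lowercased word set, punctuation stripped."""
    return set(re.findall(r"[a-z]+", text.lower()))

def best_passage(page_text, query):
    """Return the single passage with the most query-word overlap."""
    passages = [p.strip() for p in page_text.split("\n\n") if p.strip()]
    return max(passages, key=lambda p: len(tokens(query) & tokens(p)))

page = """Dogs are loyal pets that need daily walks.

Snakes are quiet pets that eat only once a week.

Parrots are talkative pets that can mimic speech."""

# The snake paragraph wins for a snake-feeding query, even though the page
# as a whole is broadly "about pets."
assert best_passage(page, "pets that eat once a week").startswith("Snakes")
```

Scoring passages instead of pages also shrinks the data an answer layer has to consider: only the winning fragment, not the whole document, needs to be fed onward.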

 

[00:24:27.420] – Bartosz Goralewicz

Again, just to pause that for a second, passage ranking, for non-SEOs here, is basically where Google takes one paragraph of your text. Let’s say you have an article about pets, what pets you can possibly have, 3,000 pets. Google can extract just the snake, if this is your pet of choice, or just the dog, and just that part of the article, if this is unique value for users that couldn’t be found elsewhere. Which I think is going to be heavily used, and Cindy, as we discussed in our previous conversations, in the SGE layer, because, again, this limits the data enormously.

 

[00:25:08.100] – Bartosz Goralewicz

And if you feed the machine learning or the AI layer that data with all these other animals, they most likely will be okay filtering this out, but this takes a lot of time. Now, going back to the topic of time, everything you said, Cindy, goes on brand with one more thing. There’s one more issue that Google has to solve before pushing this live. Some of the answers take 12 seconds, 15 seconds, 20 seconds to be generated. Again, even with the limited data that they look at, and for some of these, my theory is, they only look at the same parts of the content they would use for a featured snippet, I know they look at a very small amount of information.

 

[00:25:53.120] – Bartosz Goralewicz

So if you ask them something more complex, you can go to both Bard and SGE and ask, for example, how to fix “crawled – currently not indexed,” one of the most popular statuses in Search Console over the last few months. It will give you a very confusing answer in Bard, or it will give you almost exactly the same thing you’re going to see in featured snippets in SGE.

 

[00:26:19.040] – Bartosz Goralewicz

We can see that the time limit, the time crunch, is real. I wanted to ask you about that. I think they’ll have to cache a lot of results. The problem is that a lot of these results are going to be extremely personalized. I don’t think it’s something we’re going to see over the next few months.

 

[00:26:34.700] – Cindy Krum

I think you’re right. I think they’ll have to cache a lot of results. But once they’ve gotten a cache of the most common queries and gotten some feedback on what people are interacting with, what they’re not, what they’re finding interesting, what they’re not. Once it’s done the first time, they have a place to start and build out this understanding of the journeys and stuff like that.

 

[00:27:07.060] – Cindy Krum

So I think that there may be an initial time period where they’re building up their cache of the different entities and understandings and relationships, as they can be shown in an AI query. And then they’ll have to introduce something, or maybe they’ll reuse what they already do for “query deserves freshness,” QDF, where they’ll say, if we’re asking about sports or the weather or politics, these are topics that deserve freshness, and we need to re-process more frequently, versus how to boil an egg, how to make the best banana bread. Those things don’t change all the time.
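The cache-plus-freshness idea could be sketched as a per-topic TTL on generated answers: evergreen topics keep a cached answer for a long time, while "deserves freshness" topics expire quickly and force regeneration. The topics, TTL values, and function names below are all invented for illustration.

```python
import time

# Toy "query deserves freshness" cache for generated answers.
# TTLs are invented: fast-moving topics expire quickly, evergreen ones don't.
TTL_SECONDS = {
    "sports": 60 * 15,             # re-generate every 15 minutes
    "weather": 60 * 30,
    "cooking": 60 * 60 * 24 * 30,  # evergreen: cache for a month
}

cache = {}  # query -> (answer, stored_at, topic)

def get_answer(query, topic, generate, now=None):
    """Serve a cached answer unless its topic-specific TTL has elapsed."""
    now = time.time() if now is None else now
    if query in cache:
        answer, stored_at, cached_topic = cache[query]
        if now - stored_at < TTL_SECONDS[cached_topic]:
            return answer, "cache"
    answer = generate(query)
    cache[query] = (answer, now, topic)
    return answer, "generated"

gen = lambda q: f"answer to {q}"

# An hour later, the evergreen query hits the cache; the sports query,
# whose 15-minute TTL has elapsed, is regenerated.
assert get_answer("how to boil an egg", "cooking", gen, now=0)[1] == "generated"
assert get_answer("how to boil an egg", "cooking", gen, now=3600)[1] == "cache"
assert get_answer("nba scores", "sports", gen, now=0)[1] == "generated"
assert get_answer("nba scores", "sports", gen, now=3600)[1] == "generated"
```

The engagement-threshold idea from the conversation would slot in the same place: a low-engagement cached answer could simply have its entry evicted early to force a re-test.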

 

[00:27:47.480] – Cindy Krum

And so they’ll probably have some scoring in their own mind of how often does something need to be re-crawled, re-evaluated. And then maybe there’ll be a threshold of engagement where if there’s a result that they keep showing and has low engagement, maybe they’ll automatically be testing other kinds of solutions, inclusions, whatever.

 

[00:28:14.860] – Cindy Krum

But yeah, I think it’s basically… There was a quote, I can’t remember the exact quote, but when Google launched their first home speakers, someone from Google said outright, all of the people who buy this product are just our guinea pigs and our training data. I think that’s true of most of the things that they launch. They launch things and tell us to play with it. We’re actually just giving them the training data of what-

 

[00:28:43.200] – Bartosz Goralewicz

With authorship and things like that. Remember those old days of tagging every single article with authors?

 

[00:28:50.420] – Cindy Krum

That, too. They use SEOs to mark up data to train their system, but they also just use the early adopters, and their engagement levels, and their queries to understand what are people going to use this for, what are they not, and when is it going to go badly or go into dangerous territory?

 

[00:29:11.780] – Cindy Krum

My first playing around with ChatGPT was very non-search oriented stuff. I was trying to have a conversation. I asked things like, what is the meaning of life, to see what it would come back with and if it had any insights.

 

[00:29:30.600] – Bartosz Goralewicz

You had ChatGPT therapy, Cindy?

 

[00:29:33.060] – Cindy Krum

It was all right. It was much cheaper than regular therapy.

 

[00:29:38.680] – Bartosz Goralewicz

Much cheaper, I was thinking. Cheaper and quicker. Anyhow, I’m wondering just to wrap it up somehow, because first of all, I love having you on this, Cindy, because you’re always very brilliant around these. But I think my personal subjective opinion is that SGE is going to be a very slow roll out, first of all.

 

[00:30:05.160] – Bartosz Goralewicz

Secondly, it’s going to be a revolution for, obviously, everyone knows that already, I think, for top of the funnel, for some of the queries that are not really an in-depth query, so like salaries of a plumber, something like this. It’s very similar to extended featured snippet. This is one. Secondly, all of the queries that are somehow conversational. Again, top of the funnel, I’m looking for a bike and I can ask a lot of questions. But end of the day, the final purchase or the final in-depth interaction is still going to be with the website.

 

[00:30:43.880] – Bartosz Goralewicz

Even the first layer, I think, is going to take a little bit of time since we’ve got people adapting to it. We’ve got propagation of this technology. I think if this would be launched now, it wouldn’t be met with a lot of excitement.

 

[00:30:58.820] – Bartosz Goralewicz

I think I wouldn’t personally use it on a daily basis because it’s, first of all, very slow. Secondly, a lot of the data I get from the SGE layer is very incorrect. My answer to how to fix indexing was backlinks. You’ve got a lot of these SGE brainfarts of sorts. This is very limited. But I wanted to ask you, just as a final thought, because you’ve been very good at predicting what happened with MUM, and you’ve been very good with BERT and things like that. How do you think this is going to play out over the next quarter?

 

[00:31:35.620] – Cindy Krum

I understand where you’re coming from saying the rollout’s going to be slow. I think that a responsible rollout might be slow, but I think that there might be a lot of incentive to push things out more quickly, hoping for forgiveness rather than asking for permission.

 

[00:32:00.840] – Cindy Krum

So I think that Google might do a big push. At some point, things will go wrong. It’ll weigh in on religion or weigh in on how many legs snakes have, and they’ll have to pull it back. That’s almost how every algorithm update goes: they launch, they test, they see what’s working, and they leave that, and they pull back on what’s not working. So I think we can fully expect some missteps.

 

[00:32:28.240] – Cindy Krum

And I think it might be… I’m more bullish. I think it’ll launch a little bit faster than it sounds like you think it will. But I think that it doesn’t change too much of our day-to-day jobs, except that it’s going to be really important for every SEO to know which of the top keywords they’re driving traffic on generate these AI results, because I think you’re going to have clicks drop off to nearly zero if there’s a really good AI result.

 

[00:32:54.940] – Cindy Krum

So tools are going to have to get better at showing this thing, showing how many clicks it takes and how much pixel space it takes up, stuff like that. And hopefully Google is planning on adding some kind of reporting on this in Search Console, because otherwise it’s going to be tough to know at scale, if you have a million keywords, which ones generate an AI result and which ones don’t.

 

[00:33:24.999] – Cindy Krum

But my personal opinion is that there may be limited value in targeting keywords that bring up the AI result, unless you feel confident that you can become part of the AI result. Being number one or number two below it might be really difficult, and it might be hard to still get clicks even if you get there. So that’s tough.

 

[00:33:51.840] – Cindy Krum

But in terms of our day-to-day SEO, I think we still have to… There are these filters, and Google is using every filter it can to limit what it includes in its training set. And so the filter of relevance is still important. The filters for E-E-A-T are all still important. And anything that doesn’t meet the relevance filter is kicked out. Anything that doesn’t meet a quality E-E-A-T filter is kicked out. Anything that it can’t meaningfully understand is kicked out.

 

[00:34:19.060] – Cindy Krum

Then they have a high-quality training set. You have to start thinking about that. If something’s not being included in the AI and you think it should be, you have to figure out which filter kicked it out of the training set. It’ll be a lot of the same for SEO, but maybe a little bit different, especially in terms of tracking and projecting. If you’re doing any projections for your SEO clients, or if you’re in-house, if you’re saying we’re going to have an awesome 2024 because of these rankings, I would be a little bit tentative and put some asterisks next to that and say, this is what we’re hoping for and AI could change it.

 

[00:35:08.480] – Bartosz Goralewicz

Thank you for that, Cindy. I would never say this to you under other circumstances, but I really hope that you’re wrong, because I don’t know which one of us is heading in the right direction.

 

[00:35:24.060] – Bartosz Goralewicz

I believe this is reasonable, given the panic that we hear is happening behind the scenes at Google… There is a lot of news that supports that messaging. We’re either looking at three months of high chaos before the next quarter (you’ve seen this, right?) or we’re looking at three months of heavy preparations.

 

[00:35:49.640] – Bartosz Goralewicz

Anyhow, we need to shift, we need to adjust. I just wanted to add one more thing, because I feel like a lot of people look at this as “SEO is dead.” And there are some very dramatic shifts where companies say, “Okay, we want to be AI-ready,” when really, to do so, you need to aim even a bit higher than I would as a typical technical SEO. Because if you have indexing issues, crawling issues, server issues (and that’s 90% of websites nowadays, with how Google is crawling and indexing), it became way more difficult.

 

[00:36:30.980] – Bartosz Goralewicz

If you have these issues, I don’t really see you in Google SGE. I don’t think it’s really the end of Google. I think it got way more complex, in the sense that you obviously need to have the technical aspect spot on to be perceived as a partner by them.

 

[00:36:50.440] – Bartosz Goralewicz

But then there’s YMYL, there are all these other aspects that go through these hubs, because they will struggle, probably multiple times, with showing bad answers to something that’s really critical. We already saw them advising on all kinds of health issues and giving all kinds of bad answers. But this is still only the beta.

 

[00:37:13.200] – Bartosz Goralewicz

Once this happens, I’m assuming they’ll be pushing YMYL and all these different algorithms heavily, so the training data set is as clean as possible. This is just how I see the next step, because they have to clean this up. Obviously, there might be some technology for fact-checking, but I still think that if they do that, they will penalize websites that don’t state the correct data, the correct facts.

 

[00:37:42.560] – Cindy Krum

[inaudible 00:37:43] say, well, we won.

 

[00:37:47.620] – Bartosz Goralewicz

You’re cutting off. Can you say it again? The sound was off. If you could say it again. Sorry.

 

[00:37:52.960] – Cindy Krum

If Google were here, they would say, “Well, we don’t penalize it. We just were artificially ranking something that we thought we understood, but now we understand better and that’s not what we want, or we realized that we don’t understand this properly so it can’t rank.” But I think that’s where technology comes into this or the tech SEO aspect of it.

 

[00:38:14.880] – Cindy Krum

I’ve been focusing a lot on semantic understanding based on site architecture and things like that, because I think that’s important from a technical perspective. If Google can’t understand your content, or is unclear about it (even if they’re just unsure and they’re like, “We think this page is related to this page, but they’re in different directories and they do different things, and we’re not sure”), then they’re not going to try too hard. They’re just going to give up and go find something that’s easier to understand, because that’s better for their data centers, it’s easier on their processing, and it exists. There’s no shortage.

 

[00:38:52.060] – Cindy Krum

The problem Google has now is that there’s too much information, not that there’s not enough. And so they don’t have to put in a bunch of effort to get yours because they know there’s probably another one that’s just as good on the next thing that they need to crawl.

 

[00:39:07.860] – Cindy Krum

So that’s the thing; that’s another filter: the helpful content update. If they think it’s not useful, if it’s just another mortgage calculator or another article about the top 10 restaurants in Denver, what makes this one better than the thousands of other ones that they already have?

 

[00:39:29.440] – Bartosz Goralewicz

And we already saw that last October. Last October, real-estate websites got hammered. For a lot of websites, it was: if you have URLs indexed that have no user intent we can match them with, we’re going to wipe them out. Every single website got hit, including John Mueller’s websites, and our website with archived articles from 2015 or so. I think with more and more data, as you’re saying, they will be cleaning up the web, and this is going to help Google be faster at finding new valuable data and quickly deciding, okay, this is not something we actually need in the index at all. I think you’re spot on.

 

[00:40:09.180] – Cindy Krum

[inaudible 00:40:10] are mostly secure, but it’s the high-volume, lower-quality copywriting shops that churn out thousands of four-paragraph articles that are basically reused on every topical website. The civil defense attorney or whatever, they all have the same five articles that have been rewritten by this shop, and they just tweak them a little bit. That’s going to go away, because none of it’s helpful. And maybe you have a one-page website that’s great, but it probably isn’t going to show up, or at least it’s definitely not going to be promoted to any kind of AI-quality content.

 

[00:40:56.800] – Bartosz Goralewicz

Yeah, it’s okay. I think we have to wrap it up, because Cindy and I could go on about this for three more hours. But I think it’s safe to say, as a final sentence, that we’re both half-excited and half-terrified. I, myself, am happy to see that a lot of the SEO content, a lot of the stuff that was done just for SEO’s sake, is most likely going to be swept away. We already saw a little bit of that last year. I love that.

 

[00:41:27.880] – Bartosz Goralewicz

At the same time, the pace of that, the chaos around that, the lack of communication, and things like that are something I think we have to get used to. I don’t think it’s going to change dramatically. It’s going to be interesting. I hope this was useful for you folks, because it shows where we are going. For me, it’s pretty nice to see how this might turn out, because this is not something Google has done over the last month; we can see it as a continuation of the same trend. (My focus is out.)

 

[00:42:00.440] – Bartosz Goralewicz

Thanks so much, folks, for tuning in. I hope my camera is going to fix itself. Okay, I’m going to finish out of focus. We’re going to publish this webinar along with the transcript and links to Cindy’s website, her SaaS, and her services. Cindy is an amazing SEO; I cannot recommend her enough. She’s a good friend of mine and a true expert. I hope to have you in future webinars as well, Cindy. Thank you so much, and thanks again, folks, for tuning in.
