When I wrote my original post on Google Place Pages I was under incredible time pressure. But I wanted to take another look at what Google has done, because it’s potentially quite significant — although less significant than if Google were indexing these pages in search results (more on that below).
Here’s a search for SOMA, San Francisco — a neighborhood:
Click the map and then “more info” and you arrive at the new page — being created for every business, landmark/POI, neighborhood and city:
The new SOMA Place Page:
What’s there:
- Ads for things related to SOMA (although the Bing ad that appears really isn’t)
- Related maps that can offer a lot of value (created often by users, e.g., “Ben’s Guide to San Francisco”)
- Popular places (being indexed here may be significant for certain venues and attractions)
- Video and images
- Street View
That these pages are engaging and create highly targeted ad inventory should be obvious. They each have a URL, but Google told me they won’t be indexed (but see Mike B’s post). That’s probably a political decision, to keep people from screaming that Google is favoring its own properties. The only way to get to these pages, then, is to click on “more info.” Thus they’re somewhat buried. But people will probably discover them, given the visibility and volume of searches on Google Maps.
The big departure for Google, beyond the new format, is the creation of these pages for neighborhoods, landmarks and cities. Formerly information was mostly available for business locations. The addition of places and POIs makes these pages a potentially great discovery tool for travel and tourism. Indeed, Hitwise categorizes Google Maps under Travel.
Here’s a comparable page for a local business:
My belief is that increasing numbers of local businesses will claim these pages because they will be visible and widely consulted by consumers (notwithstanding my “buried” remark above). And again they’ll have their own URL so local businesses can link to them.
Imagine if Google were to index these pages in search results; their impact would be huge. But Google is saying it’s not going to. These pages aren’t yet available for mobile devices but some version of them will be in time.
What do you think the impact of these changes will be, if any, on:
- Small businesses
- Consumer behavior
- Locally targeted AdWords
September 27, 2009 at 3:49 pm
If they were to index these pages in their SERPs, it would be a game changer and a serious blow to directory publishers and other local media players. But because they need those large local media players to sell AdWords, I don’t think they will do it.
September 27, 2009 at 5:22 pm
I think it can be seen as very similar to Seth Godin’s controversial ‘Brands In Public’ – just on a massive scale and with ‘opt-out’ as the default (which is smart). What if they allow businesses – the nominal owners of the page – to ‘opt in’ to having them included in the SERP? In essence, haven’t they created a default web page for everyone who wants it?
I also think the new design makes it much easier for them to (selectively) experiment with different content and see how users respond to it — that’s probably another big reason for the re-design. Imagine them showing real-time information in another ‘box’ on the page. This design is much easier to experiment with and optimize.
September 27, 2009 at 5:53 pm
I agree, Seb. However, that same concern wouldn’t apply to indexing places/POIs/neighborhoods.
September 27, 2009 at 5:57 pm
@Greg With the current Yellow Pages publishers’ commercial mission, I agree. But it definitely would be in their interest to try to structure activity around non-commercial places/POIs/neighborhoods going forward. I wouldn’t leave that space to Google.
September 27, 2009 at 6:06 pm
I may have this wrong but see my post today: Where are Google Places Pages Going?. It appears to me that Google may be indexing them…
September 27, 2009 at 6:10 pm
Mike: That’s interesting because I asked them lots of questions about whether they were going to index and they said the pages would only be accessible via Maps.
September 27, 2009 at 6:11 pm
@Mike Whoa, that’s definitely a game changer. The Burdick Chocolate (http://www.google.com/search?hl=en&client=safari&rls=en&q=Burdick+Chocolate+Cafe+Boston&aq=f&oq=&aqi=) example clearly shows indexing. I wonder how they’re going to determine ranking and relevancy for their own pages.
September 27, 2009 at 6:13 pm
Seb: I don’t think the IYPs can match this. Maybe they can prove me wrong. Kosmix a long time ago started to do something very similar to this: http://www.kosmix.com/topic/las_vegas
September 27, 2009 at 6:13 pm
They won’t be indexed anymore, because
Disallow: /places/
appears in robots.txt.
But that wasn’t there right at the beginning (possibly an oversight, or only added in reaction to community feedback), so some pages have been indexed, but they will (or should) fade.
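For readers who want to see how that rule behaves in practice, Python’s standard `urllib.robotparser` can evaluate it. This is just a sketch; the sample URLs below are illustrative, not actual Place Page addresses:

```python
from urllib import robotparser

# The rule reported in Google Maps' robots.txt
rules = [
    "User-agent: *",
    "Disallow: /places/",
]

rp = robotparser.RobotFileParser()
rp.parse(rules)

# Any compliant crawler, Googlebot included, is barred from /places/ paths...
print(rp.can_fetch("Googlebot", "http://maps.google.com/places/12345"))  # prints False
# ...while other paths on the host remain crawlable
print(rp.can_fetch("Googlebot", "http://maps.google.com/maps"))  # prints True
```

Note that blocking crawling this way is not the same as preventing indexing, which is exactly the wrinkle discussed further down the thread.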
September 27, 2009 at 6:14 pm
Thanks Barry. It would be very bad form for Google to say X here and do Y.
September 27, 2009 at 6:20 pm
Ah, only just read Mike’s post; its try that they won’t be indexed in themselves. They can still be ranked by links to them, in which case each appears as a simple link – no snippet.
Interestingly, it’s mostly the places mentioned in all the blog posts talking about Place Pages that are, of course, being linked up…
September 27, 2009 at 6:25 pm
‘try’ in the first sentence was meant to be ‘true’ 😉
continuing… I’m not sure the ‘maps’ people can do anything about this type of indexing; it’s effectively out of their control. Google ‘search’ tries to be helpful in indexing all it can. The keywords in the URL obviously can’t be hidden. It would probably need a ‘fix’ by the search team to exclude these.
September 27, 2009 at 6:48 pm
Here’s another (possibly hare-brained) scheme. Think of these pages as landing pages. Google could provide tools that optimize the performance of these pages, both for incoming ‘organic’ traffic and for AdWords campaigns — possibly in a fully automated way. This could lead to a greatly simplified way for merchants to advertise using AdWords (or other ad vehicles for that matter – mobile, display, etc.).
(And they could give merchants a Google Voice number for tracking the calls.)
September 27, 2009 at 6:59 pm
@pedictabuy not hare-brained at all… it makes the 60% of SMBs that don’t have websites better targets for AdWords, if not by Google then by resellers.
September 27, 2009 at 7:02 pm
@Mike can a third party manage a merchant page? If yes, then your reseller idea makes even more sense.
September 27, 2009 at 7:02 pm
The idea that these could function as landing pages is not far fetched at all.
September 27, 2009 at 7:06 pm
@Sebastien it is not ideal but yes they can manage the business listing in the LBC with the businesses permission and help with verification.
September 27, 2009 at 7:08 pm
Although it will need to be tested, as Google’s algorithm-based content checking in the LBC flags anything with the word Google in it, even a Google Sites URL…
September 27, 2009 at 7:12 pm
Seb: Look at it this way — anyone can link to these pages and anyone (unless claimed) can edit them.
September 27, 2009 at 8:41 pm
Just wrote a post outlining in more detail how the Place Pages are designed for optimization and could be used as landing pages: http://bit.ly/2JJKZm.
September 27, 2009 at 8:57 pm
[…] Place’s pages were introduced it was noted that they were not going to be indexed (there is a great discussion going on at Greg’s blog now) leaving the impression amongst many that they would sit, […]
September 27, 2009 at 11:01 pm
[…] wrote a post that discusses the potential indexing of Places Pages. And on my personal blog, Screenwerk, there’s a discussion in the comments about whether this is inevitable and the potential […]
September 28, 2009 at 12:40 am
[…] get indexed and start appearing in search results for local places and local business names then it becomes a game changer. The major Internet Yellow Pages (IYP) sites must all be doing laundry right now to clean out the […]
September 28, 2009 at 7:27 am
Greg,
No conspiracy theory here; as I mentioned when we talked, Place Pages are not meant to be crawlable with this launch. This was an oversight on our part – we didn’t block all URL paths and left maps.google.com/place open. It’s now closed in robots.txt, and we’ll make sure all other paths are blocked as well.
Lior.
September 28, 2009 at 12:52 pm
Greg,
I always read your blog, and one of the things that’s always present is “the local flavor.”
How do you think this new release from Google will compete against some already well-established local guides like Yelp, Yellow Pages, Citysearch, etc.?
At first sight it seems that it’s direct competition… but Google will have to collect the data it shows from somewhere (maybe from these sites?)
September 28, 2009 at 3:17 pm
Thanks, Lior, for clarifying.
September 28, 2009 at 3:23 pm
Google has said these pages won’t be indexed, so the competition will be indirect. They will, however, strengthen Google Maps as an overall competitor.
September 28, 2009 at 4:06 pm
Lior, you might want to read Eric Enge’s interview with Matt Cutts or speak directly to Matt about this:
http://www.stonetemple.com/articles/interview-matt-cutts.shtml
Google’s interpretation of robots.txt rules has been a bit more literal than other search engines’ – Google will index pages that are disallowed under robots.txt; it just won’t crawl them.
So, you may be disallowing and you may be closing up link leaks in your structure, but these Place pages will be indexed as people outside of Google link to them, so you’ll have to do something else to keep Google from making those appear in the index altogether.
Incidentally, I’d be very interested in hearing what that would be. As Google now interprets more JavaScript and Flash, along with this literal interpretation, there are fewer and fewer ways of keeping some URLs from being indexed. If you have some other tool or protocol for doing so, it could be very helpful to the SEO community. (I’ve been called in, for instance, when highly sensitive banking-industry pages have been accidentally indexed, and if there are links out of one’s control pointing to them, it can be quite difficult to put the cat back in the bag, so to speak.)
September 28, 2009 at 4:13 pm
I know it’s easy to assume Google has a big plan to take over with these pages, but they said they wouldn’t be indexed. So why are they showing up? Easy. They failed to understand the difference between the robots.txt file and the meta robots tag (an easy mistake to make; site owners struggle with this). Many people have been telling Google for years that the robots.txt block should be enough to totally keep pages out of the index, but oh no, Google just had to have a way to keep showing pages. Well, now it bites them in their own butt a bit. I’ve postscripted our SEL post to explain this more.
September 28, 2009 at 4:22 pm
@Danny,
How does the meta robots tag have anything to do with this?
The meta tag appears in the actual page, so it can only have an effect if the page is actually crawled. But because the page can’t be crawled, due to the robots.txt rule, Googlebot will never see such a meta tag.
And if you mean a ‘nofollow’ in the page containing the link – for the most part those pages are going to be outside the control of Google; for example, all these blog posts talking about Place Pages are creating loads of links to them (which are then potentially ranked).
I agree with Chris that there is no real way to remove these results without higher intervention from the web search team. However, maybe the ‘Remove Directory’ tool in Webmaster Tools could be used, if the Maps team has access to it for maps.google.com.
September 29, 2009 at 2:28 pm
@Barry and Danny,
I initially agreed with Danny in that the meta noindex tag would definitely prevent the pages from appearing in the index. My thought was that Googlebot was disregarding the robots.txt file and partially indexing the page because of heavy linking. However, look at the title they are using for the recently popular search for “burdick chocolate cafe boston”
The Google Places search result listing uses the title:
“Burdick Chocolate Cafe in Boston”
However, the html page title on the page is:
“Burdick Chocolate Cafe – Google Maps”
This must mean that Googlebot is never accessing the pages. Google will modify meta descriptions but never page titles. So is Google getting this information directly from the anchor text that is linking to this page?
September 30, 2009 at 1:48 am
Sorry, but over at TechCrunch, Matt Cutts followed up with another reply regarding the URLs showing up in search results even though robots.txt files prevented the crawling of the pages.
He referred to a previous post of his regarding this issue here: http://www.mattcutts.com/blog/googlebot-keep-out/
Matt said:
“You might wonder why Google will sometimes return an uncrawled url reference, even if Googlebot was forbidden from crawling that url by a robots.txt file.”
“There’s a pretty good reason for that: back when I started at Google in 2000, several useful websites (eBay, the New York Times, the California DMV) had robots.txt files that forbade any page fetches whatsoever. Now I ask you, what are we supposed to return as a search result when someone does the query [california dmv]? We’d look pretty sad if we didn’t return http://www.dmv.ca.gov as the first result. But remember: we weren’t allowed to fetch pages from http://www.dmv.ca.gov at that point. The solution was to show the uncrawled link when we had a high level of confidence that it was the correct link. Sometimes we could even pull a description from the Open Directory Project, so that we could give a lot of info to users even without fetching the page.”
September 28, 2009 at 4:40 pm
Barry, the noindex meta tag tells the engine not to INDEX the page. So, yes, the page has to be crawled for that to happen, but it would keep the page from showing up in search result listings.
So, this is partly what I was alluding to — if you’re Google Maps and you don’t want to spend tons of unnecessary CPU cycles delivering pages to Googlebot only to have them NOT indexed, you’re out of luck. You either have to expend those CPU cycles telling Googlebot not to index, or you end up not truly keeping pages out of the index — which is their current dilemma.
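To make the two mechanisms in this exchange concrete, here is a hypothetical sketch (not Google’s actual markup) of the meta robots tag versus the robots.txt rule already quoted in this thread:

```html
<!-- A noindex directive lives in the page itself, so the crawler must fetch
     the page before it can learn that the page should not be indexed: -->
<meta name="robots" content="noindex">

<!-- By contrast, a robots.txt "Disallow: /places/" stops the fetch entirely,
     so a tag like the one above would never be seen, and the bare URL can
     still appear in results on the strength of external links alone. -->
```

That is the dilemma in a nutshell: the directive that truly removes a page from the index requires exactly the crawling that robots.txt forbids.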
September 28, 2009 at 4:45 pm
I just did a quick analysis over at http://www.seeingforests.com/google-places-redux/ on the different kinds of content available at different page “levels.” It was useful in the context of looking at the balance between the auto-generated and the human-curated place pages. I’m not sure about the indexing, but I’m wondering whether simply having more of this kind of content might address the “Local Paradox” by making people more aware that they CAN search for this kind of content, and by so doing drive traffic across the local spectrum.
September 28, 2009 at 4:47 pm
Sorry, yes, that was my point: they need robots.txt to prevent crawling, in which case the meta tag is useless. I wasn’t considering that they could do it without the robots.txt rule 🙂
September 29, 2009 at 5:53 am
Is this new?
Blue Bottle Cafe
This place has unverified edits. Show all edits »
The reveal presents the history of edits. I hadn’t noticed this before.
September 29, 2009 at 9:59 am
Not sure re the history of edits. It could be.
October 8, 2009 at 4:18 pm
[…] issue Mike, David and I talked about was Google’s non-indexing of the Place Pages. The decision was likely made to avoid alienating Google’s reseller partners, many of which […]