Bing it on! Comparing Bing Snapshot with Google Knowledge Graph

When we check search engine market share, Google clearly dominates. At a global level, Google is estimated to enjoy around 58%, while the second player is China's Baidu with around 29% market share. Bing takes the third spot with close to 8%. Considering Baidu is predominantly focused on the Chinese market, Bing could be considered the second most preferred search engine worldwide. In the US market, Google enjoys around 68% share and Bing close to 19%.

We have been discussing Google algorithm updates in the last few blog posts. A natural question that arises is: do Bing and other search engines also follow a similar algorithm update exercise and technology? In this post, let's try to answer these questions briefly from Bing's point of view!

Bing was launched in 2009 as the successor to Microsoft Live Search. While still not a serious contender to Google, Bing offers most of the features Google offers, if not more. Bing has:
  • The Bing search engine (like Google Search)
  • Bing Maps (like Google Maps)
  • Bing Local (like Google + places)
  • Bing Satori (like Google Hummingbird)
  • Bing Snapshot (like Google Knowledge Graph)
  • Bing Cortana (like Google Now)

The biggest difference (read USP) of Bing when compared to Google is the integration of a social element. I consider the social component to have two parts. The active part enables you to log in to your Facebook account, which activates a social sidebar while you search in Bing. You can use this feature to search for friends near your locality while traveling, ask for suggestions and so on. The passive part surfaces results from Facebook, Twitter, Quora and other social platforms in Bing search results (for example, a Pin It option in image search, or Klout and community Q&A results). Unfortunately, some of these features are not available worldwide (for example, Bing tags are available only for US users).

Since we have been discussing entity search in the last few blog posts, let's try to understand how Bing compares with Google's Knowledge Graph arsenal. Like Hummingbird, Bing powers its entity search with a technology called Satori. As explained in the blog post on Knowledge Graph, Bing Satori and Google Knowledge Graph try to understand a search query (whether person, place, thing and so on) as an 'entity' that has several possible connections with other 'entities' in the web world. Thus 'Mahatma Gandhi' as an entity can be related to the 'India' entity through the relationship 'Father of the Nation'; to the 'Kasturba Gandhi' entity as 'wife'; to the 'Jawaharlal Nehru' entity as an 'Indian freedom fighter' and so on. The beauty of Bing Satori is that it brings in the social angle: you need not be a celebrity to get into Bing Snapshot, as opposed to Google Knowledge Graph.

If we compare Snapshot and Knowledge Graph head on, both look pretty much the same. However, one cool feature I liked in Bing: as soon as you search for something, if Bing recognizes the entity relationships, it shows a preview in the search suggestions themselves! Moreover, you can click the links in the preview to go directly to another result if that interests you more. For example, consider the search query 'actors in bangalore days' below. Without looking into the actual search results, Bing offers an option to go directly to the search result for 'Fahadh Faasil'!

We can do a head-to-head comparison of search results in Bing vs. Google. I will leave that to you…isn't it fun?  :)

Here I will try to provide some examples where Bing provides a better result than Google. Let's start with the 'Search Suggest' option. Provided below is an example for the search query 'Java'. I am logged in to both my Google and Microsoft accounts. I believe the association with Java the software (instead of Java the place!) could come from my search history. The interesting piece is that Bing provides internal web page links in the suggest area if it understands the entity relationship! Isn't that cool…?

Bing Snapshot is more feature-rich than Knowledge Graph in certain situations. For example, if the search query is related to audio (e.g. 'Indian national anthem'), a link to listen is provided; similarly, if the celebrity has spoken at TED, Snapshot provides a link to the talks, and so on. Bing Snapshot also aims to be a true knowledge provider. For example, try searching 'dolphin'. The amount of information provided by Bing Snapshot is way superior to Google Knowledge Graph! Having said that, it fails on a lot of search queries (for example, try 'fermentation').

Overall, I believe Bing provides some interesting 'information' when it comes to entity search, and it could very well be a good competitor to Google in the future. However, with the strong position Google enjoys and the inertia of moving away from the 'Google it' psyche, Bing has a long way to go…

What do you think…?

Google Hummingbird update and the opening up of entity search

Unlike the Penguin and Panda updates that we discussed in the previous posts, Google Hummingbird is an update to the search platform itself. It was released in September 2013 and is considered the first update of its kind since the early 2000s. In essence, Hummingbird tries to add intelligence to the whole search phrase by considering its meaning. If PageRank was the buzzword of the 2000s, Hummingbird and entity search are the buzzwords today. In my opinion, this was not an overnight change, but an experiment-and-improve approach starting with the introduction of Knowledge Graph.

Before we dive deep into Hummingbird, let’s focus very briefly on two associated concepts for this blog post.

Google PageRank
If you have been in the search industry for any length of time, you know this is the ABC of how Google works. PageRank was, and may still be, the holy grail of how the Google search engine works. Having said that, various studies have indicated PageRank is just one of over two hundred components, or 'signals' as they call them, in deciding the search results. At the very basic level, PageRank is an algorithm component in which links to a website/page are considered votes and in turn used to decide the relevancy and credibility of that web page. The caveat here is that the quality of links is also considered; thus a merely high number of inbound links won't get you a better PageRank. Provided below are two articles that explain the topic exhaustively.
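The "links as votes" idea can be sketched with the classic power-iteration computation. This is a simplified model, not Google's actual implementation; the link graph and the damping factor of 0.85 below are illustrative assumptions:

```python
# Simplified PageRank: each page distributes its score evenly across its
# outbound links; a damping factor models a searcher randomly jumping pages.
def pagerank(links, damping=0.85, iterations=50):
    """links maps each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1 - damping) / n for p in pages}
        for page, outlinks in links.items():
            share = rank[page] / len(outlinks) if outlinks else 0.0
            for target in outlinks:
                new_rank[target] += damping * share
        rank = new_rank
    return rank

# A tiny made-up link graph: 'home' receives the most inbound links (votes).
graph = {
    "home": ["about"],
    "about": ["home"],
    "blog": ["home"],
    "news": ["home"],
}
ranks = pagerank(graph)
# 'home' collects the most votes, so it ends up with the highest rank.
```

Note how 'blog' and 'news' receive no inbound links at all, so they keep only the baseline score from random jumps, which is exactly the "votes decide credibility" intuition above.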

Google Caffeine Update
Caffeine was a 2010 infrastructure update to the Google ecosystem, unlike a search algorithm change. Before Caffeine, crawling and indexing were done in batch mode, so irrespective of how large the batch was, web documents were pushed live only after the complete indexing procedure. With the Caffeine update, Google can crawl a page, update the index and push it live in a matter of seconds! This enormously improves the searcher's experience. It is also expected that the storage capacity and index size increased with this infrastructure update. Haven't you seen the search results changing on the fly as you type into Google? Thank the Caffeine update!

Google Knowledge Graph
We discussed Knowledge Graph in the last blog post. In short, Knowledge Graph tries to provide information about search queries rather than mere links to web pages that talk about the query. It treats data as entities and defines relationships between those entities.

Read the blog post to understand better what Knowledge Graph offers you today.
Today, we are moving away from search based on documents and links towards search based on data and relationships, and Knowledge Graph is the backbone. While the introduction of Knowledge Graph made search results more fact- and information-oriented, Hummingbird opens up the world of semantic and entity search. It is the beginning of conversational search, which tries to give direct answers to queries of the 'what is', 'when is' and 'what for' type. In short, Google tries to understand the intent of your search. For example: are you trying to learn about a newly launched mobile phone, are you comparing models, or are you looking to buy one now? The search results vary depending on how Google interprets the intent of your query. Some of the factors Google could use to give meaningful search results are:
  • Synonyms of keywords
  • Keyword location / substitution and analysis of co-occurring terms to gauge the meaning
  • Geo location
  • Search device

The last leg of the Hummingbird update is the incorporation of voice search, especially on mobile devices, and how search results are shown there (no surprise, since factors like geo-location can be better identified, thus personalizing the search result).
We will look into entity search in detail in another blog post. However, let's look into one of the foundational components that drives this showcase of facts and information. It's all about making and utilizing structured data, and Schema is one way to achieve it. Schema is a markup that provides meaning to web page components. For example, if you are talking about reviews on your website, you can use the markup to identify the section as Reviews so that a search engine can pick up the rating directly for an associated search query. Schema allows you to define entities: for example, a Product, its specifications, its reviews, its price and so on. Another example: the rich snippets we discussed in the blog post about Knowledge Graph are powered by Schema. Schema is not the only markup; we have others like RDFa, microdata and so on. Thus we could consider markups the underlying requirement for semantic search.
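As a sketch of what such markup can look like, here is a hypothetical Product entity expressed in the schema.org vocabulary as JSON-LD (one of the supported formats alongside microdata and RDFa). The product name, price and rating figures are invented for illustration:

```python
import json

# A made-up product described with schema.org vocabulary. A search engine
# that reads this can show price and rating rich snippets directly in SERPs.
product_markup = {
    "@context": "https://schema.org",
    "@type": "Product",
    "name": "Acme Noise-Cancelling Headphones",  # illustrative product
    "offers": {
        "@type": "Offer",
        "price": "4999.00",
        "priceCurrency": "INR",
    },
    "aggregateRating": {
        "@type": "AggregateRating",
        "ratingValue": "4.3",
        "reviewCount": "127",
    },
}

# On a real page this JSON would sit inside a
# <script type="application/ld+json"> tag in the HTML.
print(json.dumps(product_markup, indent=2))
```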

Stay tuned for more on the world of semantic search…

Google Knowledge Graph – Gaining knowledge without visiting web pages

We have been discussing various Google algorithm updates. As you might have guessed, next in line is the Google Hummingbird update. But before diving into Hummingbird, I thought of covering Knowledge Graph in this post, since it's the foundation stone. In a later post, we will dive deeper into Hummingbird and the whole new world of semantic search.

In the world of PageRank, we were dealing with unstructured data; i.e. search engines looked for keywords independently and tried to gauge the relevancy of a web page for a keyword. We are presently moving to a world of structured data, and thus to semantic search. Semantic search tries to gauge the meaning of search queries with the help of microdata, schemas, RDF (Resource Description Framework) and more. Google started moving in this direction with the introduction of Knowledge Graph in 2012. From a layman's point of view, Google started showing facts in addition to the usual search results for keywords involving famous personalities. From a slightly technical point of view, this was achieved with the help of the link graph model that Google incorporated in its algorithm.

Let's fast forward to November 2014 and see where Knowledge Graph is today. An example search for 'baba amte' results in a knowledge carousel on the right that pulls information from Wikipedia (or other credible sources), along with the usual search results.

If we dig a little more and click 'more' in the Awards section, it takes us to another search result, this time for 'baba amte awards'. It gives the details of the awards Baba Amte received in the search result itself, as below. Essentially, search engines are moving towards not making searchers surf various web pages from SERPs; instead, they provide all the information (or at least the basic facts) in the SERP itself! Google is able to provide information at this level because it crawls and groups information on the web into meaningful entities – say, social workers, politicians, actors and so on. The interesting part is that not only are the entities considered, but also the relationships between them – for example, actor-award-films-personal. We will discuss entities in detail in a future post.

Google + and Knowledge Graph
Though Google+ is not as relevant any more since Google reduced its importance, an additional entry in the panel is the connection with Google+. Let's assume the celebrity you searched for is also active on Google+. In that case, their Google+ posts will also be shown in the knowledge panel (carousel). The panel continues to evolve with other options such as 'Keep me updated' and more. Some other interesting tidbits provided by the Knowledge Graph include:
  • Relationship information provided in the 'People also search for' section. If you search for an actor, the knowledge panel shows a hover assist with information on how the searched actor and the other 'also searched for' persons are related.
  • Social media profile links. If the person (entity) you searched for is active on social media platforms, links to those profiles are provided in the knowledge panel.
  • Nutrition and other information for food categories. For example, consider 'potato': it provides nutrition information and what not! The options vary depending on the query.
  • An 'In the News' section. For relevant search queries, results based on news articles are provided separately.

Now let's focus our attention on the top section of the SERPs. Consider the search query 'Thailand tourism'. The search results provide a set of possible tourist places along with links and photos. If you click on a particular place, the search results automatically change to the associated place! A similar carousel is shown for queries like 'things to do in thailand' or 'places to visit in Bangkok' and so on.

Birth dates to current city temperature
Now consider another example search query – 'when is mahatma gandhi born'. It gives the date of birth and, in addition, provides the birth dates of related persons! Next, consider an example with a flight number – EY283. Or 'Abu Dhabi to Kochi'. Other examples include answers like time information, simple calculations, temperature and more. As you can see, the search engine is trying to become an answer engine instead of just a search engine!

The addition of such knowledge entities has become so diverse that it's limited only by imagination. It's reported that the Knowledge Graph has integrated options such as zip codes, hangout options, comparisons between entities and even step-by-step instructions! The latest in the Knowledge Graph world is the use of structured snippets to show facts in search results. With this inclusion, information is extracted from data tables in a web page and shown in the search results. An example in the Google blog post shows features of a Nikon camera being automatically picked up in the search result, but I couldn't find a similar view yet.

Isn’t the world of semantic search exciting….?

A note on Google Pigeon algorithms

Now that we have understood a bit about local SEO, let's continue our discussion on Google algorithm updates. Today we will look into the Google Pigeon update, the youngest of the Google algorithm updates. The first Pigeon update was rolled out in July 2014. While Panda updates focus on quality of content and Penguin updates on link building practices, the Pigeon update specifically focuses on local search.

In essence, the Pigeon updates assign more weight to locality over authority. They also addressed some concerns SEOs had with local search, such as the undue importance given to Google+ pages and the removal of results from local search directories like Yelp. The update is expected to affect only Google US search queries. Since the Pigeon update is comparatively new and affects only local search, it's still too early to really understand its effect. However, it has been reported that the Pigeon update hurt a few sectors like real estate while benefiting sectors like education and hospitality. Let's try to dissect what the Google Pigeon update changed.

Directories favored over local business listings

As mentioned above, after the Pigeon update, search results from directories like Yelp or Tripadvisor made a comeback. This also means you should have a consistent citation across directories.

Change in local listing pack
The local listing pack or carousel changed after this update. It is reported that the number of listings in the pack was reduced from seven to three for some search queries, and that the local listing pack itself stopped showing for many search keywords. Another expected change from this update is a reduction in duplicate results – i.e. a website/business may not be shown in both the organic search results and the map pack at the same time.

Reduction in search radius
After the update, it is reported that Google reduced the search radius used for local search results. This means the results for a search query will show listings much closer to the searcher's location.

Brand v/s Local business
It is reported that in the map listing and carousel, local business listings are shown in a comparatively higher proportion than brands for the same search queries. Also, the association between the search keyword, the brand name and the domain appears to have become more positively correlated.

Long Tail keywords still the key
Optimizing for long-tail keywords continues to show increased traffic. A combination of long-tail keywords and targeted location specifics is now required. For example – '2BHK suite apartment in 13 Birds Road'.

Book Review : Why I failed by Shweta Punj

Why I Failed – Lessons from Leaders by Shweta Punj attracted my attention since most books on business/self-help talk about success or how to succeed. This book is a collection of failure stories from well-known personalities. I started reading it expecting a different perspective; but to be honest, it disappointed me.

Most of the failure stories are told at a very high level and fail to describe the details. I am sure most readers wanted that extra piece of narrative on what happened, how it happened, its consequences and how the leaders overcame it. Sometimes I felt the book is just another one talking about success and how to succeed :) The only thought I could take away from the book was that everyone, including the leaders we adore, goes through phases of failure.

The book tells 16 stories of leaders from various domains. The ones I found inspiring were those of Sabyasachi Mukherjee and Sminu Jindal. The least digestible were those of Abhinav Bindra (was the failure really due to the pressure of expectations, or just overconfidence?) and Narayana Murthy (because I have read marvellous narratives in other articles).

Overall, a good read; but don't over-expect. I felt it to be more a collection of how-they-succeeded stories than failure stories!

Understanding Google Panda algorithm updates

In the last blog post, we discussed the Google Penguin updates. Today, let's try to understand the Google Panda update. The first release of Google Panda was in February 2011. While Penguin updates went after backlinks, the main purpose of Panda updates is to watch the quality of website content. I believe by far the biggest effect on search results was made by Panda updates. Panda fights scraper sites and content farms. A scraper essentially copies content from various websites using web scraping, which can range from simple manual copy-and-paste to sophisticated web scraping software. Content farms, on the other hand, are websites with a large volume of low-quality content, often copied from other websites or duplicated across multiple pages of the same site. Content farms usually spend effort understanding popular search keywords and spinning out content based on those phrases. Thus the Panda update was also called the Farmer update :)

Unlike Penguin updates, Google Panda updates are more frequent; the algorithm is believed to have seen close to thirty refreshes. As mentioned, quality of content is the main focus of these updates. However, quality isn't confined to just the written content. Some of the major factors that could determine the quality of a website are:

Quality of written content
Are you an authority on the topic? Is your content original? Will a reader benefit from the content you have put on your website? These are the factors that determine the quality of written content. They are of utmost importance for e-commerce sites, where there is a tendency to use minimal, duplicate or manufacturer wording as product descriptions.

Duplicate content
If you are using the same content on different pages of your site (even with minimal wording changes), you risk getting hit by Panda. One place this shows up unexpectedly is blogs: for example, if you use both tags and categories in your blog, the resulting archive pages could be considered duplicates. But don't worry – techniques like noindex tags help overcome this issue. Another method is to use canonical tags to tell Google which page to index. Duplicate content is not limited to your own website; it could be outside duplicate content too – for example, micro-sites or your own blogs where you copy-paste the same content or make only minimal changes. Here is a good Moz article that describes the various duplicate possibilities in the eyes of Google.
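As a minimal sketch, the two de-duplication techniques mentioned above boil down to a couple of tags placed in a page's head section (the URL below is a placeholder):

```python
# Two standard de-duplication signals for search engines:
# a canonical link naming the authoritative copy of a page, and a
# robots noindex meta tag to keep duplicate archives out of the index.
def canonical_tag(url):
    return '<link rel="canonical" href="%s" />' % url

def noindex_tag():
    # 'noindex, follow' drops the page from the index but still
    # lets crawlers follow the links on it.
    return '<meta name="robots" content="noindex, follow" />'

head_tags = [
    canonical_tag("https://example.com/books/half-girlfriend"),  # placeholder URL
    noindex_tag(),
]
```

A blog would typically emit the noindex tag on tag/category archive pages and the canonical tag on the post itself, which is exactly the tags-vs-categories fix described above.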

Let's take an example to better understand the above two points. Consider the book Half Girlfriend on Amazon or Flipkart. As you can see, the product description is different on each, and both differ from the description Chetan Bhagat gives on his own website. Also, we can reach the product page on Amazon or Flipkart via multiple navigational paths or URLs (category, search, direct etc.), but Google indexes only the main product page.

Ad to content ratio
One of the elements the algorithm uses to test quality is the number of ads shown. While there is no published optimal ratio, it's best to be judicious and keep the number of ads minimal to stay safe. It's expected that the algorithm has matured enough to understand the ideal duplicate-content and ad-to-content figures.

Technical aspects
Technical aspects also differentiate the quality of a website, including performance metrics such as page load time, UX aspects like navigational ease and design, and factors like proper redirects, 404 pages and so on.

Finally, here is a Google Webmaster Central blog post on what Google thinks a high-quality website means.

While the overall focus of Panda updates was the quality of websites, the third version (specifically 3.3 onward) also addressed a few other aspects. One improvement was around local search results, concentrating on showing relevant results from a user/content proximity point of view. Another improvement was related to link evaluation (using link characteristics like wording, number of similar links etc. to understand the content of the linked page) – something taken to an altogether different level with the Penguin updates. Freshness of the website was also considered in this update.

Google Panda updates from 3.0 onward became so regular that it was difficult to focus on each one specifically; industry experts even opined that the dead Google Dance was back with these regular refreshes. Panda refreshes are now tracked with consecutive numbers rather than a versioning system. From late 2013, Google started to include Panda updates with index algorithms, making them part of the normal algorithm changes Google makes throughout the year. Panda 4.0 was released in May this year, specifically improving how Google views the authority and originality of articles on the web. Here is a good case study on this from Search Engine Land. Another effect of Panda 4.0 was on how PR websites are ranked; many were affected and had to redefine their publishing guidelines. The latest one was 4.1, the 27th Panda update, in September 2014, which is expected to help small websites with original content. Websites taking recovery actions are also expected to benefit from this update.

One thing we can understand by studying about these updates and their history is that all SEO practices revolve around these updates (like link building, content marketing and so on…)

Understanding Google Penguin updates

Let's talk about various Google search algorithms in the next few posts.

The main Google algorithm updates that we often hear about are the Hummingbird, Panda and Penguin updates. Other components include Pigeon and Knowledge Graph. It's interesting to note that most of these names were not coined by Google, but by various industry sources in the SEO world. Also, the above-mentioned names are the ones relevant today. If you have been a professional in this space for long, you will have heard of updates from Google Dance to Fritz to Caffeine and more. Moz provides a good history timeline of Google algorithm changes along with associated key news articles (Google Algorithm Change History by Moz). In the next set of blog posts, I will try to explain the different components of the Google search algorithm updates and how they may be relevant to your website.
Google Penguin Updates

What are Google Penguin updates?
Let's start with the Penguin update, since it had a recent refresh. When the Penguin update first launched, it focused on technical aspects of SEO like keyword stuffing and link building. Link building strategies changed completely because of the Penguin updates. Penguin has gone through six updates, and throughout, its main focus has been on backlinks and link building practices.
The first Penguin update came in 2012 and targeted web spam, specifically penalizing websites violating Google's webmaster or quality guidelines. These malpractices include keyword stuffing, link farms, usage of doorway pages and duplicate content. There were two more refreshes of Penguin 1.0 in 2012.

The next major update (Penguin 2.0) came in 2013. Among the aspects this update covered were how Google views advertorials and continued refinement of backlinking practices. Advertorials were not completely penalized; instead, by posting visible disclaimers, websites had to make sure readers could recognize them as advertorials. Fortunately, historic linking practices were not penalized retroactively; only malpractices after the Penguin 2.0 launch date were penalized. Another interesting change in this update was how an authority gets ranked better in a particular domain. For example, if your website is an authority on pediatric queries/articles, you may get a better ranking for those search keywords. Thus SEO effort became more aligned with content marketing strategies. This also meant a shift of SEO focus from outside to in.
The fifth update of Penguin (Penguin 2.1) was released by Google in October 2013. While there were no visible, significant changes in this update, industry experts consider it to have improved the level of link-bait analysis and to identify spam at a deeper level within websites. Inherently, recovering from Penguin updates meant looking at your link building practices, and there were many cases of recovery during this update. A study of the Google Penguin 2.1 update by MathSight later that year found a linkage between the quality of the content, anchor texts and links. Some of the factors included the quality of words and syllables in the body text, thus giving importance to readability. An impact due to anchor texts means there could be an impact due to internal links too: over-optimized anchor texts and associated internal links (like link grids) may be hit by the Penguin update if they were meant for SEO rather than UX. Other impacts of Penguin 2.0 included increased importance for contextual links (links that are contextual and relevant to an article), and penalties for article spinning and the unethical usage of micro-sites. Some industry experts, like Glenn Gabe, argue that Penguin also affects SEO tactics like forum spam.

The latest Penguin update came this year – Penguin 3.0, released on October 17, 2014. It is expected that in this refresh, some sites with bad linking practices will be demoted further, while sites hit earlier that have taken recovery actions may benefit.

How to test whether your site is affected by Penguin updates?
We can understand whether there was an impact by looking at the traffic.
  1. To understand the impact quickly, you can do a site-wide search on Google (using 'site:'). If you do not see any pages from your site in the results, you are affected.
  2. Use free tools like Fruition or Panguin. These require giving access to your site analytics. Essentially, these tools provide a traffic time chart overlaid with the key dates when the algorithm updates happened, so you can conclude your site is affected if you see a drop in traffic around those dates.
  3. If you think there is a possibility of a manual penalty from Google, you will see a message in Webmaster Tools.
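The overlay idea behind tools like Panguin can be sketched in a few lines: compare average traffic in a window before and after each known update date and flag a large drop. The daily visit figures below are made up for illustration:

```python
from datetime import date, timedelta

# Daily organic visits (illustrative numbers), keyed by date: steady at
# around 500/day, then a sharp fall from mid-October onwards.
traffic = {date(2014, 10, 1) + timedelta(days=i): v
           for i, v in enumerate([500, 510, 495, 505, 490, 500, 498, 502,
                                  490, 495, 505, 500, 498, 492, 500, 503,
                                  320, 310, 305, 300, 315, 310, 308, 312])}

PENGUIN_3_0 = date(2014, 10, 17)  # Penguin 3.0 rollout date

def drop_around(update_day, window=7):
    """Fraction of traffic lost: week before vs. week after an update date."""
    before = [traffic[update_day - timedelta(days=d)] for d in range(1, window + 1)]
    after = [traffic[update_day + timedelta(days=d)] for d in range(0, window)]
    avg_before, avg_after = sum(before) / window, sum(after) / window
    return (avg_before - avg_after) / avg_before

loss = drop_around(PENGUIN_3_0)
# A large loss landing exactly on an update date suggests the site was hit.
```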

How to recover from any Penguin penalties?
This is a comprehensive topic, beyond the scope of a single post. The most important step is a backlink analysis, and we have to be very careful while doing it to differentiate between links of various quality. The best place to start is Google Webmaster Tools, which provides a sample of links to your website in the 'Links to Your Site' section under Search Traffic. You can also use third-party tools like Majestic SEO or Open Site Explorer. Group the links into those you think are malicious, those of high quality, and so on. You can ask the website owners to remove any links you think are spammy or unworthy, or else use the Google disavow tool to discount the links from dodgy sites. But be very careful, since the disavow tool can cause unwanted issues; make sure to read the Webmaster Tools help on this topic.
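For reference, the file the disavow tool accepts is plain text: one URL per line, a `domain:` prefix to disavow an entire site, and `#` for comments. A minimal sketch with placeholder domains:

```python
# Build a disavow file in the plain-text format Google's disavow tool accepts.
disavow_lines = [
    "# Spammy links identified in backlink audit, Nov 2014",
    "domain:spammy-directory.example",            # disavow a whole domain
    "http://blog.example/low-quality-page.html",  # disavow a single URL
]

with open("disavow.txt", "w") as f:
    f.write("\n".join(disavow_lines) + "\n")
```

The file is then uploaded through the disavow tool in Webmaster Tools; Google treats the listed links as if they did not exist when assessing your site.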

If you are not affected by the Penguin updates, or are building a new website, then the best tactics to keep away from penalties are:
  • Keep the content relevant for the reader and not the search engine
  • Use social media to gain relevance and links
  • Avoid any paid link building schemes
  • Avoid misusing on-page SEO techniques

Understanding Local SEO

As I mentioned in the last post, I will continue to write about digital marketing topics for some more time :) In this blog post, let's try to understand Local SEO – what it is and how to do it.

Local SEO deals with search results within a defined locality or geographic location. It can be considered a subset of overall SEO efforts, focused on showcasing your web details in SERPs for location-based search keywords. The Google search algorithm has evolved to streamline and showcase search results based on locality, intent of search and so on. Let's take an example – 'Dominos Cochin'.

A search for Dominos Pizzas in Cochin provides search results as shown above. Two components are of interest to us. Along with the regular 10 search results, it shows a map at the top right with pins pointing to the locations of Dominos stores in Cochin, Kerala, India. On the left we can see Google Plus pages pointing to the Dominos store pages, populated from Google+ Local. Local SEO, in essence, is trying to get such a listing of your business to the top of such results. A similar experience is offered by other major search engines like Bing. For example, let's consider the same search query in Bing.

In Bing's search results, instead of Google+ Local pages we see the Foursquare pages of Dominos. As you might have guessed, then, the first step in local SEO is simply creating accurate business listings in Google+ Local, Foursquare, or Yahoo Local. An accurate listing helps populate both the map and the local search result carousel. That alone is not enough, though; another important aspect is citations. A citation is nothing but a mention of your business in other online directories, such as Yelp, Tripadvisor, or Justdial. Any piece of information that can identify your business, like the company name or phone number, counts as a citation. The most important caveat is to keep this information consistent across all these platforms. In a way, this is going back to the old days of SEO: adding your site to directories. That said, citations are not limited to directories; the platforms could be anything from Google+ pages to niche sites like Tripadvisor, or simply your own blog.
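Because NAP consistency across citations matters so much, it is worth checking your listings mechanically. Below is a hedged sketch of such a check; the business name, listings, and normalization rules are all fabricated for illustration.

```python
# Hedged sketch: flag citation sources whose NAP (Name, Address, Phone)
# details differ from your canonical listing. All data here is invented.
import re

def normalize(nap):
    """Normalize a NAP record so cosmetic differences don't count as mismatches."""
    return (
        nap["name"].strip().lower(),
        re.sub(r"\s+", " ", nap["address"].strip().lower()),
        re.sub(r"\D", "", nap["phone"]),   # keep digits only
    )

def inconsistent_sources(listings):
    """Return sources whose normalized NAP differs from the first (canonical) one."""
    baseline = normalize(listings[0][1])
    return [src for src, nap in listings[1:] if normalize(nap) != baseline]

listings = [
    ("website",    {"name": "Acme Pizza",  "address": "12 MG Road, Cochin",  "phone": "+91 484-555-0101"}),
    ("foursquare", {"name": "acme pizza",  "address": "12 MG Road,  Cochin", "phone": "914845550101"}),
    ("yelp",       {"name": "Acme Pizzas", "address": "12 MG Road, Cochin",  "phone": "914845550101"}),
]
print(inconsistent_sources(listings))  # → ['yelp'] (the name differs)
```

The Foursquare entry passes because only spacing, case, and phone formatting differ; the 'yelp' entry fails because the business name itself is different, which is exactly the kind of inconsistency search engines can penalize.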

Adding your listing to these directories is just the first step. The next obvious step is to keep the listing vibrant, and one way is by managing reviews on these sites. Tripadvisor is a good example: suppose you run a resort in Chennai; reviews, testimonials, and photographs uploaded by both you and your visitors add credibility with search engines and prospective customers alike. In essence, this is where SEO overlaps with community management and social media. The same applies to Google+ Local and Foursquare.

The next pillar of your local SEO is your website itself. Including the same consistent citation on your website, whether on your contact page or in the footer, is very important; it is in fact one of the signals search engines use to decide the local relevance of a website or business. Using schema markup here helps a great deal. Schema markup conveys to search engines details such as reviews and business hours from your website content, and helps populate them in search engine result pages.

For example, schema.org's openingHours property tells search engines that your business is open Monday to Saturday from nine in the morning to nine at night. If your site comes up in the SERPs for a keyword, those hours can then be shown alongside your listing.
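As a minimal sketch, LocalBusiness markup with those opening hours could be emitted as JSON-LD. The business details below are invented, but the @type and property names follow the schema.org vocabulary.

```python
# Sketch of schema.org LocalBusiness markup rendered as JSON-LD.
# The business itself ("Acme Pizza") is hypothetical.
import json

business = {
    "@context": "https://schema.org",
    "@type": "LocalBusiness",
    "name": "Acme Pizza",
    "telephone": "+91-484-555-0101",
    "address": {
        "@type": "PostalAddress",
        "streetAddress": "12 MG Road",
        "addressLocality": "Cochin",
        "addressCountry": "IN",
    },
    # Open Monday to Saturday, 9 am to 9 pm
    "openingHours": "Mo-Sa 09:00-21:00",
}

# Embed the output inside <script type="application/ld+json"> in the page head.
print(json.dumps(business, indent=2))
```

The same information can alternatively be expressed as microdata or RDFa attributes inline in your HTML; JSON-LD simply keeps the markup in one place.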

Schema markup is not limited to NAP (Name, Address, Phone) or working hours; for the complete list of possibilities, see schema.org's page on LocalBusiness. Another important use of schemas is in managing testimonials. Having a page or section on your website for testimonials, marked up in the proper schema format, not only builds credibility with visitors; search engines may also show the testimonials in search results.
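In the same hedged spirit, a single testimonial could be marked up with schema.org's Review type. The reviewer and review text below are invented; the property names follow the schema.org vocabulary.

```python
# Sketch of a testimonial marked up as a schema.org Review in JSON-LD.
# The business, author, and review body are fabricated examples.
import json

review = {
    "@context": "https://schema.org",
    "@type": "Review",
    "itemReviewed": {"@type": "LocalBusiness", "name": "Acme Pizza"},
    "author": {"@type": "Person", "name": "A. Customer"},
    "reviewRating": {"@type": "Rating", "ratingValue": "5", "bestRating": "5"},
    "reviewBody": "Great thin-crust pizza and quick service.",
}

print(json.dumps(review, indent=2))
```

One such block per testimonial, embedded in the testimonials page, gives search engines structured review data they may surface as rating snippets.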

Finally, we haven't talked about the most important player in local SEO: mobile. Day by day, mobile devices are becoming the backbone of local search. Better GPS capabilities, better screens, and better connectivity keep improving search quality on mobile devices. So the final pillar of an effective local SEO strategy is giving mobile its due: making your website readable on mobile, having an app strategy, and so on.

If you are looking for an exhaustive list of factors affecting local SEO, consider reading Moz's Local Search Ranking Factors.

Let's take a deeper look at the Google+ Local results appearing in SERPs. Whether your listing makes it into the local carousel depends on a couple of aspects:

  1. The proximity between the business and the searcher's location
  2. How dynamic your listed page is in these directories. This covers not only how accurate the information is, but also how creatively you have used the listing: have you used the Photos & Videos section? The Additional Details section?
  3. Signals from regular search, such as how well you are reviewed, how good your website is in general, and so on.