Net66: Google Celebrates Thor Heyerdahl with Google Doodle!

Google often celebrates notable people, specific dates, holidays and more with a Google Doodle. Today, they celebrate the author Thor Heyerdahl. In case you didn't know, he wrote The Kon-Tiki Expedition, an account of his voyage across the Pacific Ocean on a raft.

Take a look at the Doodle below:

[Image: Thor Heyerdahl Google Doodle]

Blog Post by: Greg McVey

Net66: Google Explain How They Decide The Optimum Frequency for Crawling Your Site

As we all know, there’s a huge invisible, intangible creature roaming the web reading everything out there! But the Google spider really isn’t that bad. All it’s doing is reading your website, jotting down the key points and then storing this information somewhere else. In fact, the Google spider is somewhat like a librarian.

For this reason, part of SEO is making sure your website is as easy for Google's spider to read as it is for a human. For example, your site could look amazing to the user, but the code used to write that beautiful landscape of words and pictures dedicated to your services could be as muddled and confusing as one of Dan Brown's books.

Another key part of optimising your site for the crawler is making sure it can be crawled as frequently as possible, because you'll want to update the site regularly with fresh, valuable, quality content.

Two main factors affect this, as confirmed by Gary Illyes (Google's Webmaster Trends Analyst):

HTTP Status Codes

There is a range of different HTTP status codes. The best known is 404 (Page Not Found), but there are also 301s, 302s, 200s and many more. Google can handle these codes quite well, but when it starts to receive codes in the 500 range, that can spell trouble.

The 500 range indicates that there's something wrong with the server. As such, Google won't want to risk doing any further harm to your website, so it will stop crawling you for a while, giving your site time to recover or be fixed.

If you do have 500 error codes, make sure you get them fixed right away and then fetch your website in Webmaster Tools to confirm that Google can read it again.
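As a quick practical check, you can poll your key pages yourself and flag any 500-range responses before they start affecting crawling. This is a minimal sketch only: it assumes the third-party requests library is installed, and the URLs are placeholders for your own pages.

```python
# Minimal sketch: flag 500-range responses on your own key URLs.
# The URLs below are placeholders - swap in your own pages.
import requests

urls_to_check = [
    "https://www.example.com/",
    "https://www.example.com/services/",
    "https://www.example.com/blog/",
]

for url in urls_to_check:
    try:
        response = requests.get(url, timeout=10)
    except requests.RequestException as error:
        print(f"{url} -> request failed: {error}")
        continue

    if 500 <= response.status_code < 600:
        print(f"{url} -> server error {response.status_code}: fix it, then re-fetch in Webmaster Tools")
    else:
        print(f"{url} -> {response.status_code}")
```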

Connect Time

As above, when Google detects a slow connection to your web server it will assume the worst: that your website is experiencing issues and that further connections could put extra strain on your server and eventually break it.

Due to this assumption, it will again limit the number of crawls that it will run on your website.
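Again as a rough sketch, you can time your own pages to spot a sluggish server before the crawler does. The threshold below is an arbitrary example, not a figure Google publishes, and the URLs are placeholders; it also assumes the requests library is installed.

```python
# Minimal sketch: time how long key pages take to respond and flag slow ones.
import time
import requests

SLOW_THRESHOLD_SECONDS = 2.0  # arbitrary example threshold

for url in ["https://www.example.com/", "https://www.example.com/contact/"]:
    start = time.monotonic()
    requests.get(url, timeout=10)
    elapsed = time.monotonic() - start
    verdict = "SLOW - investigate the server" if elapsed > SLOW_THRESHOLD_SECONDS else "OK"
    print(f"{url}: {elapsed:.2f}s ({verdict})")
```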

So, all in all, if you want Google to read your site correctly and frequently, clean up your 500 error codes and make sure your server is in tip-top condition.

Blog Post by: Greg McVey

Google Competitors Launch Website Claiming Google Hurts the Internet

Google’s competitors have today launched a new website explaining how Google is hurting the internet.

The group of companies is called Focus On The User, aptly named given that it accuses Google of promoting its own services ahead of competitors' websites.

The group includes companies such as HolidayCheck, alongside consumer and advocacy organisations such as Fight for the Future and Consumer Watchdog.

Take a look at the video they’ve released showing how Google hurts the internet:

Blog Post by: Greg McVey

Net66: Routemaster Bus Gets Google Doodle Update

Google are now famed for their Google Doodles, and it's no surprise that they're at it again. Today marks 60 years since the very first Routemaster bus. Take a look at the Doodle below:

[Image: Google Doodle celebrating the Routemaster bus]

A lot has changed in 60 years, but from the very first classic hop-on hop-off bus to the modern, Judge Dredd helmet-looking one, the iconic image of a red London bus endures.

[Image: The new Routemaster bus]

Blog Post by: Greg McVey

Net66: Autumnal Equinox Celebrated by Google Doodle [Images]

As we may all be aware, today is the Autumnal Equinox: the point in the year when day and night are roughly equal in length, sitting midway between the height of summer and the depths of winter. Unfortunately (or fortunately, if you like the cold like me) we're heading towards winter rather than summer.

The day also marks the first day of autumn. Traditionally the nights start drawing in, leaves turn golden brown and fall from the trees and, of course, the commercialisation of Christmas starts earlier and earlier every year.

Take a look at Google’s Doodle today to celebrate the start of Autumn:

[Images: Google's animated autumn Doodle, shown frame by frame]

What do you think of the Google Doodle?

Blog Post by: Greg McVey

Net66: China Have Blocked Access to DuckDuckGo in All Provinces

[Image: DuckDuckGo logo]

China have long been known as a country that can be a bit oppressive. Nowhere near North Korea's level of oppression, but the fact that China is North Korea's major ally shows that it's not opposed to restricting rights.

One of the main selling points of DuckDuckGo.com is that they fly the flag for internet privacy. For example, DuckDuckGo don't save your search data and don't store any of the information sent when you run a search.

For example, if you searched for "SEO in Manchester" on Google, Google would collect all the information it could from your search, including what operating system you're using, what browser you're using, what screen width you're using and even your IP address.
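As a rough illustration (not Google's actual logging; the search URL and User-Agent string below are made up for the example), here's the kind of metadata that travels with an ordinary search request. The User-Agent header alone identifies the browser and operating system, and the server also sees the connecting IP address.

```python
# Illustrative sketch of the metadata attached to a search request.
import requests

prepared = requests.Request(
    "GET",
    "https://www.example-search.com/search",           # hypothetical search engine
    params={"q": "SEO in Manchester"},
    headers={"User-Agent": "Mozilla/5.0 (Windows NT 6.1; Win64; x64) ExampleBrowser/1.0"},
).prepare()

print(prepared.url)  # the full query string is visible to the server
for name, value in prepared.headers.items():
    print(f"{name}: {value}")  # e.g. User-Agent reveals browser and OS
```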

DuckDuckGo doesn’t do any of these things so you can rest assured that your search remains private.

It's this attitude towards private search that seems to have got DuckDuckGo banned from China. China have strict policies on what can and cannot be searched in their country, and it seems DuckDuckGo are not filtering their results in accordance with China's policies.

It was confirmed by DuckDuckGo’s CEO yesterday in a Tweet which you can see below:

You can also check out the website blockedinchina.net to test for website blockages.

It is almost a compliment from China to recognise DuckDuckGo as a major search engine, given that it has also blocked access to Google in the past.

Blog Post by: Greg McVey

Net66: Google Makes Mistake Causing 90% Drop in Traffic for Buffer

It's quite common in SEO to see a small or large drop in traffic following certain algorithm changes, a new website going live or something similar.

What you wouldn’t expect is for Google to apply a Manual Action against your website by mistake, dropping your traffic by 90%!

A 90% loss of traffic is huge, especially for a company as large as Buffer. The fact that a MANUAL action was applied by mistake, apparently down to a bug, suggests this was a serious error on Google's part.

Take a look at the screenshots from Buffer’s Analytics to see what a 90% drop in traffic looks like:

[Images: Screenshots from Buffer's Analytics showing the 90% traffic drop]

The team at Buffer reached out to John Mueller (Matt Cutts' stand-in) over social media and got the issue fixed. Once it was resolved, the team at Buffer had the following to say:

Thankfully, our mystery has a happy—though slightly inconclusive—ending. Mueller’s team at Google found an issue and let us know they had fixed it. The manual action penalty was removed Aug. 28 and we began to see signs of recovery immediately after.

Blog Post by: Greg McVey

Net66: Google Set to use What’s on TV as a Ranking Signal?

Google have secured a new patent this week. Their patent states as follows:

A computer implemented method for using search queries related to television programs. A server receives a user’s search query from an electronic device.

The server then determines, in accordance with the search query and television program related information for television programs available at a location associated with the electronic device during a specific time window, a television program currently being displayed in proximity to the electronic device, wherein the television program related information includes program descriptions for a plurality of television programs being broadcast for the associated location.
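As a very rough illustration of the idea (a toy sketch with invented data, nothing like Google's actual implementation), you could imagine matching the words of a query against the descriptions of programmes being broadcast at the searcher's location during the current time window:

```python
# Toy sketch: pick the programme airing at the searcher's location whose
# description best overlaps the search query. Data is entirely made up.
from dataclasses import dataclass

@dataclass
class Programme:
    title: str
    description: str
    location: str

def match_programme(query, location, schedule):
    """Return the local programme whose description shares the most words with the query."""
    query_terms = set(query.lower().split())
    best, best_overlap = None, 0
    for programme in schedule:
        if programme.location != location:
            continue  # only consider programmes broadcast at this location
        overlap = len(query_terms & set(programme.description.lower().split()))
        if overlap > best_overlap:
            best, best_overlap = programme, overlap
    return best

# Programmes assumed to be airing in the current time window (hypothetical data)
schedule = [
    Programme("Blue Planet", "attenborough ocean wildlife documentary", "Manchester"),
    Programme("Match of the Day", "premier league football highlights", "Manchester"),
]
print(match_programme("attenborough ocean documentary", "Manchester", schedule))
```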

How do you think this will impact search results?

Why Do Some Knowledge Graph Answers Have a Source And Others Don’t?

Google has long been committed to improving the quality of search for its users, and nowhere is that clearer than in the "Quick Answer" boxes that sometimes show at the top of Google search results. Although this is great news for users, who get what they want much more quickly, it is not the best news for publishers.

For example, if you run an SEO blog and write up a very detailed explanation of what SEO is, you can miss out on traffic because Google succinctly sums things up in its Knowledge Graph box at the top of the page. See below:

[Image: Google's Quick Answer box for "what is SEO"]

As you can see, the box shows a generic definition of SEO and a link to Wikipedia for those looking for more information on the subject. This is great if your blog is used as the source, and it will no doubt send a fair amount of traffic your way.

But what about other queries? Such as “How old is Boris Johnson?”. Take a look below:

[Image: Google's Knowledge Graph answer for "How old is Boris Johnson?"]

As you can see, the Google Knowledge Graph box is used again and his age is displayed. What you'll also notice is that no source is given. Why?

Well, according to Google, Boris Johnson's age is "basic factual information", so it doesn't really warrant a source, whereas SEO is deemed "not widely known information", so it does. A source is also shown when "relevant snippets are shown from a website".

There are very few exceptions, but with Apple launching their new iPhone 6 this week, Google credited the Apple website directly after receiving certain information about the phones from their rival.

Have you seen any cases like this before?

Blog Post by: Greg McVey

Net66: The 6 Elements of a Powerful Blog Post [Infographic]

Google is constantly telling webmasters that quality content is one of the best things to help you rank, though they stop short of telling you exactly what kind of quality they're after. Blog posts can be powerful things and a great place to add quality content.

Neil Patel over at Quicksprout has put together the following infographic, which details just what makes a powerful blog post:

[Infographic: The 6 Elements of a Powerful Blog Post]

Blog Post by: Greg McVey

Net66: Google Penguin 3.0 To Be Released Within 2014

[Image: Penguin 3.0]

If you're an SEO then you'll be acutely aware of what Penguin is. There have been several versions of the Penguin algorithm, but the most controversial were 2.0 and 2.1.

The Penguin algorithm is designed to review the links pointing to a website and then take appropriate action based on what it finds. The problem with Penguin was that it didn't allow for how much link building had evolved.

For example, back in 2008 it was often the case that whoever had the most links ranked the best. These days we're much more aware of what's needed to get a website to rank, which includes a multitude of on-site factors as well as the quality of links, NOT the quantity.

Because of how things used to be, there are a lot of websites that used techniques that were perfectly valid in their day but were later deemed spammy by the Penguin algorithm. As such, the fallout from the update was massive, with a lot of webmasters complaining about how unfair it was for Google to judge current websites on their past behaviour.

The algorithm has its merits, of course, but you can understand webmasters' frustrations with it. And now we're set to go through it all again. John Mueller of Google said today in a Google Hangout that there will definitely be a Penguin update before the end of the year, and that it could come in the "reasonable" future.

Ambiguous, eh? What might be reasonable to Google might not be reasonable to you and me. One of the major things to notice about this update is that it is termed Penguin 3.0, and not Penguin 2.2 or something similar.

That is because the update is meant to be a big one. Not only will it refresh all the data it has, but it will also update the actual algorithm, enabling it to run refreshes more frequently in the future.

Why wait, though? Are they getting close to releasing it? My theory is that they're pretty much done and are just waiting for Matt Cutts to get back so the big dog can announce the new release.

So, if you’ve been engaging in some slightly less than white hat link building practices, it’s time to grab a cloth and clean up.

Blog Post by: Greg McVey

Net66: Understanding Google Algorithms Updates and Refreshes

Google’s algorithms are often the subject of much discussion, chagrin, angst and other more colourful words. This is because Google themselves rarely announce when they release or are about to release an algorithm update.

Sure, once the fallout from an algorithm reaches fever pitch they can sometimes say "Oh yeah, we did redefine everything in SEO over the weekend, thanks for noticing".

They also give vague and nondescript warnings of “We’ll be releasing a new algorithm called [Insert_Animal_Name_Here] at some point in the future”.

There have been several rumours recently about Google launching a new refresh of Penguin or a large update of Panda. First off, there were suggestions of Google experimenting with a refresh of Penguin, which got a lot of people very excited, as many believe their sites are still "trapped" by Penguin and can't get out until the algorithm updates.

Last week there was also widespread speculation that Google had updated their Panda algorithm, but with Google having already stated that they've stopped confirming Panda updates, we're likely never to know.

Now, these algorithms are COMPLEX. Seriously complex, but thankfully Google's John Mueller has taken the time to write a helpful post in a Google Webmaster forum:

In theory: If a site is affected by any specific algorithm or its data, and it fixes the issue that led to that situation, then the algorithm and/or its data must be refreshed in order to see those changes. Sometimes those changes aren’t immediately visible even after a refresh, that’s normal too.

In practice, a site is never in a void alone with just a single algorithm. We use over 200 factors in crawling, indexing, and ranking. While there are some cases where a site is strongly affected by a single algorithm, that doesn’t mean that it won’t see any changes until that algorithm or its data is refreshed. For example, if a site is strongly affected by a web-spam algorithm, and you resolve all of those web-spam issues and work to make your site fantastic, you’re likely to see changes in search even before that algorithm or its data is refreshed. Some of those effects might be directly related to the changes you made (other algorithms finding that your site is really much better), some of them might be more indirect (users loving your updated site and recommending it to others).

So yes, in a theoretical void of just your site and a single algorithm (and of course such a void doesn’t really exist!), you’d need to wait for the algorithm and/or its data to refresh to see any changes based on the new situation. In practice, however, things are much more involved, and improvements that you make (especially significant ones) are likely to have visible effects even outside of that single algorithm. One part that helps to keep in mind here is that you shouldn’t be focusing on individual factors of individual algorithms, it makes much more sense to focus on your site overall — cleaning up individual issues, but not assuming that these are the only aspects worth working on.

All that said, we do realize that it would be great if we could speed the refresh-cycle of some of these algorithms up a bit, and I know the team is working on that. I know it can be frustrating to not see changes after spending a lot of time to improve things. In the meantime, I’d really recommend – as above – not focusing on any specific aspect of an algorithm, and instead making sure that your site is (or becomes) the absolute best of its kind by far.

Blog Post by: Greg McVey

Net66: Google Doodle Celebrates Sheridan Le Fanu

Google are fondly known for the creative doodles that cover the front page of their search engine from time to time.

Today is another day when we find ourselves confronted with a departure from the usual multicoloured lettering. Today we see a dark and sinister portrait of one character asleep while another floats above with an outstretched arm, ready to strike.

This is, of course, a recreation of the famous artwork based on the story of Carmilla by Sheridan Le Fanu. Le Fanu was a famous Irish writer, most prolific in the 1800s, and his work endures today.

Indeed, Carmilla has been the basis of several modern films. His other most famous works include Uncle Silas and The House by the Churchyard.

Check out the Doodle below:

[Image: Sheridan Le Fanu Google Doodle]

Blog Post by: Greg McVey

Google Creates New Easter Egg For Server & SEO Geeks

Today a new Google Easter Egg has been discovered. This little treat is aimed at SEO geeks and server geeks alike. It's based on an April Fools' joke from way back in 1998: RFC 2324, the Hyper Text Coffee Pot Control Protocol.

That joke defined the 418 "I'm a teapot" status code, which isn't supported or meant to be used by real websites at all. However, if you visit this page you can see the code in action.

What's more, if you're on your phone and you tilt it, the teapot will actually start pouring. And if you're on a desktop, clicking the teapot will make it pour.

Check it out below:

[Images: Google's 418 teapot Easter Egg]
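If you fancy recreating the joke yourself, here's a minimal sketch (using only Python's standard library, nothing Google-specific) of a local server that answers every request with the 418 status:

```python
# Minimal sketch: a local server that answers everything with 418 "I'm a teapot".
from http.server import BaseHTTPRequestHandler, HTTPServer

class TeapotHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(418, "I'm a teapot")  # the RFC 2324 joke status
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(b"Short and stout. This server refuses to brew coffee.\n")

if __name__ == "__main__":
    # Visit http://localhost:8080/ to see the 418 response
    HTTPServer(("localhost", 8080), TeapotHandler).serve_forever()
```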

Blog Post by: Greg McVey

Net66: The Top 10 Changes To Google Search [Infographic]

Google have made some pretty groundbreaking changes to their algorithms over the years, and have even introduced whole new algorithms that have shaken up the rankings.

In the infographic below we highlight the top 10 changes to the Google Search tool.

[Infographic: Net66 - 10 Google Search Milestones]

Post this on your Blog

Simply copy the code from the box below and paste it into your site to share this infographic on your blog.

Blog Post by: Greg McVey

Twitter Will Start Showing you Tweets from People you Don’t Follow

[Image: Twitter icon]

Twitter have launched a somewhat controversial feature on their timeline today. The original idea of the Twitter timeline was to show users only tweets from the people they follow, along with retweets. Now, however, this looks set to change.

Users will now start seeing Tweets from people they don't follow. What's the purpose of this? Surely if you're not following someone, you don't want to see their tweets. But Twitter's thinking is that if certain tweets become popular or even go viral, then you're likely to want to see them.

This isn't based solely on virality, though. Twitter will also look at what percentage of the people you follow have interacted with a Tweet to determine whether it is relevant enough to show you. If a high percentage of the people you follow are interacting with the Tweet, it's more likely to crop up in your timeline.
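As a purely illustrative sketch (not Twitter's actual algorithm; the account names and figures are invented), the signal described above boils down to something like this:

```python
# Toy sketch: how strongly a Tweet resonates with the accounts you follow.
def network_interaction_ratio(accounts_you_follow, accounts_that_engaged):
    """Fraction of the accounts you follow that liked, retweeted or replied to the Tweet."""
    if not accounts_you_follow:
        return 0.0
    engaged_in_network = accounts_you_follow & accounts_that_engaged
    return len(engaged_in_network) / len(accounts_you_follow)

following = {"@alice", "@bob", "@carol", "@dave"}
engaged_with_tweet = {"@alice", "@carol", "@someone_you_dont_follow"}

ratio = network_interaction_ratio(following, engaged_with_tweet)
print(f"{ratio:.0%} of your network engaged with this Tweet")
# A high ratio would make the Tweet a candidate for injection into your timeline.
```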

Twitter have explained this on their Frequently asked questions page by adding the following text:

Additionally, when we identify a Tweet, an account to follow, or other content that’s popular or relevant, we may add it to your timeline. This means you will sometimes see Tweets from accounts you don’t follow. We select each Tweet using a variety of signals, including how popular it is and how people in your network are interacting with it. Our goal is to make your home timeline even more relevant and interesting.

It has rubbed some users up the wrong way, and you can see why: people don't want content displayed to them from sources they haven't subscribed to. But at the same time, Twitter could be enhancing the user experience by introducing new and fresh content to users out of the blue.

What do you think of this new move by Twitter?

Blog Post by: Greg McVey

Net66: Google Penalises Two European Link Networks

Google has a long history of penalising link networks. Here are some examples. And this one.

This latest action comes whilst Matt Cutts, spam-fighting super techie and head of Google's webspam team, is on annual leave, which goes to show the strength in depth that Google have in their webspam team.

The Networks

Johannes Mehlem was the man in charge of breaking the news to the unfortunate souls who have found their websites penalised by this action. See his Tweet below:

Note that they specifically say a German network has been penalised, as well as saying a European network has been penalised too. This could mean one of two things: a) the link network is so large that it spans a number of European countries and is therefore classed as European, or b) Google want to hide the location of the link network for some reason. If it's b), they haven't done the best of jobs, as Karolina Kruszyńska has posted another tweet in Polish announcing the same thing, giving the distinct impression that the network in question is in fact Polish. Check it out below:

So, good news for all SEOs, with more spam being cut out of the SERPs. And if your website is one of those hit by this action, you should probably rethink the way you're currently working.

Blog Post by: Greg McVey

Net66 Infographic: Why SEO is Still the Don!

Search engine optimisation, or organic inbound marketing as it is often known, is proven to be a cost-effective yet chronically drawn-out method of digital marketing. It is a slow-burn process, and it can take several months before optimisation really begins to pay dividends. Improving a website's structure, content and metrics, and improving its offsite perception, requires a well-planned campaign and a business's commitment to it.

Because it can be a time consuming process to build up the organic capabilities of a website, people can often become frustrated with SEO and either give up on it or start investing in other, less effective, methods of marketing their websites and businesses.

Although social media has exploded in recent years and is a very important part of SEO, it's not as effective as SEO at generating inbound leads. The same goes for outbound leads (cold calling), which can often cost around 80% more than an inbound lead generated by SEO. Check out the full infographic below:

[Infographic: Net66 - Why SEO is Still the Don]

Copy the code below to embed the infographic on your website:

Blog Post by: Greg McVey