Category archives: Digital Marketing News

With extra visibility comes extra responsibility

Update 05/02: Google has rolled out a new update and shared some advice for reputation management companies: “don’t fake reviews as they will be taken down”. Google has posted some good and bad practices for customer reviews. In summary, Google has said that reviews should not be submitted or digitized on behalf of a user, reviews should not be written about a company the customer works for, and reviews should not include links back to the company website. The SERPs reputation management specialist yesterday announced the acquisition of Liverpool-based Reputation 24/7 and stridently launched into the UK.

But that vociferous launch means the company is now under the spotlight, with even more responsibility to maintain a positive relationship with the search engines, which have long been the natural enemy of this type of service provider.

What is SERPs reputation management?

It’s the process of engineering a positive view of the search engine results when a user searches for a brand. Simply, a brand wants to maintain as positive an image as possible on the search results page – they don’t want loads of negative comments appearing when potential customers are searching because it could stop a customer from purchasing.

Google are in the business of delivering the best results, so if the best results are negative comments, shouldn’t they appear at the top? That is the challenge for companies like this one: if they are seen to be forcefully manipulating the search engines, they will be in trouble with those search engines. The company has a growing responsibility to be extra transparent about its practices now that it is receiving much more attention and press coverage. If it isn’t, it might go the way of Build My Rank and a host of other services that provide a purely technical fix to an SEO problem and, in doing so, get hit by Google’s algorithm updates.

There is nothing inherently black hat about what they are offering (I provide a similar service for some of my clients), but the devil is in the detail. If they are engaging in bad practices and quick fixes, rest assured that Google will penalise them and all that work will be for nothing. If Google penalises them, their clients will be affected too and could be penalised themselves, re-surfacing all the negative content they were trying to stifle and adding a whole new angle to the brand’s reputation problems.

If done carefully and transparently, this could be a very valuable service for the brands and celebrities out there who feel hamstrung by negative content that isn’t representative of their wider customers’ experience. Let’s not forget that some of that negative content is ranking highly because of black hat tactics as well.

Transparency is the key, and internal advocacy is the best way to deliver that transparency to the outside world.

What to do when you get a “mysqld dead but subsys locked” on EC2

I recently had this problem on this blog (which is hosted on Amazon EC2), so I thought I would collate the research I found into one place to help anyone else who hits this problem.

I’m running MySQL on the same instance as the website at the moment. I know this isn’t the best solution, but it is the simplest one for me. What it means, though, is that the memory available to MySQL is greatly reduced and the service can fall over.

The first I knew about this was when WordPress showed its “Could not connect to database” message. Big problem, so I’ve added free MySQL monitoring to keep an eye on this going forward.

The first time around I was able to simply restart MySQL from SSH:

  1. Log in
  2. Run the command: sudo service mysqld restart
  3. Log out and go

This most recent time it wasn’t as simple as restarting the service because there wasn’t any memory for it. When I used the command:

sudo service mysqld status

I got a message saying, “mysqld dead but subsys locked”. Then I found a really useful post that explained MySQL’s memory issues on Amazon EC2 Micro instances.

You need to create a swapfile and enable it to give MySQL the memory boost necessary:

dd if=/dev/zero of=/swapfile bs=1M count=1024
mkswap /swapfile
swapon /swapfile
Finally, open /etc/fstab and add the following line so the swap file is re-enabled after a reboot: /swapfile swap swap defaults 0 0

Tutorial: Setting up an SEO link checker bookmarklet

A bookmarklet is a neat little piece of code that you can drop into the bookmark toolbar in your browser to give you one-click access to all kinds of interesting information. I want to show you a quick tutorial for building your own bookmarklet to check the number of backlinks coming into any website that you are currently viewing.

First, pick your data source

My absolute favourite thing about being a hacker/coder is that I really don’t have to do any of the hard graft – someone, somewhere has already done the difficult stuff. They’ve created the data source and, trust me, there is pretty much an API for anything you want on the web. So with these bookmarklets we aren’t really creating new data, we are simply shortcutting our way to the information that we want.


Paul Madden is a great person to talk to about the process of taking something manual and automating it. It all starts with a simple objective that ANY human can complete, knit simple objectives together and you have a process. ‘Shake the process’ i.e. make sure that the process works with real people and then when you are confident you can start automating.

Bookmarklets require the same structured way of working – you need to be able to manually access the data and it needs to be quite simple otherwise your bookmarklet will fall down. The other point to mention with bookmarklets is that they need to be one click, one action. If you try to throw too many steps into the bookmarklet it will fall down and you won’t use it.

Our objective for this tutorial is to get a list of the top websites linking to any website that we are currently on. So we need three things:

  1. The data for the top websites
  2. The address of the current website
  3. The criteria for measuring the top websites

Doing it manually

SEOmoz has a website called Opensiteexplorer which gives you (for free) information about the links coming into your website. It also gives you information about any website that you add to the search box. Pick a website – for this example I’ll use Seer Interactive. Search for that in Opensiteexplorer and you will see this:

Open Site Explorer - Seer Interactive


Ace, a list of the top inbound links. But wait: these are individual links, and we would like just root domains. Opensiteexplorer gives you an option to show ‘linking domains’. If you click on the middle tab you will see:

Open Site Explorer - Root Domains

Now, like I said, you need to be able to do that manually and in a single step because your bookmarklet is not going to be able to click on a tab.

Testing the URL

Grab the URL that is in the address bar when you see the above linking-domain data, open up a different browser and paste the address in. If you see the exact same information, you know that you can get to the data in a single step.

Obviously for this test we knew that we could get that data from OSE, but the principle is the same: navigate yourself to the data, grab the URL and try it somewhere else to make sure that you can access it.

This isn’t always going to be the case: sometimes the URL you paste into another browser gives you an empty screen, sometimes you get the search box rather than the results. This is the difference between GET data and POST data. GET data is the stuff you want, because there will always be a specific URL that gets you back to the same information wherever and whenever you want.

Making up the URL

If you take the above example you will see that the URL is this:

The first part of the URL is the domain; the last part is the query that follows the ?; and the middle part, /domains, is the operator that specifies that you want to see top domains. Click one of the other tabs and that part will change:

  • links will show all links
  • anchors will show top link text
  • pages will show top pages

Quick start to using Javascript

Javascript is super easy to use, and because it is client side (it runs in the browser, not on a server) you can start playing around with it straight away. Try this: open up Chrome, navigate to any website, right click and choose ‘Inspect element’. A box will open at the bottom of the screen with a tab called ‘Console’. This gives you direct, live access to Javascript functions. Type the following into the console window and hit enter:

alert("Hello World")

You will see an alert window pop up saying “Hello World”, and then you can click OK to close it.

Try this one:
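The code block that originally appeared here hasn’t survived; below is a minimal reconstruction that matches the description that follows, assuming the post used location.hostname (the stubs only exist so the same line also runs outside a browser):

```javascript
// In the browser console this is just one line: alert(location.hostname)
// The stubs below only exist so the snippet also runs outside a browser.
if (typeof location === 'undefined') {
  globalThis.location = { hostname: 'www.google.com' }; // pretend we're on Google
}
if (typeof alert === 'undefined') {
  globalThis.alert = (msg) => console.log(msg); // fall back to console output
}

alert(location.hostname);
```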


Now things start to get interesting – you should see the root domain of the website you are currently on; so if you’re on Google, an alert box will pop up showing Google’s domain. Now try this again, but do it on this blog post and change the code slightly:
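The changed snippet is missing from the post; one plausible variant, purely an assumption, swaps location.hostname for location.href so the alert shows the full address of the page (the stubbed URL is a placeholder, not the real blog address):

```javascript
// Same idea, but alerting the full URL of the page rather than just the
// hostname. The stubbed URL below is a placeholder for non-browser runs.
if (typeof location === 'undefined') {
  globalThis.location = { href: 'https://example.com/is-it-a-blog/' };
}
if (typeof alert === 'undefined') {
  globalThis.alert = (msg) => console.log(msg);
}

alert(location.href);
```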


You will see this:

Screen of Is it a blog? alert box

Turning the URL into something automated

We now have a way of grabbing the address of the current page, so with a bit of automation and code we can make the bookmarklet grab that address and place it on the end of the URL that we know gives us the data we want from Opensiteexplorer.

The code to do that is:
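The original block is missing here; this is a hedged reconstruction, assuming the Open Site Explorer URL format was http://www.opensiteexplorer.org/domains?site= followed by the page address (the base URL and parameter name are my assumptions, not something preserved in the post):

```javascript
// Concatenate the OSE "linking domains" URL with the current page address
// and alert the result so we can eyeball it before automating the jump.
if (typeof location === 'undefined') {
  globalThis.location = { href: 'http://www.seerinteractive.com/' }; // stub for non-browser runs
}
if (typeof alert === 'undefined') {
  globalThis.alert = (msg) => console.log(msg);
}

alert('http://www.opensiteexplorer.org/domains?site=' + location.href);
```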


Try that in the console on this page and you will see this:

Alert box screen

That’s great – that’s exactly the same as the URL you would paste into a browser window to get to the data!

Making the bookmarklet

Now that you’ve done the work above, it’s super easy to create the bookmarklet. What you need to do is take the URL that is being displayed in the alert box, but instead of just displaying it you want the browser to action it, i.e. actually take you to the URL.

This is actually really simple code as well in Javascript:
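A hedged reconstruction of the missing line (the opensiteexplorer.org base URL and ?site= parameter are assumptions rather than something preserved in the post):

```javascript
// Instead of alerting the combined URL, assign it to location.href so the
// browser actually navigates there. The stub makes the line runnable anywhere.
if (typeof location === 'undefined') {
  globalThis.location = { href: 'http://www.seerinteractive.com/' };
}

location.href = 'http://www.opensiteexplorer.org/domains?site=' + location.href;
```

In the bookmark’s URL field this whole line sits after the javascript: prefix, e.g. javascript:location.href='http://www.opensiteexplorer.org/domains?site='+location.href;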


Simply, you are saying that “location.href” (the browser’s URL) needs to equal the Opensiteexplorer address with the “domains operator” (the top websites tab) “+” the current “location.href” (the browser’s URL).

Once you have this working (try it in the console like all your other Javascript), you can simply right click on the bookmarks toolbar in the browser and choose “add bookmark”. Give the bookmarklet a name that will remind you what it does, and type “javascript:” and then, without a space, the above code.

Final step

Done! Now whenever you click on the bookmarklet you will be redirected to Opensiteexplorer and it will show you a list of the top websites linking to the URL you were browsing.


Application Craft

Mobile coding disruption is here: Application Craft!

Application Craft is a 100% in-browser coding and deployment environment for all kinds of native mobile applications.

- Watch the video

Creating mobile apps in a single, easy-to-use environment has long been a wish of mine. I am of the opinion that native application coding has a limited shelf life. I could never get my head around the idea of learning an entirely new language and toolchain, like Objective-C and Apple’s Xcode, becoming advanced enough to build an application and then deploying it on iPhone. Then having to start all over again for Android, and again for BlackBerry, and then the obligatory web version as well! Not to mention that you then have to maintain all the various implementations of your application.

So the obvious answer for me was to push for an all-in-one solution. That could either be at the output level, i.e. what finally ends up in front of the user; or at the input level, i.e. the stuff that the coder creates, which can then be rendered in the relevant way for each device.

Below is a video from founder Freddie May discussing the features of Application Craft and how this platform might just be the mobile coding disruption I’ve been waiting for:

The video is a little long, but stick with it if you are interested in mobile application coding because I think this could well be to mobile what WordPress was to CMS deployment.

What I learnt from Ross Hudgens about the importance of an active community.

Ross Hudgens

Google Penguin and Google Panda, the latest updates from Google, are changing how it measures who gets ranked highly and who gets pushed down to invisibility. These updates look at two areas of content marketing: the quality of the content you create and the quality of the links that you get.

In theory at least that sounds like a great thing. The problem is that these algorithmic updates are also knocking down perfectly good quality websites at the same time as the poor ones.

I was recently reading a story from @rosshudgens about WPMU and how they managed to beat the Penguin. WPMU is a website providing tutorials and resources for WordPress websites. Their most popular export is WordPress themes, and of course those themes have a credit at the bottom saying, “powered by WordPress MU”. They lost around 65% of their search traffic overnight, and they had no idea what to do.

WPMU drops 65% of search traffic overnight

This is not an isolated situation – stuff is changing on the internet all the time and it can be very difficult to keep up with everything and get your voice heard by the big boys like Google.

For WPMU, the SEO community and its collective voice amplified the problem that WPMU found themselves in. Tweets from SEO notables like Rand Fishkin and coverage on WebProNews show this amplification.

The owner of WPMU reached out through blogs to people in the community, and gained the help of the great SEO Ross Hudgens (who I believe offered his help for free). The owner also reached out to the Australian newspaper the Sydney Morning Herald, and in doing so amplified his story enough that Matt Cutts responded.

Without all this help (the advice, the personal consultancy and the collective voice of the community), WPMU would probably still be languishing low in the rankings somewhere. It was because WPMU were active in the community themselves that they gained the collective strength of that community.

This wouldn’t have happened had they not built up real-world relationships and invested time in fostering the community beforehand.

The message of this story: get involved and pay it forward, help now because in the future you might need the help yourself.

Samsung in major blogger relations fail

It seems like a daily occurrence – some company somewhere has messed up and is getting panned on social media.

Most of the time these issues are significant for the brand, like this customer service issue or the community upset by poor use of language, but rarely do these issues reach further into real life.

Right now Samsung are involved in a social media crisis of different proportions, where independent bloggers were picked up and (literally) dropped when they didn’t do the company’s bidding.

Here’s the story: Samsung brought a couple of bloggers over from India to the IFA conference in Berlin. Those bloggers, believing that they were being flown over as a way to preview and review Samsung’s latest mobile offering, started to question the intentions of the company when they were measured for uniforms and told to occupy the phone maker’s stand to show off the new Samsung products!

Repeatedly, Jeff (one of the bloggers in question) told Samsung staff that he was not there to show off the phone; he was there to review it. The story on The Next Web states that Samsung India’s PR team was soon on the line telling the bloggers that they should do what they were told, otherwise their flights would be cancelled and they would have to find their own way back.

If all this is true, these types of threats are not acceptable, and it leaves me thinking: if they can do this to independent bloggers, I tremble to think what would happen to employees caught with the dreaded iPhone!

And what of the bloggers (Samsung did cancel their flights)? Well, Nokia offered to take care of their flights and hotels, ensuring that they could stay in Berlin and cover the remainder of the conference.

This post is cross posted on the Freestyle Interactive blog.


Getting started with the SEOmoz API PHP library

I’ve just started working with the SEOmoz API PHP library which provides a group of classes for accessing the great SEOmoz API.

The library has a range of different languages available for use: Java, Python, etc, but I’m a PHP guy so I thought I would walk you through getting started with the PHP version of the library.

BTW: if you know what you’re doing and you just want to remove the “string(155)URL” section of the output, it’s here.

What is the SEOmoz API?

SEOmoz is a search engine optimization crawler and analysis tool for improving your rankings in the search engines. Their web tool provides an interface for getting at their crawl data, but for more heavyweight tasks, and for integration into third-party applications, you will need to use the API. The library below gives you quick and easy access to that API.

Step 1 – Download the source

You can download the complete source from here. This download provides all the language versions, so the best thing to do is pull out just the folder for the language you are working with. In the case of PHP the folder structure is SEOMozSamples - PHP - complete. Inside the ‘complete’ folder you will see all the files and folders you need to work with the library.

Step 2 – The included files

It is important to take a minute to understand all the files within this library. The ‘constants’ and ‘util’ folders are both utility directories and you won’t really have any need to edit them. The ‘authentication’ directory does the heavy lifting when it comes to getting at the SEOmoz API in the first place and the ‘service’ dir contains the files for calling SEOmoz API services.

The way the library is set up, everything is routed through ‘bootstrap.php’, but unlike other libraries you don’t actually edit any master conf files. All the stuff you need to edit lives in the ‘example’ dir, in this case the sample.php file.

Step 3 – Getting the credentials

Log in to the SEOmoz API section with your current SEOmoz credentials. There you will see a section for your access key:

SEOmoz API creds

The screen for the SEOmoz API credentials

Copy the access ID and your secret key, then paste them into the ‘sample.php’ file within the example directory along with the URL that you want to get Moz data back on:

Editing the Sample.php file

The sample.php file that you need to edit with your SEOmoz API credentials

As you can see, you are editing the ‘sample.php’ file and not a master conf file or anything in the authentication folder. To make a larger project easier to work with you would probably want to put these details in another file and reference them from your script, but for this guide they are fine where they are.

Step 4 – Running a query

Once you have completed the above steps you are ready to run your first query. The simplest way to run a query is to move through the ‘sample.php’ and uncomment one of the service requests.

Editing the Sample.php file

This part of the sample.php file shows you the functions in play to call a query.

The ‘sample.php’ file shows all of the services that are available, but whether you can access them depends on if you are using the free version of the API or the paid version.

The best function to use when you are getting started is ‘URLMetricsService’ (see above screenshot). Simply, what is happening above is:

  1. A new ‘authenticator’ class is being called meaning that your API credentials from the top of the script can be used.
  2. A new ‘service’ is being called, in this case the ‘URLMetricsService’ but if you look in the service directory within the download you can see all of the services available.
  3. The new service is going to use the new authenticator in order to gain access to the API and the service is going to be stored in the ‘$response’ variable.
  4. We print out the ‘$response’ variable to show the output from the API.

Once you have edited your ‘sample.php’ file to include the above, go to the file in your web browser (if you’re working locally you’ll need WAMP or a similar server).

You will see an output like this:

The output from the Sample.php file

This is the output from the sample.php file using the above query

Step 5 – Removing the string element from the SEOmoz API

I think it is a little clumsy of an otherwise great library that the URL element appears in the output. As you can see, it is in a different font from the actual API output. That is because the output from the API uses ‘<pre>’ formatting, as stipulated in the ‘print_r’ code above, but the URL element isn’t coming from there.

What makes the code clumsy is that the URL string appears on every query, so if you want to use this library in any sort of productive way you need to get rid of this URL element.

This is actually quite a simple fix (albeit one that took me some considerable research time because I couldn’t find an answer on Google). Head over to the ‘service’ folder and open the ‘URLMetricsService.php’ file. In there you will see the offending code:

URLMetricsService before

The var_dump function is showing the string element.

In order to get rid of this just comment it out:

URLMetricsService after

Comment out the var_dump function and all is fixed.

Then when you run the script you will no longer see the string element:

Sample.php output

The sample.php output without the string element.

Step 6 – Just showing the fields you want

The syntax for this is quite simple. Say you want to show page authority and the number of links. Head over to the API documentation (a plea: can you SEOmoz guys integrate this into the site? At the minute it is so difficult to navigate) and look through the response code table. The response fields you need are upa (page authority) and uid (number of links). Now it’s simply a case of calling them in your code:

print 'Page Authority: ' . $response->upa . '<br />';
print 'Links: ' . $response->uid . '<br />';

If you were using the above example you would then see:

Page Authority: 100
Links: 22

Here is the complete code for the above example:

Sample.php with the complete example code

This is the sample.php code for the complete example.

There you have it, a very simple ‘getting started’ guide to using the SEOmoz API PHP library. Give me a shout on Twitter if you have any questions!


The 4 most important server codes for SEO

It sounds like a load of technical gibberish, but server response codes are very important for SEO.

What is a server response code?

At its most basic level, a server response code is the first thing that your computer sees when you go to a website: the piece of information that your browser uses to carry on into the website. If everything is working properly, the server replies to your request with a 200 code and you see the webpage.

If it responds with a 500, for example, there has been an internal server error and you can’t get to anything on the server.

Are there really 500 different server response codes?

No: 3xx, 4xx and 5xx are just families of codes. 3xx codes all relate to redirects of one kind or another, 4xx codes all relate to pages that have been requested but are not available, and 5xx codes indicate errors on the server itself.
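Those families can be sketched as a tiny lookup (illustrative JavaScript only; the groupings are condensed from the descriptions in this post, not an exhaustive treatment of HTTP):

```javascript
// Map a status code to the family described above. Not a complete list of
// codes - just the SEO-relevant groupings from this post.
function statusFamily(code) {
  if (code >= 200 && code < 300) return 'success: page served normally';
  if (code >= 300 && code < 400) return 'redirect of one kind or another';
  if (code >= 400 && code < 500) return 'requested but not available';
  if (code >= 500 && code < 600) return 'internal server error';
  return 'other';
}

console.log(statusFamily(301)); // redirect of one kind or another
console.log(statusFamily(410)); // requested but not available
```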

301 or 302

301 is the server response code that tells a client (your web browser or a search engine) that the page it is looking for has been permanently moved to a new location. For example, a 301 from one page to another means the first page has been forever moved to the second.

302 works in the same way but only temporarily. You can see now why it is important to get the right code. If you tell the search engine you’ve permanently moved a page, it will transfer all the links, equity and search engine rankings. Whereas if your redirect is temporary (302), it won’t move any of that stuff and you won’t maintain your rankings – the old page will in essence wither and die, making room for your competitor to take over.

404 and 410

This is a less common differentiation than the 301/302 issue above. In the past, all pages that couldn’t be found on a server returned a 404 error. Website owners set up 404 template pages as a catch-all and it worked fine. Now Google has released information saying that, in light of their Panda update, website owners shouldn’t simply remove pages and let them 404; they should return a 410 response when removing pages.

A 410 response is (as with a 301) a permanent message to the search engines that a page has been removed and is not coming back. This means that Google can ‘clean’ it from their index, and you can show that you are maintaining an active website that should not be penalised by poor-quality content alerts from Google Panda.

If you use a 404 then you aren’t signalling these things, and whilst it may not make or break your website, it could provide the casting vote later down the line if Google sees other things on your website it doesn’t like.

Newgle logo

Writing Newgle in 24 hours

I’ve just completed my 24-hour coding challenge – a mashup of Google custom search results with some domain filtering and prioritising. Here is how the 24 hours went:

Hour 1

What the fuck is the idea? I want to do something with the search results, but what could that be? What about setting up a custom search engine that just shows people instead of results? After a couple of rubbish ideas like that, I came up with Newgle!

Hour 2

Getting to grips with Google Custom Search is actually very easy. I had to decide pretty early on whether to use the API or the web interface. The API offers considerably more functionality but also requires a significant investment of time to get it up and running properly. The web interface provided by Google is great as a quick-start wizard – in all honesty there is a surprising amount of functionality you can manipulate, so unless you have a very specific need I’m sure you will get what you need out of the web interface.

Hour 3 to 8

Google Custom Search allows you to specify particular websites that you want to include in the search results, and it also allows for the exclusion of websites – so I spent a lot of time filtering the custom search engines to get the right balance of websites for each of the summary, detail and person sections.

Hour 9 to 12

Visually, what should it look like? This wasn’t a difficult question because I was basing a lot of it on the current Google UI. Interestingly, there was no homepage for Newgle until about the 15th hour; everything ran from the search page, and I didn’t see the need for a homepage until I decided to give it its own website.

Hour 13 to 20

These were the witching hours of the project: the time when it gets to the middle of the night, you’ve been staring at code since the previous dusk, and you question WHAT THE HELL AM I DOING THIS FOR? IS IT ANY GOOD? Sometimes the answer turns out to be ‘no’, and then I go to bed. But with Newgle I wasn’t trying to reinvent the wheel; I was simply adding a ‘better’ UI to what is already a pretty perfect system.

This realisation drove me forward, and these hours were spent adding the best user experience possible to the program. The UX touches included a jQuery persistent header, a tutorial tooltip and on-click highlighting!

Hour 21 to 23

Testing… testing… and more testing… I hate testing!!

Hour 24

Shit, how am I going to know if anyone is using it? The last hour was the tightest because I needed to add Analytics code and some event tracking to the Javascript. I wanted to track visitors, obviously, but I was also looking to track engagement with the system. So I set up some event tracking that I will be able to use in the future to measure which result types (summary, detail or person) are used most.
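The tracking calls would have looked something like this (classic async Google Analytics syntax of the time; the category and action names here are my own invention, not taken from Newgle’s source):

```javascript
// Queue a Google Analytics event for each result click so engagement with
// the summary / detail / person result types can be compared later.
var _gaq = _gaq || []; // GA's asynchronous command queue

function trackResultClick(resultType) {
  // resultType is one of 'summary', 'detail' or 'person'
  _gaq.push(['_trackEvent', 'Results', 'click', resultType]);
}

trackResultClick('summary');
```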

Here is the finished piece:

Screenshot of Newgle




Europe’s data reform may stop a UK Facebook/Twitter

It will be difficult for the UK to be home to the next Twitter or Facebook, because of the “chilling” effects of Europe’s proposed changes to data protection laws, claim industry representatives.

Industry trade bodies, including the Internet Advertising Bureau (IAB); Interactive Media in Retail Group (IMRG); Coalition for a Digital Economy (Coadec); the Federation of Small Businesses (FSB), and the Direct Marketing Association (DMA), have signed an open letter to ministers, warning them that the EC’s proposals to clamp down on data violations would hamper growth of the digital industry in the UK.

Full Story from Brand Republic News.