What We Learned from 1,000+ Tech SEOs About Tech Priorities

Published on December 16, 2019 by Chris Green

The Tech SEO Audit has been a bit of a milestone piece of work in our industry for a while – it’s often the first piece of paid work done on a website (if you’re an agency or freelance) or it at least introduces a new phase in a website’s lifecycle.

The importance of an audit and the document which follows is something many in the industry feel strongly about – myself included – but unsurprisingly, there’s no standardised way of forming or delivering these.

I’ve read a lot of audit documents in my time – produced by members of teams I’ve worked in or managed, and by external agencies/consultants – and if I’m ever going to disagree with anything I find in those documents, it is the priorities attached to specific fixes/actions.

The issue (as I see it) is always around how effective those fixes/actions would actually be – i.e. what benefit would you see from implementing them?

Put from a client/project stakeholder perspective: “what do you want me to spend money on, and will it work?”.

The Poll

The question is simple really: how does the industry select the high-priority recommendations after an audit? Getting an effective sample of feeling is always difficult, but given how vibrant the SEO community is on Twitter, it felt like a great place to start the conversation.

The poll consisted of three parts (see the thread embedded below in full), but the main question was as follows.

“How do you prioritise SEO recommendations after an Audit?

I hear a lot of different takes on this one, (and feel very strongly about this) but I’d love to hear how you manage this in your audits.

A “high” priority …”
– is most technically wrong
– drives results fast
– is a combo of the two
– Other

If I could re-run this, I’d do it on a different platform which would let me provide a little more detail & rework the question itself (more below), but it seemed pretty straightforward.

The Poll Results

The results were pretty conclusive: a strong 42% of the 1,061 votes said they would mark recommendations as a “high” priority using a combination of the most technically wrong AND those which drive results fast.

Maybe it’s not surprising that the majority of people wanted to ensure they fix problems AND get results. That makes sense. In fact, if you merge the “combination” response and “drives results fast”, that’s 79% of respondents. So nearly 80% of tech SEOs want to ensure the fixes they prioritise as “high” actually drive results.

That’s good right?

Cue “it depends”

“It depends” is a cliché within the SEO industry now, but trying to get a straight answer out of a Tech SEO has always been tough – largely because caution is something a lot of SEOs operate with, but partially because we don’t actually know the answer.

Context around an issue/recommendation is so important that “it depends” as a phrase could/should be read as “give me more detail” as often as it means “I don’t know”.

This is where I found the conversations around the poll more valuable than the result itself. We got the opportunity to see how many Tech SEOs prioritise their own recommendations in audits, and to pick up some learnings we could all benefit from.

Audit Priority Systems

One of the things which became abundantly clear as part of the conversation was that many SEOs in the space have their own systems for ranking recommendations based on impact and other features. I have worked with similar systems myself, but seeing how others do it was fascinating & really helpful.

PIE Score – Stephen Kenwright

Priority, Impact & Effort (PIE) is a system Stephen has spoken about before – and whilst there were a number of others who suggested similar systems, Stephen introduced a numerical scoring element to truly help understand the degree of importance.

For each of the three categories you assign a value from 0–10 (10 being most significant), then sum the scores and divide by three to get the average – the PIE score.

Run across each of the recommendations, this gives you the ability to sort by PIE score and then pass the list across to the team that will be implementing them.
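To make the arithmetic concrete, here’s a minimal sketch of how that scoring and sorting could look in practice – the recommendation names and individual scores are purely illustrative, not Stephen’s own numbers:

```python
# A minimal sketch of PIE scoring (Priority, Impact & Effort).
# Per the description above: each category is scored 0-10, with 10 being
# most significant, and the PIE score is the average of the three.

def pie_score(priority: int, impact: int, effort: int) -> float:
    """Average the three 0-10 category scores into a single PIE score."""
    for value in (priority, impact, effort):
        if not 0 <= value <= 10:
            raise ValueError("each category is scored from 0 to 10")
    return (priority + impact + effort) / 3

# Hypothetical audit recommendations: (name, priority, impact, effort).
recommendations = [
    ("Fix hreflang before the new territory launch", 10, 8, 6),
    ("Consolidate duplicate title tags", 7, 6, 8),
    ("Add structured data to blog posts", 4, 3, 5),
]

# Sort by PIE score, highest first, ready to hand to the implementing team.
ranked = sorted(
    ((name, round(pie_score(p, i, e), 1)) for name, p, i, e in recommendations),
    key=lambda item: item[1],
    reverse=True,
)

for name, score in ranked:
    print(f"{score:>4}  {name}")
```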

I asked Stephen to provide some further detail as to how he scores against importance, to help us see where the score originates from:

“To put a numerical value against importance, number the business’ priorities from 1 to 10 (with 10 being highest). For example, the business’ biggest priority might be to launch in a particular new territory by X date, so SEO actions that move the business towards that will have a high priority. 

Something mission critical (hreflang? Platform can’t go live without a particular canonical setup?) would be given a 10. The business has other priorities too – and making more money will always be up the list – so revenue driving change will usually score well.

Assigning potential usually comes with experience, or with previous case studies. I often reach out to someone who’s done what I’m trying to do for a chat and ask them which actions they took that did the most good, e.g. a brand I’m working with might be looking to understand if they should implement AMP and I’ve never done that, I’ll ask someone who has done it what impact it had on site speed, visibility, traffic, conversion. If the answer is “not very much” I’ll score it low and score other site speed changes higher.

PIE scoring is not in itself a business case most of the time – just a way of understanding what you should spend your time pushing through.”

That final part – “scoring is not in itself a business case” – is pretty critical here, it seems; that’ll need more thought depending on who you need to get past to get your recommendations onto the dev roadmap.

A noteworthy mention here for Alexander Außermayr, whose Priority, Effort & Impact system (PIE in effect) is a stripped-down version of the one Stephen discusses.

By removing the need to add scores in each category, he removes a degree of complexity, which could help in some circumstances.

Impact/Difficulty Scale – Aleyda Solis

Aleyda’s method is – like Stephen’s – something that’s been covered before, but many appreciated the simplicity and elegance behind her thinking.

Quite simply, plot Impact on the Y axis & Difficulty on the X axis – see below for an example. Anything in the top left (high impact, low difficulty) is HIGH priority. Conversely, anything along the X axis (either low impact or high difficulty) is deprioritised altogether. Simple!
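To show how that chart translates into a decision rule, here’s a minimal sketch – assuming a 0–10 scale for both axes and a midpoint cut-off, neither of which comes from Aleyda’s own write-up:

```python
# A minimal sketch of the impact/difficulty chart as a decision rule.
# The 0-10 scale and the midpoint cut-off are illustrative assumptions.

def chart_priority(impact: float, difficulty: float, cutoff: float = 5.0) -> str:
    """Top left of the chart (high impact, low difficulty) is HIGH priority;
    anything with low impact or high difficulty is deprioritised."""
    if impact >= cutoff and difficulty < cutoff:
        return "HIGH priority"
    return "Deprioritised"

print(chart_priority(impact=8, difficulty=3))  # HIGH priority
print(chart_priority(impact=2, difficulty=2))  # Deprioritised (low impact)
print(chart_priority(impact=9, difficulty=9))  # Deprioritised (high difficulty)
```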

As with Stephen & the PIE score, my main question to Aleyda was how she determines the impact rating – as this is often one of the trickiest parts of the equation:

“How I determine potential impact is to prioritize SEO recommendations (along with difficulty), using the following criteria from the areas/pages/queries: 

  • The search popularity/volume of the queries that the pages with the issues target –  giving more importance/higher priority to those that target more popular/searched for queries.
  • The role of the pages in the customer/conversion journey of the site, giving more priority to those that play a direct role in the conversion process (eg: for an ecommerce/transactional site, optimizing category or products pages will tend to have a higher direct criticality towards the business vs. the help/ support/ blog/ non-transactional ones). The same with main categories vs. sub-categories or faceted pages.
  • The criticality of the specific SEO opportunity towards the page’s rankings: If it’s a fundamental configuration issue which current optimization is “low” – like duplicate title tags that don’t even include specific targeted terms per page – vs. “nice to have, but not direct influence or so critical towards rankings” – like structured data implementation – I will tend to give highest priority to those more fundamental issues, that have a low level of optimization, which will likely have a higher effect in rankings if fixed.

So, it depends a lot on the context of the optimization findings too and the areas where they will have an effect, understanding of their targeted queries behavior, rankings vs. competitors, besides their role in the conversion journey.”

Context of the problem & how it interlinks with everything else appears to be key – you will struggle to know the impact of issues without understanding their place in the wider whole.
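To make the impact side of that equation a touch more concrete, here’s a hedged sketch that folds the three criteria Aleyda describes – query popularity, role in the conversion journey, and criticality of the issue – into a single impact number. The weights, the log scaling of search volume, and the category labels are my own illustrative assumptions, not part of her method:

```python
# Hypothetical sketch: blend the three impact criteria into one score.
# Weights, the log scaling of search volume, and the category values are
# assumptions made purely for illustration.
import math

ROLE_WEIGHT = {"transactional": 1.0, "category": 0.8, "support": 0.4, "blog": 0.3}
CRITICALITY = {"fundamental": 1.0, "nice-to-have": 0.3}

def impact_score(monthly_searches: int, page_role: str, issue_type: str) -> float:
    """Combine query popularity, conversion-journey role, and issue criticality."""
    # Roughly 0-1 for anything up to ~100k monthly searches.
    popularity = min(math.log10(monthly_searches + 1) / 5, 1.0)
    return round(
        0.4 * popularity
        + 0.3 * ROLE_WEIGHT.get(page_role, 0.3)
        + 0.3 * CRITICALITY.get(issue_type, 0.3),
        2,
    )

# Duplicate titles on a popular category page vs. structured data on a quiet blog post.
print(impact_score(40_000, "category", "fundamental"))   # ~0.91 - prioritise
print(impact_score(500, "blog", "nice-to-have"))          # ~0.40 - lower priority
```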

Difficulty, Impact + Roadmap – Ric Rodriquez

Ric’s method sounds a lot like Aleyda’s & Stephen’s, except he’s more explicitly addressing the current IT roadmap as a big influencer in the process.

Looking at the IT roadmap is really another element of difficulty/ease of implementation, but breaking it out as its own factor helps remind us all of something really important – the roadmap wrecking ball often doesn’t care what your audit says.

Going perhaps one step further was Andrew @Optimisey with one of my favourite responses: his input is a far more strategic move with the client’s C-suite in mind.

What feels particularly smart here is that getting more buy-in with early progress makes the tougher stuff much easier to get done.

Going Beyond the Audit Process

It’s hard to overstate the importance of thinking beyond the audit when writing up your recommendations. Maria, Head of Technical SEO @OnelyCom, even took to charting her steps before the audit starts:

Whilst this is another facet of the journey to understanding the ease/complexity around implementation of audit recommendations, running this kind of check before you start can really ensure the audit is quicker and more effective.

I asked if she could expand on this one further:

“My approach is determined by the fact that I work at an SEO Agency (not in-house), so every time we prepare an analysis, we know nothing about the site. Analyzing issues and preparing recommendations without having a detailed plan is wasting the resources of both ours and our clients.

Before starting the main audit we collect the sample data, review the structure, find general problem areas and assess their scale. As a result, we can create a plan and assign risks & opportunities to a given area. We discuss the plan with our client and set priorities together, then start a progressive analysis of every problem area we identified during the initial audit.

It’s like looking at a long corridor with 24 closed doors. We don’t know what’s behind them, but we can open them slightly to see inside and then decide which rooms should be cleaned first.”

If Maria added some food for thought at the start of the process, Ruth provided some sage words for the end.

Not only should we be ready to answer the tough ROI questions when we deliver the audit, but we should also build post-delivery consultation into the mix to ensure planned activities are more likely to go ahead as planned.

Conclusions – How Do We Ensure We Prioritise the Right Fixes?

There were no significantly different views stemming from the conversation – all were variations on a similar method.

The poll question itself, whilst receiving a great response, wasn’t the most scientific. There’s a strong chance that there’s a bias in the answers based on the options provided.

This result, whilst overwhelming, doesn’t “feel” right. Do my feelings really count here? No, not really. But the disconnect is that the premise of the poll was essentially “we – as an industry – struggle to prioritise what’s right”; however, nearly 80% of respondents (around 800 people) say they have the intention to.

Simon & I clearly have some shared experiences with poor audit recommendations.

There are two conclusions I can draw from this; either:

  1. My initial feeling wasn’t right
  2. Wanting to provide the right recommendations & actually doing it is really hard.

Maybe both are right (quite likely), but actually #2 is the nub of the problem – the impact of a recommendation is really hard to be sure of.

Context appears to be the #1 factor in the vast majority of the feedback I received; issues feed into each other and amplify or rule out each other in various ways. Intuition and experience appear to be the main tools for effective prioritisation of tech recommendations – but is there any way of removing the element of subjectivity from this?

I’m planning a follow-up to this which instead looks at how we might all approach the prioritisation process itself in a more formalised way, but until then, if you have any thoughts/feelings on this post or the original poll itself, please get in touch via Twitter or the Contact Us page.