App referral traffic to your web site and how to track it

Measuring referrals from mobile and tablet apps to web sites is extremely difficult – actually, it's practically impossible. Over the past few years you will have seen traffic from direct/typed/bookmarked sources increase steadily as app usage has increased. Unfortunately, this is not because your web site has become a destination for your chosen content, but because your analytics platform is unable to attribute traffic to apps.

I am finding traffic from social media apps – specifically Facebook and Twitter – the hardest to track down. There are techniques that I discuss below that can help you to track content that you post yourself, but unfortunately this doesn't help with organic sharing.

While Facebook Insights for domains does give you some level of overall referrer information, it does not break down the traffic between desktop and mobile.

This post explains why your analytics package is currently unable to track this traffic and tries to find some solutions to help you to make sense of it all.

So how do analytics platforms track referrals?

Most analytics packages use header information sent by your user's web browser to determine which site the user visited before arriving at your web site. This header information only appears if a user clicks on a link to your site. For example, if I am on web site A and click on a link to web site B, then the header information would show that the referrer was web site A.

Just so we are clear, if I am on web site C, then type web site D’s domain name into my web browser, then there would be no referrer and the traffic source would be Direct.
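The two examples above can be sketched directly: browsers expose the referring URL to JavaScript as `document.referrer`, and a small classifier like the one below (the function names are my own illustration, not any real analytics package's API) shows exactly how the "direct" bucket arises.

```javascript
// Classify a visit's traffic source from the referrer string.
// An empty referrer (typed, bookmarked, or an app hand-off) becomes "direct".
function classifySource(referrer, ownHost) {
  if (!referrer) return "direct";          // no referrer sent at all
  const host = new URL(referrer).hostname;
  if (host === ownHost) return "internal"; // navigation within the same site
  return "referral: " + host;              // genuine click-through from another site
}

// Site B sees site A as the referrer...
console.log(classifySource("https://site-a.example/page", "site-b.example"));
// ...but a typed URL, or a link opened from an app, arrives with no referrer.
console.log(classifySource("", "site-b.example"));
```

In a real browser the first argument would be `document.referrer`; in an app-opened browser that string is simply empty, which is the whole problem.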

No referrer in the header

This latter example explains why measuring app traffic is extremely difficult; apps are not web sites, they are not viewed in a web browser, and they do not send referrer information when linking out to a web site.

There are two different techniques that apps can use to open up web content:

  1. Open the content in a native mobile or tablet browser – such as Safari, Chrome, Android Browser, Opera etc
  2. Open the content in an in-app version of the native browser

In both cases, because the app is opening up a new instance of a web browser – whether it's the native browser app or the in-app version of it – there is no referrer in the headers. So when your analytics package sees the user visiting your web site, it finds no referrer, deems the traffic to have no source and assigns that visit as direct.

So how can we track app referral traffic?

There are a couple of things that we can do as content creators to get around this:

Google’s UTM Tracking: https://support.google.com/analytics/answer/1033867?hl=en

If you use Google Analytics, you can use the built-in UTM tracker to track links that you post to other web sites or platforms. Quite a few URL shortening and sharing platforms such as Buffer, Bit.ly and Ow.ly allow you to add these dynamically so you don't need to keep adding the tracking manually.

When a user visits your site via a UTM-tracked link, this will override what is contained in the web browser's header and attribute the source of the traffic to the values that you add to the tracking. You need to be very careful here, as I have seen instances where these have been set up incorrectly – links posted to Facebook with Twitter as the source – which makes the data inconsistent and unreliable.

One thing to note: only use this method for external, inbound links. You should not use these parameters on internal links, as they automatically start a new visit – meaning that you will see a spike in visits whenever someone clicks on one of your tracked links.
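The UTM parameters described in the Google link above are just query-string additions, so a tiny helper keeps them consistent and makes the Facebook-link-tagged-as-Twitter mistake harder to make. This is my own sketch, not a Google API; the example values are illustrative.

```javascript
// Append Google Analytics UTM campaign parameters to a URL.
// Centralising this in one helper makes it harder to mislabel the source.
function addUtm(url, { source, medium, campaign }) {
  const u = new URL(url);
  u.searchParams.set("utm_source", source);     // e.g. "facebook", "twitter"
  u.searchParams.set("utm_medium", medium);     // e.g. "social"
  u.searchParams.set("utm_campaign", campaign); // e.g. "spring-launch"
  return u.toString();
}

console.log(addUtm("https://example.com/article", {
  source: "facebook", medium: "social", campaign: "spring-launch",
}));
```

Shorteners like Bit.ly effectively do the same thing for you when you configure campaign tracking there.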

Using a similar service for other analytics platforms:

Many other analytics platforms have a similar method for campaign tracking. While these have primarily been used for more traditional marketing campaign tracking, there is no reason why you cannot use the same method for the sharing of content.

Limitations

This only works for content that you share yourself. There is no way to enforce it for organic shares – for example, when a user copies and pastes a URL from your web site into a social status update.

Also, as you cannot post content to a specific device or platform, you cannot easily differentiate between a social media platform's desktop, mobile web and mobile app experiences. So while you may get closer to tracking Facebook traffic, you will still not know the make-up of desktop, mobile web and mobile app referrals.


Mobile Native Browsers vs In-App Browsers

I was surprised to find that Adobe Analytics (Adobe SiteCatalyst or Omniture, depending on how far back you go) doesn't allow you to report on mobile browsers. I've been using Adobe SiteCatalyst for many years and had always assumed the ability was there – I am actually a little ashamed that I hadn't noticed this before.

SiteCatalyst Device Type with Browser Breakdown

The main reason for me to look at this was that a colleague sent me a link about Embedded Mobile Browser Usage on http://www.lukew.com. The blog post is mainly about why developers should test on in-app versions of mobile browsers as well as native browsers, because the in-app versions of Safari and the native Android browser have less functionality and lower rendering performance. Luke explains that the in-app version of Safari, for example, does not include the Nitro JavaScript engine, certain Safari caches, or the "asynchronous" rendering mode – meaning that web sites could appear to load more slowly, or contain errors, in ways that would not be detected when testing on the native versions.

As Facebook and Twitter app usage continues to grow, it's going to become more and more important to understand the differences between these browser types and how users browse your sites when using these apps. In addition, when users click on your content from apps such as Facebook and Twitter, they appear as either Direct or Typed/Bookmarked, depending on which analytics package you use and whether you are using any form of campaign tracking.

When I realised that Adobe Analytics does not track mobile device browsers, I turned to our implementation of Google Analytics. Fortunately, GA does track in-app browsers on iOS – so you can see data for Safari and Safari (in-app).

What I have found in my initial dive into this is that users browsing our sites with an in-app version of the native browser consume less content (in terms of pages per visit), have a higher bounce rate and stay on the site for about half as long as users of the native version.
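If your analytics package doesn't separate these browsers for you, a rough user-agent heuristic can: on iOS, an in-app (UIWebView-based) browser typically reports an iPhone/iPad user agent without the "Safari" token that the native browser includes. This is a sketch of that heuristic only – user-agent strings vary and can be spoofed, and the example strings below are simplified illustrations.

```javascript
// Heuristic: an iOS in-app browser usually reports an iPhone/iPad user agent
// *without* the "Safari" token that the native browser includes.
// Rough sketch only -- user-agent strings vary and can be spoofed.
function isIosInAppBrowser(userAgent) {
  const isIos = /iPhone|iPad|iPod/.test(userAgent);
  const hasSafariToken = /Safari/.test(userAgent);
  return isIos && !hasSafariToken;
}

// Simplified example user-agent strings:
const nativeSafari = "Mozilla/5.0 (iPhone; CPU iPhone OS 7_0 like Mac OS X) AppleWebKit/537.51.1 (KHTML, like Gecko) Version/7.0 Mobile/11A465 Safari/9537.53";
const inApp = "Mozilla/5.0 (iPhone; CPU iPhone OS 7_0 like Mac OS X) AppleWebKit/537.51.1 (KHTML, like Gecko) Mobile/11A465 [FBAN/FBIOS]";

console.log(isIosInAppBrowser(nativeSafari)); // false
console.log(isIosInAppBrowser(inApp));        // true
```

You could feed the result into a custom dimension or variable so that in-app visits can be segmented alongside the Safari / Safari (in-app) split that GA provides.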

In-app browsers vs native mobile browsers

What is going to be hard to work out is whether this is due to behavioural or technical factors.

If the site renders slowly, contains certain errors, or is not fully optimised for the in-app experience, you could attribute some of the reduced usage to that.

In behavioural terms, you need to consider what users are doing. If I am in my Facebook app on my phone, it's probably while I am in the middle of doing something else – potentially on the toilet, in a meeting (hopefully my boss is not reading this) or in transit – so I have limited time to view content.

Also, to take this point further, if I am in my Facebook app, I'm also interested in seeing what other content my friends have posted. So I might just read the article that a friend posted, then click back in the app to return to my feed and continue my Facebook journey.

One of the main things that I tell colleagues at my job is that analytics is great at telling you what happened. Finding the reasons behind a person's behaviour is not generally achievable. You can find commonalities between certain types of users, segment them and try to implement changes to convert users to behave in the same way. But understanding the drivers for users to act in a particular way is not something that I believe analytics will answer.

You need to be talking to your users and understanding why they are doing what they are doing – whether it be done through social media, email or surveys. You really need to have all the information to make decisions and I believe that Analytics is only one part of it in this case.

How Google’s secure search is hurting analytics

Ever since Google introduced the ability to use its search engine over a secure (SSL/HTTPS) connection in 2011, the visibility of search keywords has been steadily decreasing, culminating in a predicted 70% of search keywords not being reported by September 2013, according to this blog post about Google Query Data Disappearing at an Alarming Rate on the RKG Blog.

That figure was only 43% as recently as July 2013. So why is this happening, and what impact is it having on analytics?

Google has been moving users to the SSL version of its search results pages to protect the privacy of its users and ensure that prying eyes cannot listen in on what they are searching for. Google seems to have stepped up this tactic ever since the public became aware of various hacks and leaks to WikiLeaks.

The impact this is having on analytics is that when a user searches for something over a secure connection, the keyword(s) that the user searched for are removed from the referral string. This means a referral is detected from a search engine but, as the keyword has been removed, it is logged as Not Available (or the equivalent in the relevant analytics package).
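You can see the mechanics by parsing the referrer yourself: a traditional Google search referrer carries the query in the q parameter, while the secure version simply omits it. A small sketch (my own helper, not an analytics package's API):

```javascript
// Extract the search keyword from a Google referrer, if present.
// Over SSL the q parameter is stripped, so this returns null -- the
// "(not provided)" / Not Available case your analytics package reports.
function searchKeyword(referrer) {
  const u = new URL(referrer);
  if (!/google\./.test(u.hostname)) return null; // not a Google referral
  return u.searchParams.get("q");                // null when the keyword is removed
}

console.log(searchKeyword("http://www.google.com/search?q=casual+games"));
console.log(searchKeyword("https://www.google.com/search")); // secure: no keyword
```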

As mentioned previously, it is assumed that up to 70% of search terms will be affected by this by the end of September 2013. The huge recent increase in these secure searches has come from Internet Explorer, which from IE7 onwards only supports the secure version of Google's search results.

Suddenly, the impact of any SEO improvements you make to your site cannot be tracked effectively using your standard analytics package.

One potential way of getting insight for SEO purposes is to create an Advanced Segment where the referrer type is Search Engines and the keyword is unavailable. Once you apply this segment, you can look at the performance of your top landing pages, trend it over time and validate any changes that way.

This issue affects all analytics packages, including Google Analytics.

Suddenly, SEO just got a lot harder for everyone. Ironic, really, as Google is trying to eliminate what it terms 'search spam' from its search results pages – see this video from Matt Cutts – but it is not allowing website owners to see how these changes affect them or how best to optimise their sites through data.

It feels like they are giving with one hand and taking with the other.

Prioritising a backlog doesn’t need to be rocket science!

The common component in most agile software development practices is the backlog – a list of user stories, often written on cards, each representing a feature or a piece of work that needs to be completed by a member of the development team.

This backlog of work needs to be prioritised – and I have seen different people do this in different ways.

I've seen these items prioritised individually in Excel by a single person, where many different factors are scored, weightings applied and a calculation used to give a final score. This took a lot of time and a lot of toing and froing – all the while the development team was working on lower-value items. It made no sense to me.

The list is then ordered from largest value to smallest, resulting in one person's opinion of what the list should be. This did not take into account any development team member's experience, or any other teams that might be impacted by the work.

Any new items would need to be scored in this fashion, debated extensively and then slotted in to the backlog.

As far as I am concerned, this is not the way to prioritise work – especially in an agile environment.

A big part of agile is collaboration, responding to change and having working software at the end of each release. Why not apply the same thinking to prioritisation? Here are five principles:

Get everyone together

If someone is involved in or impacted by the functionality being discussed, get them involved in the prioritisation – within reason, obviously. At least have each department represented, and ensure that there is one mediator and someone who can make a final decision if there is a disagreement.

Understand the goals

Set out at the start what the strategy is for your product and set out the objectives. Each user story or card needs to be constantly compared to these objectives and ranked according to how far it goes towards achieving these goals.

Relative priority

Keep things simple. Select a starting point by placing a card in the middle of a table. Discuss this card and ensure that everyone involved understands exactly what it represents and how it helps to achieve your product's strategy and objectives.

Select the next card, discuss what it is and explore its benefits, risks, amount of dev work and so on, then place it above the first card if it is more important or below it if it is less important.

Keep doing this with each card until you have a fully prioritised backlog. If it's a big backlog, you may need to split this into a couple of sessions.
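The card-sorting exercise above is essentially an insertion sort driven by pairwise "is this more important?" decisions. A rough sketch in JavaScript – the function names and the numeric scoring are my own illustration; in the real exercise the comparison is a team discussion, not a score:

```javascript
// Insert a new card into an already-ordered backlog by walking down the
// list while the existing cards are still judged more important.
function insertByPriority(backlog, card, moreImportant) {
  let i = 0;
  while (i < backlog.length && moreImportant(backlog[i], card)) i++;
  return [...backlog.slice(0, i), card, ...backlog.slice(i)];
}

// Toy example: priority encoded as a number purely for illustration.
const ordered = insertByPriority(
  [{ name: "play a game", p: 10 }, { name: "premium look and feel", p: 3 }],
  { name: "play all games", p: 8 },
  (a, b) => a.p > b.p
);
console.log(ordered.map(c => c.name));
// -> ["play a game", "play all games", "premium look and feel"]
```

The useful property is the same as with the cards: adding one new item only requires comparisons against its neighbours, not a re-score of the whole backlog.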

Be flexible

Through discussion, you may find that items are no longer required or need more definition. You can do this in the meeting itself or separately.

Future proof

Once finished, when new items come up they can be easily prioritised, as all members of the team understand the backlog and how it is made up. You simply place the new feature in the backlog relative to everything else.

In my experience, this is by far the best way to prioritise work. Having items prioritised relative to other work gets the business to understand that they cannot have everything all at once and that a decision has to be made as to which item is more important than another. You do not end up with an ignored spreadsheet full of items with the same score.

Items can be redefined as you go, and the team as a whole has been included and feels valued. The whole team has a voice. This leads to a highly motivated team who have bought into the product and what it is trying to achieve.

YouView opens up its trial to the public

YouView, a joint venture between BBC, ITV, C4, C5, BT and TalkTalk, has today opened up its preliminary trial to the public. If you are interested, you can register via this special Trials web site: https://trials.youview.com/login

Here is some more information from the YouView site to better explain what YouView is:

A little while ago, some of the UK’s best-known Broadcasters and Broadband Internet companies got together to talk about TV and how they could make it better.

They agreed that while everyone loves great TV, it’s frustrating when you miss a great show – and it’s never quite the same on a computer. Wouldn’t it be great if you could combine the simplicity and value of Freeview with the choice and convenience of catch-up and on-demand services – all on your own TV?

So they decided to create YouView to do just that – all together in the same set top box with no fuss, no computer and, best of all, no TV contract.

You get all your favourite Freeview or Freesat channels, plus the last seven days’ catch-up TV, and for those who want even more, the choice of on demand and pay TV – like films, sports and US drama.

Our unique programme guide will go back in time, so you can watch shows from last night or last week today. You’ll also be able to search for things by title or even an actor’s name.

And that’s just the beginning.

Because YouView is open, developers can create apps that can make your TV do all sorts of things. And all kinds of providers will be able to put their content on YouView. As YouView develops there’ll be more and more content available from the Internet as well as lots of ways to personalise the way you watch.

All this with no monthly TV subscription. You’ll just need a YouView box and broadband Internet.

Brilliantly simple, and simply brilliant. YouView will be everything TV should be.

What is Validated Learning and how can it be applied to Kanban?

I've been reading The Lean Startup by Eric Ries, and he has introduced me to the concept of Validated Learning. Validated Learning is the practice of effectively measuring the accuracy of assumptions and using the results of the validation to understand whether an assumption was correct – and if so, continuing on to the next test. If not, you decide whether your strategy, assumption or feature needs to be improved, or whether to change direction.

The main thing that I found interesting was an example of how this could be integrated with Kanban, a practice that I have been an avid user of since launching Mousebreaker.com in 2008.

The example spoke of the founder of Grockit, an online teaching site that enables students to learn either socially with other students online or individually. They decided that a user story was not actually delivered until it had been confirmed as a success through the measurement of Validated Learning.

In addition, each feature is aligned to the product's strategy and therefore has a set of assumptions associated with it. The process of Validated Learning either proves or disproves those assumptions.

If a feature is found not to have been a success – it has not actually improved the product – then that feature is removed.

In the example, Grockit uses A/B Testing and cohort analysis to validate the success of the feature being delivered by the development team.

A/B testing allows you to serve different versions of pages, or parts of pages, to different proportions of your users and measure a metric such as registrations or visits to sections of your site – although you can measure much more than this. You can then determine which version works best and make the winner available to all users.
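The core of any A/B test is stable bucketing: the same user must always see the same variant, or the metric comparison is meaningless. A minimal sketch of deterministic assignment (my own illustration, not any particular testing tool's API):

```javascript
// Deterministic A/B assignment: hash the user id so the same user always
// sees the same variant. Minimal sketch, not a real tool's API.
function abVariant(userId, variants = ["A", "B"]) {
  let hash = 0;
  for (const ch of userId) hash = (hash * 31 + ch.charCodeAt(0)) >>> 0;
  return variants[hash % variants.length];
}

// The same user is always bucketed into the same variant, so their whole
// journey -- registration, pages viewed -- can be attributed consistently.
console.log(abVariant("user-42"));
console.log(abVariant("user-42") === abVariant("user-42")); // true
```

In practice you would record the variant alongside each measured event and compare conversion rates per variant before declaring a winner.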

Cohort analysis is very similar but takes place a step before the A/B testing. You may use a number of different marketing channels with a number of different messages, for example. Each user that arrives from a given channel/message is associated with that channel/message and then tracked throughout their lifetime on your product. This allows you to understand a cohort's lifetime value, for example, so that you can understand the value of those messages or that channel.
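The comparison described above – lifetime value per acquisition channel – is just a group-and-average over tagged users. A toy sketch with made-up data:

```javascript
// Group users by acquisition channel and compute average lifetime value
// per cohort. Field names and values are illustrative only.
function cohortValue(users) {
  const totals = {};
  for (const { channel, lifetimeValue } of users) {
    totals[channel] = totals[channel] || { sum: 0, n: 0 };
    totals[channel].sum += lifetimeValue;
    totals[channel].n += 1;
  }
  const avg = {};
  for (const [channel, { sum, n }] of Object.entries(totals)) {
    avg[channel] = sum / n;
  }
  return avg;
}

console.log(cohortValue([
  { channel: "facebook", lifetimeValue: 4 },
  { channel: "facebook", lifetimeValue: 6 },
  { channel: "email",    lifetimeValue: 12 },
]));
// -> { facebook: 5, email: 12 }
```

The important part is the tagging at acquisition time; once each user carries their cohort label, any later metric can be sliced the same way.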

You can use each of these methods exclusively or together and measure against the metrics that matter to your business. What you are looking for is actionable data – something that you can make a decision on or use to understand the value of a change.

I really like this concept; however, it does seem to go against the basic principles of Kanban. Kanban is an agile methodology that is used, partially, to eliminate waste during the build process of a product through practices such as limiting work in progress and kaizen – the constant improvement of the process.

My question is whether removing a feature that is not deemed a success counts as wastage; and if it does, could it have been avoided?

Whenever I have built a web site and we have launched a new feature, we have measured and iterated based on the data collected until it has worked – a form of validated learning. However, if a feature has not worked, I have seldom, if ever, removed it. My main thought was that removing it would make the time spent developing the user story a waste.

Perhaps I would have removed the feature if it had a detrimental effect on the metrics used to validate the functionality.

I would be happy to do this as long as it informed decision making in the future. This is what Eric Ries is arguing. A user story or feature that is being developed is actually a test of a theory or an assumption. You are assuming that this new feature will improve the product and will improve a core measurement to your business.

If it does not do this, why keep the feature live? If it does not deliver enough improvement, why keep it live? As long as the lessons learned help to change the strategy or assumptions around your business, you should be able to reduce wastage, as your strategy should be directly influenced by validated learning through the use of your product.

I really like this concept and will keep this in mind when I next get the opportunity. I will definitely be posting about it once I have some personal evidence of this in practice.

Please comment below if you have used Validated Learning in practice, and let me know how you have found it.

SEO is more than just tags on a page

In short:

  • Have a strategy. Understand which keywords you want to target and why.
  • Benchmark yourself. Have a clear understanding of how you currently rank on the keywords you are targeting.
  • Decide how you want to roll out your changes – all at once or one at a time. If you do it all at once, you don't necessarily know what worked and what didn't.
  • Sit and wait. Sometimes it can take a couple of months to see improvements in rankings.
  • Make Google Webmaster Tools your friend and if you use Google Analytics, get the two linked together as this is quite a formidable pairing.

I wrote an email to an Accenture colleague with some thoughts on SEO. She had forwarded on an email from other Accenture colleagues that had loads of important information on the more technical side and how to implement good SEO. But there was nothing there about how to find the correct keywords to target, benchmarking where you are now and accurately measuring where you want to get to.

While this is in no way a complete solution or a finalised approach, it will hopefully give you an idea of a good approach to setting an SEO strategy, with some useful tools that you can use as well.

This was my email:

The main thing to remember with SEO is that it's not just a technical solution – having the correct tags and well-structured HTML is only a part of it. You need well-thought-out copy and many high-quality sites linking to you with relevant, keyword-rich anchor text.

The first thing that I would do is come up with a strategy. What is it that we are trying to achieve? Which keywords do we want to improve on? What does success look like? Which of my competitors are doing well? Where are the gaps that I can take advantage of?

Some tools that I would recommend are industry-standard ones such as Hitwise, Google Trends and ComScore. Hitwise offers the ability to look at which keywords drive traffic to you and to your competitors. It can also compare you against an industry, individual sites or a group of sites that you define yourself. This helps you to identify the opportunities that you can capitalise on.

Google Trends is a free tool that gives you an idea of how often nominated keywords are searched for in different territories. It doesn’t give you actual numbers but does give you an indication.

Then there's the tech side of things: reviewing your HTML, and looking at microformats and other more granular HTML tags that Google and some other search engines still support. Site speed is also quite often taken into account, so using Firefox plugins like YSlow and other online tools to help speed up the rendering of pages in a browser is important too. Looking at how your site is cached and trying out different pre-warming techniques can also help with this.

Another great tool in the SEO arsenal is landing pages. If you are trying to compete on specific keywords, create bespoke landing pages for those keywords with relevant, keyword-rich copy and links through to relevant products – again with keyword-rich anchor text.

A Practical example of Feature Driven Development

Feature Driven Development (FDD) is often theorised about on many web sites, with blog posts, articles and essays published on a regular basis. This blog post will give you a much-needed practical example of it in use.

One article that is worth pointing out is DZone’s Introduction to Feature Driven Development. This is part one of a two part article describing a theoretical project and a theoretical team and the first three of five steps to achieving Feature Driven Development. It is extremely well written and gives you some really good insight into what is needed.

In my experience, when you take these theories and methodologies and apply them to real-life situations and projects, they need adapting and shaping to fit what you are trying to deliver.

An example of this is when I was leading the Product Development across five different web sites and one development team. The way in which I had implemented the Kanban methodology was different for each site due to different stakeholders and different commercial strategies needing to be delivered.

Anyway, back to a practical example of Feature Driven Development.

The example that I am using is the build of Mousebreaker, a casual gaming site that utilised a mixture of Kanban and Feature Driven Development to quickly and effectively deliver a new web site with a new code base in 28 days.

Traditionally, my approach had always been to gather all requirements, build the infrastructure, then the code, and finally the front end for a web site. This information gathering and the writing of functional and technical specs can take a long time to complete. Then, when development begins, the whole spec needs to be delivered before the site can launch. By which time six months have passed, requirements may well have changed, and what is delivered is not necessarily what the business or the market needs.

Feature Driven Development tries to get around this by defining the requirements as features, then the business owners and development teams prioritise these features into a backlog of work and then the developers deliver these features in the order that offers the most business value.

One thing to note is that there is some pre-work that needs to happen before development can start. The general technical approach needs to be agreed; technologies need to be discussed, terminology needs to be agreed and basic development, testing and live environments need to be created.

Certain standards would also have been discussed, such as coding, SEO and accessibility standards, and any automated testing. Any front or back end frameworks to be used should be discussed as well.

If you look at the Mousebreaker site, you will see that the primary user function is playing Flash games. So the first feature that was worked on was: the user needs to be able to play a game on a web page.

At first, the developers' approach was to start building a database infrastructure that could be used for the whole site. They also wanted designs for pages and so on. You need to be careful here, as that is not what was required by the feature. All the feature required was for the user to be able to play a game. Nothing else.

So to deliver this feature, all that was required was a static html page with some embed code that would allow a user to play a game. The game needed to be in a web facing folder.

Once complete and tested, the feature can then be released.

The next story was that a user needs to be able to play all games on a web page.

This is where the database gets created and the initial html page is turned into a template. Again, the developers only needed to create a database that delivers the feature’s requirement.
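Feature-sized delivery like this can be sketched in a few lines: the "database" is just enough data to serve every game, and the original static page becomes a template. All names and markup below are my own illustration, not Mousebreaker's actual code:

```javascript
// Minimal "database": just enough rows to satisfy the feature
// "a user can play all games" -- no full site schema yet.
const games = [
  { slug: "penalty-shootout", swf: "/games/penalty-shootout.swf" },
  { slug: "table-tennis", swf: "/games/table-tennis.swf" },
];

// The original static HTML page, turned into a template with one variable.
function gamePage(game) {
  return `<html><body><embed src="${game.swf}" width="640" height="480"></body></html>`;
}

console.log(gamePage(games[0]).includes("penalty-shootout")); // true
```

Everything else – navigation, designs, the premium look and feel – stays out of scope until its own feature reaches the top of the backlog.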

In the meantime, while the initial features were being delivered, the designers were working with the development and business teams to deliver the designs for the site. There was a further feature for the site to have a premium look and feel that eventually would need to be delivered which could be applied to the site around the templates that were being delivered.

This felt a little back to front, but you need to remember that we were delivering features in the order of business priority.

As the features kept being delivered, the site quickly started to take shape. Throughout the development, the business representatives were always attending the stand ups and were constantly making decisions on scope of work and what would be required for launch.

We found that the close collaboration between the business and the development team was the most effective way of managing scope and ensuring that what the dev team delivered is what was expected.

I have applied this form of Feature Driven Development many times and I find that it really works. You do need buy-in and effort from the business owners, and you do need to make sure that the developers focus only on what is required to deliver the features, rather than architecting a full solution before understanding all the requirements.

This allows more of a front-to-back development process, where the features take priority over the implementation. One thing I would like to point out is that there will be occasions where future considerations are ignored or put to one side to get functionality out, and this may result in refactoring work further down the line.

The thing to remember is that you will already have delivered the highest-business-value functionality required at that point, and the business will understand that any refactoring work should also have a value; a discussion can then be had about the options and whether this work needs to be done or not. If it does, and it takes a longer period of time, then allowances should be made for this.

Mobile Apps vs Mobile Sites

I was browsing the ThoseInMedia group on LinkedIn and came across this question: Mobile Sites Vs. Mobile Apps: Which Is Best For Your Business?

It made me think about what had and hadn't worked in previous roles and projects and, like most questions of this kind, it's all about your digital strategy and how you monetise your content.

The main driver for me to make this decision would be to look at my business model and look at what would be the best to deliver this. Then the secondary benefits would slot in, such as deep linking, SEO, marketing etc.

Apps can be expensive to get right. I tend to look to apps if I have a sponsor that can help to fund the development of the app. In addition, it is extremely hard to make profitable apps so having a sponsor on board helps to make the app free to users and to help increase the exposure of the app.

If you do not have a sponsor and you are trying to recoup your investment through charging for your app, remember that it takes a lot of 69p/99c purchases to recoup a five figure investment. According to this report, 50% of games make less than $3k in Apple’s App Store.

If your goal is to increase incremental revenue – ad revenue, sales, etc. – then I would go for a mobile site. You benefit from deep linking, SEO, lower development costs, higher accessibility to more users and so on.

If you do build an app, one piece of advice would be to try to deliver something that a web or mobile site cannot do and also try to utilise the phone’s features – location, camera, etc – as this can help to really set you apart from other apps out there.

In addition, with the ever-improving HTML5 standards, you can achieve many app-like features through a smartphone site. There are also plenty of frameworks out there that allow you to take advantage of the device's capabilities. Frameworks such as PhoneGap allow web developers to deliver app-like experiences through a browser, and ensure compatibility across the main devices such as iPhone, Android and BlackBerry.