Is Responsive Enough?

I originally posted this on my LinkedIn profile, but felt that it was a good one to make available on my blog as well.

Responsive web design, if done correctly, can be a great way to ensure that your users get a consistent experience across all devices. The idea is that you have one code base (HTML and CSS) that covers different screen sizes and renders the same content differently for each of them. When a change needs to be made, it only needs to be made once – but it still needs to be tested across multiple platforms.

Before responsive became more universally used, you would have separate code bases for each rendering of your web site – so for desktop, mobile and tablet, you would have three different code bases. This meant that when a change needed to be made, or you added a new content type, your developers would need to make changes in three different places, and everything would still need to be tested across multiple platforms.

On the face of it, responsive solves a lot of problems. It ensures that only one code base needs to be maintained and that users get a consistent experience across all devices. All this sounds great, but I can’t help but think that we are missing something – motivation and context.

Motivation and Context

Users access the web across different devices for different reasons at different times of day and by using different methods of discovery. Does one design/experience suit all of our users?

With a responsive site (in most cases), the navigational architecture of the site is consistent, the homepage hierarchies are consistent, the related articles are consistent, the most popular articles are consistent etc.

Both from my own experience and from the data that I see in my role as Analytics Director, I find that users want different content depending on which platform they are accessing a site on.

When I am on a desktop device – whether it's at home or at work – I tend to use Google more to find things that I want to know about. I'll search for something specific and will only really want to read about that one thing. This leads me to bounce and not be very valuable to the web site that I am visiting.

Don't get me wrong, there are some sites (mainly centred around social, news and sports) that I am very loyal to, but outside of those I find that I am a serial bouncer when it comes to desktop. I need one-hit information. I am not in browsing mode here.

My mobile phone browsing is driven a lot more by social media. Again, I have my loyal sites that I will frequent directly regardless of device, but overall, my mobile web consumption is driven by what I see on Facebook, Twitter and LinkedIn. I am more likely to be accessing the web during down time – travelling, breaks from work, boredom – so I am much more likely to browse and enjoy my experience.

My tablet is more used as a second screen. I don’t tend to be using it as a second screen based on what I am watching, but more as a way to keep me entertained when my wife is watching Secret Eaters, Celebrity Big Brother, Come Dine With Me etc. So my behaviour here is a hybrid between Desktop and Mobile – which I suppose is one of the main points of having a tablet!

The data that I have access to is showing me the same thing. I find that Google dominates acquisition for Desktop, social for mobile and an equal mix for tablet.

This leads me back to my question – is responsive enough?

I could not agree more that having a single code base makes the most sense. I also agree that content should render correctly across all devices. But what I do not agree with is that one experience suits all.

What happens when users on desktop are mostly interested in, and only access, content that ranks highly in search? News stories and one-shot pieces of information that are important at that moment in time.

And on mobile, what happens when a gallery is proving popular, or a top 10 list drives traffic? This is very different from the desktop experience.

Likewise with Tablet being a hybrid of the two.

So what do I suggest?

I have not come up with a new concept for web site design. What I am suggesting, however, is that we take a step back and see if there is some form of hybrid – where we can take responsive design but make it adaptive to the context of the user.

Ensure that we can change the experience to better reflect what a web site is offering to a specific individual, based on the behaviour exhibited by others and on their context.

To assume that your users visit your site across multiple devices is arrogant – it's something to strive for, but in my experience it is just not the case.

I think that different devices should be designed for differently based on behaviour, and that we need to be more fluid with the site hierarchies and functionality that best suit the devices that the content is being consumed on.

I would really like to hear what other people's experiences are and whether you feel that responsive is enough – or if you think it's missing something and whether you can put your finger on what that is!

Handy Regex Examples for Google Analytics

Regex, or Regular Expressions, can be a seriously powerful tool when creating custom segments, reports and filters. They allow you to go beyond the already powerful click-and-select functionality that comes as standard with Google Analytics.

There are plenty of resources available out there to help with this, but they can get quite technical. I hope to bring together the regex patterns that I find the most useful, and hopefully you will too.

If you have any use cases that you are having trouble with, please add them to the comments below and I will help you to find the best way to use regex to solve them.

I will be keeping this blog post up to date so please do bookmark and check back every once in a while.

Useful Regex

The Wildcard

If you want to see traffic to a specific section of your site using your url structure, then this regex is the one for you. This technique can be applied to anything where you want to match a word or phrase within a string (in this case a keyword in a url).

  • In your filter, ensure that you select Page for the URL, Regex for the condition, and (.*)2013(.*) as the pattern

In this example, the filter will return any page that has “2013” in the url. You could use this for anything where you are trying to match a specific word within a string of letters and numbers.

The regex (.*) is essentially a wildcard that, when placed before and/or after a keyword, allows you to say to Google Analytics: please give me everything where this word appears.
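The behaviour of this filter can be sketched in a few lines of Python (the page URLs here are invented for illustration; Google Analytics applies the same partial-match logic for you):

```python
import re

# The same pattern you would paste into the Google Analytics filter box.
# GA matches partially, so the surrounding (.*) groups are not strictly
# required, but they make the "anything before or after" intent explicit.
pattern = re.compile(r"(.*)2013(.*)")

# Hypothetical page URLs from a report.
pages = [
    "/2013/01/my-new-years-post",
    "/about",
    "/archive/2013-review",
    "/2012/12/old-post",
]

# Keep only the pages whose URL contains "2013".
matching = [p for p in pages if pattern.search(p)]
```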

The OR Regex

There is a regex condition that allows you to find multiple keywords in the page title. Let’s say that you wanted to find all content that relates to Christmas. You could create a filter on a Page Title report that only pulls in data based on a certain set of words.

You do this by using a PIPE:

  • In your filter, ensure that you select Page Title, Regex as the condition and then your keywords – which should look like:
    • christmas|santa|noel|mince pie|holly|mistletoe

The filter will return any pages where the page title contains the keywords listed above. Remember not to add a PIPE at the end of your keyword list as this will result in returning all pages.
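As a quick sketch of how this behaves (the page titles are invented), note in particular how a stray trailing PIPE creates an empty alternative that matches everything:

```python
import re

# The pipe-separated keyword list from the filter above.
keywords = re.compile(r"christmas|santa|noel|mince pie|holly|mistletoe",
                      re.IGNORECASE)

titles = [
    "10 Mince Pie Recipes",
    "Summer Holiday Ideas",
    "Santa's Grotto Locations",
]

festive = [t for t in titles if keywords.search(t)]

# A trailing PIPE adds an empty alternative that matches every title,
# which is why it would return all pages.
bad = re.compile(r"christmas|")
all_match = all(bad.search(t) for t in titles)
```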

The give me something specific Regex

Let’s say that you wanted to filter only traffic from Facebook and Twitter in your All Traffic Report in Google Analytics. To do this, you need to use Regex.

Go to the All Traffic report and select Source as your dimension rather than Source/Medium, then click on Advanced Filter. In the filter, you need to add the following to ensure that you only include traffic from Facebook and Twitter:

  • facebook|twitter|^t.co$

By using the PIPE, you are ensuring that any of the keywords you added are included. The ^t.co$ ensures that you are only including traffic from Twitter’s redirecting url, t.co.

If you hadn't enclosed the keyword with the ^ and $ symbols, you would have seen sources that merely contain t.co – for example pinterest.com and other domains.

  • ^ means starting here
  • $ means ending here
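A short Python sketch (source names invented) shows why the anchors matter: pinterest.com literally contains the letters t.co. Strictly speaking, the dot should be escaped as \. since an unescaped dot matches any single character, though for this filter it rarely matters in practice.

```python
import re

# The same pattern as in the Advanced Filter above.
pattern = re.compile(r"facebook|twitter|^t.co$")

sources = [
    "facebook.com",
    "m.facebook.com",
    "t.co",
    "pinterest.com",  # contains "t.co" as a substring
    "twitter.com",
]

included = [s for s in sources if pattern.search(s)]

# Without the anchors, pinterest.com sneaks in via its "t.co" substring.
unanchored = re.compile(r"t.co")
leaks = unanchored.search("pinterest.com") is not None
```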

Marissa Mayer – The before and after becoming a CEO

Marissa Mayer seems to be everywhere at the moment. Whenever I am on Facebook, Twitter or LinkedIn, I see news about her in all my feeds. Since taking on the CEO and President roles at Yahoo, her profile – already large from her time at Google – seems to have increased exponentially.

One of the things that I realised was that I had never actually seen a video of her speaking. All I ever really saw was a similar shot taken of her smiling when on a forum or during a presentation – much like the image above. So I thought I would search on YouTube for some videos and I stumbled across this relatively short video interview of Marissa Mayer with Charlie Rose for the IAB.

http://www.youtube.com/watch?v=6zP1p8ZTCio

Mayer on The Importance of Listening

In the video, Marissa Mayer discusses what it's like to be a CEO and her early successes. One of the main things she said that resonated with me was this:

I think the most important thing I have learned about being a CEO is the importance of listening. People talk about what changes as you grow in your career. Someone pointed out to me which I think is really true is that early in your career you get ahead by following the rules perfectly. Like getting things done on time, like following the rules and meeting expectations.

Mayer on Breaking the Rules

Another pertinent point that Marissa Mayer went on to make was that it's important to understand when to break the rules and when the rules do not apply.

When you become an executive it becomes knowing when to break the rules or knowing when the rules do not apply.

I must say that I have been impressed with Marissa Mayer and her leadership of Yahoo. They have seen traffic increase, and have been building new products and innovating.

I believe that to grow in the digital space, you need to be doing all of those things. Without building new products, innovating and focus, you will be left behind. Big businesses are competing against small start ups who are amassing large groups of users without worrying about how to monetise this audience until much later.

Too often I see big companies shooting great ideas down because of a lack of investment or a lack of resource. While I understand how traditional businesses work, this is not how a company will grow. They need to take more risks. They need to start pushing the boundaries.

When faced with a new platform, they tend to just try to repurpose their existing products on to it, rather than taking advantage of the technology and trying something new.

I’ll be keeping a close eye on Marissa Mayer and Yahoo as I think that they are on course to grow further and surprise a lot of people on the way.

Leadership – What makes a great leader?

Leadership is something that I am really passionate about, and reading articles that help me to be a better leader is hugely important to me. I just read an article on LinkedIn entitled Great Leadership Starts and Ends with This which really resonated with me.

From The Best Leadership Advice Ever: “I was struggling to engage the audience. Okay, forget struggling — I was dying onstage. Maybe I was having an off day. Maybe they were having an off day.”

In short, Jeff Haden (the article’s writer) was talking in front of a number of CEOs and senior managers and out of the blue asked them what makes a great leader.

Not expecting any response, one of the CEOs chirped up with: “No one cares how much you know until they first know how much you care about them.”

He went on to explain that you can convey to employees what the company strategy, vision and goals are and they might care for a short period of time, but they will then go back to what they normally do. It doesn’t resonate with them and doesn’t necessarily change their behaviours.

Only when a leader demonstrates their passion and how much they care about their employees will it really make a difference. He went on to say:

“We can try to communicate and engage and connect all we want but no one really listens. They just smile and nod and go back to doing their jobs the way they always do.

“Our employees don’t really care about what we want them to do until they know how much we care about them. When employees know — truly know — that you care about them, then they will care about you. And when they know you care, then they will listen to you… and then they will do anything for you.”

This really resonated with me. I have had a few different managers in my time and by far the best one was the one who cared about me and my career. Others who have either just barked orders or dictated what needed to be done and how didn’t inspire me or motivate me.

A great leader trusts their employees

It was under this manager who cared that I really progressed through the company. I completely bought into his strategy, which in turn allowed my team to buy into it as well. I learned that you need to nurture a good working environment and build up a level of trust to really engage with your employees and for them to engage with the business.

This really is the only other point I would make on this. Not only does a great leader care about their employees, they also trust them. Great employees are able to make decisions, try new things, feel comfortable to fail and learn from their mistakes.

My recipe: Great leaders create great employees which in turn makes these leaders even better – repeat.

I know that there is a lot more to it than this, but I felt that this article was worth highlighting as it gave a slightly different, tangible and achievable example of great leadership.

App referral traffic to your web site and how to track it

Measuring referrals from mobile and tablet apps to web sites is extremely difficult – actually, it's practically impossible. Over the past few years you will have seen traffic from direct/typed/bookmarked sources increase steadily as app usage has increased. Unfortunately, this is not because your web site has become a destination for your chosen content; it's because your analytics platform is unable to attribute traffic to apps.

I find traffic from social media apps – so Facebook and Twitter – the hardest to track down. There are techniques, which I discuss below, that can help you to track content that you post yourself, but unfortunately this doesn't help with organic sharing.

While Facebook Insights for domains does give you some level of overall referrer information, it does not break the traffic down between desktop and mobile.

This post explains why your analytics package is currently unable to track this traffic and tries to find some solutions to help you to make sense of it all.

So how do analytics platforms track referrals?

Most analytics packages use header information sent by your user's web browser to determine which site the user visited immediately before your web site. This header information only appears if a user clicks on a link to your site. For example, if I am on web site A and click on a link to web site B, the header information will show that the referrer was web site A.

Just so we are clear, if I am on web site C and then type web site D's domain name into my web browser, there would be no referrer and the traffic source would be Direct.

No referrer in the header

This latter example explains why measuring app traffic is extremely difficult: an app is not a web site, it is not viewed in a web browser, and it does not send referrer information when linking out to a web site.

There are two different techniques that apps can use to open up web content:

  1. Open the content in a native mobile or tablet browser – such as Safari, Chrome, Android Browser, Opera etc
  2. Open the content in an in-app version of the native browser

In both cases, because the app is opening a new instance of a web browser – whether it's the native browser app or the in-app version – there is no referrer in the headers. So when your analytics package sees the user visiting your web site, it sees no referral, deems the traffic to have no source, and therefore assigns the visit as direct.
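The classification logic can be sketched like this (the function name is my own invention; real analytics packages are far more involved, but the referrer check is the heart of it):

```python
# Minimal sketch: classify a visit's source from its HTTP request headers.
def classify_source(headers: dict) -> str:
    referrer = headers.get("Referer")  # HTTP spells the header with one "r"
    if not referrer:
        # Typed, bookmarked, or opened from an app: no Referer header is
        # sent, so the visit can only be attributed as direct.
        return "direct"
    return referrer

# A click from an ordinary web page carries the previous site:
from_web = classify_source({"Referer": "https://website-a.example/page"})

# A link opened from a mobile app sends no Referer header at all:
from_app = classify_source({})
```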

So how can we track app referral traffic?

There are a couple of things that we can do as content creators to get around this:

Google’s UTM Tracking: https://support.google.com/analytics/answer/1033867?hl=en

If you use Google Analytics, you can use the built-in UTM tracking to tag links that you post to other web sites or platforms. Quite a few URL-shortening services and sharing platforms such as Buffer, Bit.ly and Ow.ly allow you to add these dynamically, so you don't need to keep adding the tracking manually.

When a user visits your site using a UTM-tracked link, this will override what is contained in the web browser's header and attribute the source of the traffic to the relevant keyword that you add to the tracking. You need to be very careful here: I have seen instances where these have been set up incorrectly and links with Twitter as the source have been posted to Facebook, which makes the data inconsistent and unreliable.

One thing to note is to only use this method for external, inbound links. You should not use these on internal links, as they automatically start a new site visit – meaning that you will see a spike in visits whenever someone clicks on one of your tracked links.
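As a sketch, tagging an outbound link with UTM parameters might look like this (the helper and values are hypothetical; the parameter names are Google's standard ones from the support article above):

```python
from urllib.parse import urlencode

def add_utm(url: str, source: str, medium: str, campaign: str) -> str:
    """Append Google's standard UTM campaign parameters to a URL."""
    params = urlencode({
        "utm_source": source,      # e.g. facebook, twitter
        "utm_medium": medium,      # e.g. social, email
        "utm_campaign": campaign,  # your own campaign label
    })
    separator = "&" if "?" in url else "?"
    return url + separator + params

tagged = add_utm("http://example.com/article", "facebook", "social", "launch")
# tagged: http://example.com/article?utm_source=facebook&utm_medium=social&utm_campaign=launch
```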

Using a similar service for other analytics platforms:

Many other analytics platforms have a similar method for campaign tracking. While these have primarily been used for more traditional marketing campaigns, there is no reason why you cannot use the same method for the sharing of content.

Limitations

This will only work for content that you share yourself. There is no way to enforce it for organic shares – for example, when a user copies and pastes a url from your web site into a social status update.

Also, as you cannot post content to a specific device or platform, you cannot easily differentiate between a social network's desktop, mobile web and mobile app experiences. So while you may get closer to tracking Facebook traffic, you will still not know the make-up of desktop, mobile web and mobile app referrals.

Mobile Native Browsers vs In-App Browsers

I was surprised to find that Adobe Analytics (Adobe SiteCatalyst or Omniture, depending on how far back you go) doesn't allow you to report on mobile browsers. I've been using Adobe SiteCatalyst for many years and had always assumed that the ability was there. I am actually a little ashamed that I hadn't noticed this before.

SiteCatalyst Device Type with Browser Breakdown

The main reason for me to look at this was that a colleague sent me a link about Embedded Mobile Browser Usage on http://www.lukew.com. The blog post is more to do with how developers should be testing on in-app versions of mobile browsers as well as native browsers, mainly because the in-app versions of Safari and the native Android browser have less functionality and lower rendering power. Luke explains that the in-app version of Safari, for example, does not include the Nitro JavaScript engine, certain Safari caches, or the “asynchronous” rendering mode – meaning that web sites could appear to load more slowly, contain errors etc. that would normally not be detected if testing only on the native versions.

As Facebook and Twitter app usage continues to grow, it's going to become more and more important to understand the differences between these browser types and how users browse your sites when using these apps. In addition, when users click on your content from apps such as Facebook and Twitter, they appear as either Direct or Typed/Bookmarked, depending on which analytics package you use and whether you are using any form of campaign tracking.

When I realised that Adobe Analytics does not track in-app mobile browsers, I turned to our implementation of Google Analytics. Fortunately, GA does track in-app browsers on iOS – so you can see data for Safari and Safari (in-app).

What I have found in my initial dive into this is that users browsing our sites in an in-app version of the native browser consume less content (in terms of pages per visit), have a higher bounce rate, and stay on the site for about half as long as users of the native version.

In-app browsers vs native mobile browsers

What is going to be hard to work out is whether this is due to a behavioural or technical motivation.

If the site renders slowly, contains certain errors, or is not fully optimised for the in-app experience, you could attribute some of the reduction in usage to this.

In behavioural terms, you need to consider what users are doing. If I am in my Facebook app on my phone, it's probably while I am in the middle of doing something else – potentially on the toilet, in a meeting (hopefully my boss is not reading this) or in transit – therefore I have limited time to view.

Also, to take this point further, if I am in my Facebook app, I'm also interested in seeing what other content my friends have posted. So I might just read the article that a friend posted, then click back in the app so I return to my feed and continue my Facebook journey.

One of the main things that I tell colleagues is that analytics is great at telling you what happened. Trying to find the reasons behind a person's behaviour is not generally achievable. You can find commonalities between certain types of users, segment them, and try to implement changes to convert other users to behave in the same way. But understanding the drivers for users to act in a particular way is not something that I believe analytics will answer.

You need to be talking to your users and understanding why they are doing what they are doing – whether it be done through social media, email or surveys. You really need to have all the information to make decisions and I believe that Analytics is only one part of it in this case.

How Google’s secure search is hurting analytics

It has been happening since 2011, but ever since Google introduced the ability to use their search engine over a secure (SSL/HTTPS) connection, the visibility of search keywords has been steadily decreasing, culminating in a predicted 70% of search keywords not being reported by September 2013, according to this blog post about Google Query Data Disappearing at an Alarming Rate on the RKG Blog.

This has increased from 43% as recently as July 2013. So why is this happening, and what impact is it having on analytics?

Google has been moving users to the SSL version of their search result pages to ensure the privacy of their users, so that prying eyes cannot listen in on what they are searching for. They seem to have stepped up this tactic ever since the public was made aware of various hacks and leaks via WikiLeaks.

The impact this is having on analytics is that when a user searches for something over a secure connection, the keyword(s) that the user searched for are removed from the referral string. This means that a referral is detected from a search engine, but as the keyword has been removed, it is logged as Not Provided (or the equivalent for the relevant analytics package).

As mentioned previously, it is estimated that up to 70% of search terms will be affected by this by the end of September 2013. A huge part of the recent increase in these secure searches has come from Internet Explorer, which from IE7 onwards only supports the secure version of Google's search results.

Suddenly, the impact of any SEO improvements you make to your site cannot be tracked effectively using your standard analytics package.

One potential way of getting insight for SEO purposes could be to create an Advanced Segment that sets the referrer type to Search Engines and the keyword to unavailable. Once you apply this segment, you can look at your top landing pages and understand their performance. You could then trend this over time and validate any changes that way.

This issue affects all analytics packages, including Google Analytics.

Suddenly, SEO just got a lot harder for everyone. It's ironic really, as Google is trying to eliminate what they term ‘search spam’ from their search results pages – see this video from Matt Cutts – but they are not allowing website owners to see how these changes affect them, or how to best optimise their sites through data.

It feels like they are giving with one hand and taking with the other.

Prioritising a backlog doesn’t need to be rocket science!

The common component in most agile software development practices is the backlog – a list of user stories, often written on cards, each representing a feature or a piece of work that needs to be completed by a member of the development team.

This backlog of work needs to be prioritised – and I have seen different people do this in different ways.

I've seen these items prioritised individually in Excel by a single person, where many different factors were scored, weightings applied, and a calculation used to give a final score. This took a lot of time and a lot of toing and froing – all the while the development team were working on lower-value items. It made no sense to me.

This list is then ordered from largest value to smallest, resulting in one person's opinion of what the list should be. It did not take into account the development team's experience, or any other teams that might be impacted by the work.

Any new items would need to be scored in this fashion, debated extensively and then slotted in to the backlog.

As far as I am concerned, this is not the way to prioritise work – especially in an agile environment.

A big part of agile is collaboration, responding to change and having working software at the end of each release. Why not apply these principles to prioritisation?

Get everyone together

If someone is involved or impacted by the functionality being discussed, get them involved in the prioritisation – within reason obviously. At least have each department represented and ensure that there is one mediator and someone who can make a final decision if there is a disagreement.

Understand the goals

Set out at the start what the strategy is for your product, along with its objectives. Each user story or card needs to be constantly compared to these objectives and ranked according to how far it goes towards achieving these goals.

Relative priority

Keep things simple. Select a starting point by placing a card in the middle of a table. Discuss this card and ensure that everyone involved understands exactly what it represents and how it helps to achieve your product's strategy and objectives.

Select the next card, discuss what it is, explore its benefits, risks, amount of dev work etc, and then place it above the first card if it is more important or below it if it is less important.

Keep doing this with each card until you have a fully prioritised backlog. If it's a big backlog, you may need to split this across a couple of sessions.

Be flexible

Through discussion, you may find that items are no longer required or need further definition. You can do this in the meeting itself or separately.

Future proof

Once finished, when new items come up they can be easily prioritised, as all members of the team understand the backlog and how it is made up. You simply place the new feature in the backlog relative to everything else.

In my experience, this is by far the best way to prioritise work. Having items prioritised relative to other work gets the business to understand that they cannot have everything all at once and that decisions have to be made about what is more important. And you do not end up with an ignored spreadsheet containing multiple items with the same score.

Items can be redefined as you go, and the team as a whole has been included and feels valued. The whole team has a voice. This leads to a highly motivated team who have bought into the product and what it is trying to achieve.

YouView opens up its trial to the public

YouView, a joint venture between BBC, ITV, C4, C5, BT and TalkTalk, has today opened up its preliminary trial to the public. If you are interested, you can register via this special Trials web site: https://trials.youview.com/login

Here is some more information from the YouView site to better explain what YouView is:

A little while ago, some of the UK’s best-known Broadcasters and Broadband Internet companies got together to talk about TV and how they could make it better.

They agreed that while everyone loves great TV, it’s frustrating when you miss a great show – and it’s never quite the same on a computer. Wouldn’t it be great if you could combine the simplicity and value of Freeview with the choice and convenience of catch-up and on-demand services – all on your own TV?

So they decided to create YouView to do just that – all together in the same set top box with no fuss, no computer and, best of all, no TV contract.

You get all your favourite Freeview or Freesat channels, plus the last seven days’ catch-up TV, and for those who want even more, the choice of on demand and pay TV – like films, sports and US drama.

Our unique programme guide will go back in time, so you can watch shows from last night or last week today. You’ll also be able to search for things by title or even an actor’s name.

And that’s just the beginning.

Because YouView is open, developers can create apps that can make your TV do all sorts of things. And all kinds of providers will be able to put their content on YouView. As YouView develops there’ll be more and more content available from the Internet as well as lots of ways to personalise the way you watch.

All this with no monthly TV subscription. You’ll just need a YouView box and broadband Internet.

Brilliantly simple, and simply brilliant. YouView will be everything TV should be.

What is Validated Learning and how can it be applied to Kanban?

I've been reading The Lean Startup by Eric Ries, and he has introduced me to the concept of Validated Learning. Validated Learning is the practice of measuring the accuracy of your assumptions and using the results of the validation to understand whether each assumption was correct; if so, you continue on to the next test. If not, you decide whether your strategy, assumption or feature needs to be improved, or whether to change direction.

The main thing that I found interesting was an example of how this could be integrated with Kanban, a practice that I have been an avid user of since launching Mousebreaker.com in 2008.

The example spoke of the founder of Grockit, an online teaching site that enables students to learn either socially with other students online or individually. They considered a user story not actually delivered until it had been confirmed as a success through the measurement of Validated Learning.

In addition, the feature is aligned to the product's strategy and therefore has a set of assumptions associated with it. The process of Validated Learning either proves or disproves these assumptions.

If it is found that a feature has not been a success and has not actually improved the product, then that feature is removed.

In the example, Grockit uses A/B Testing and cohort analysis to validate the success of the feature being delivered by the development team.

A/B testing allows you to serve different versions of pages, or parts of pages, to different proportions of your users and test a metric such as registrations or visits to sections of your site – although you can measure much more than this. You can then determine which version works best and make the winner available to all users.
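A toy sketch of comparing two variants on a single metric (the figures are invented, and a real test should also check statistical significance before declaring a winner):

```python
# Invented results of an A/B test on a registration form.
variants = {
    "A": {"visitors": 1000, "registrations": 50},
    "B": {"visitors": 1000, "registrations": 65},
}

# Conversion rate per variant.
rates = {name: v["registrations"] / v["visitors"] for name, v in variants.items()}

# The variant with the highest rate would be rolled out to all users.
winner = max(rates, key=rates.get)
```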

Cohort analysis is very similar but takes place a step before the A/B Testing. You may use a number of different marketing channels using a number of different messages, for example. Each user that comes from one type of channel/message would be associated with that channel/message and then tracked throughout their lifetime on your product. This would allow you to understand the cohort group’s lifetime value for example, so that you can understand the value of those messages or that channel.
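Grouping users into cohorts this way can be sketched in a few lines (the sample data is invented):

```python
from collections import defaultdict

# Each user is tagged once with the channel that acquired them, and their
# lifetime revenue is accumulated against that channel.
users = [
    {"id": 1, "channel": "facebook", "lifetime_revenue": 12.0},
    {"id": 2, "channel": "email",    "lifetime_revenue": 30.0},
    {"id": 3, "channel": "facebook", "lifetime_revenue": 8.0},
]

totals = defaultdict(float)
counts = defaultdict(int)
for u in users:
    totals[u["channel"]] += u["lifetime_revenue"]
    counts[u["channel"]] += 1

# Average lifetime value per acquisition channel.
avg_ltv = {channel: totals[channel] / counts[channel] for channel in totals}
```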

You can use each of these methods exclusively or together, and measure against the metrics that matter to your business. What you are looking for is actionable data – something that you can make a decision on, or that helps you understand the value of a change.

I really like this concept; however, it does seem to go against the basic principles of Kanban. Kanban is an agile methodology that is used, in part, to eliminate waste during the build process of a product, through practices such as limiting work in progress and kaizen – the constant improvement of the process.

My question is whether removing a feature that is not deemed a success counts as wastage; and if it does, could it have been avoided?

Whenever I have built a web site and we have launched a new feature, we have measured and iterated based on the data collected until it has worked – a form of validated learning. But if a feature has not worked, I have seldom, if ever, removed it. My main thought is that removing it would make the time spent developing the user story a waste.

Perhaps I would have removed the feature if it had a detrimental effect on the metrics used to validate the functionality.

I would be happy to do this as long as it informed decision making in the future. This is what Eric Ries is arguing: a user story or feature being developed is actually a test of a theory or an assumption. You are assuming that this new feature will improve the product and will improve a core measurement for your business.

If it does not do this, why keep the feature live? If it does not deliver enough improvement, why keep it live? As long as the lessons learned help to change the strategy or assumptions around your business, you should be able to reduce wastage, as your strategy should be directly influenced by validated learning through the use of your product.

I really like this concept and will keep this in mind when I next get the opportunity. I will definitely be posting about it once I have some personal evidence of this in practice.

Please let me know in the comments below if you have used Validated Learning in practice and how you have found it.