For any billion-dollar website or brand, launching an E-commerce website takes significant planning, detailed coordination between teams, thorough testing and validation, and a comprehensive roll-out schedule.
InfoTrust has had the opportunity to help numerous companies plan, test, and roll-out large E-commerce websites. Our experience tackling these projects has given us valuable insight into what works and what does not.
Below are some lessons learned from implementing Google Analytics for mega E-commerce websites.
The planning process for a mega E-commerce website is substantial and requires extensive coordination. When beginning the planning phase, you must coordinate the discussions and planning on setting up an enterprise analytics architecture not only with the business owners but also with stakeholders across the wider organization.
It’s important to get full visibility into the organization and understand the attributes and information that need to be tracked.
At InfoTrust, we conduct interviews with key stakeholders, business leads across different business units, various markets across the world, and more. We compile all of this information into documentation and then we prioritize it with whoever is the project champion.
Working with organizations as large as we do, the project champion is typically a central team at HQ or a global management center that runs all analytics or operations across global platforms. They prioritize the data to be collected and set the project timeline.
An implementation project can take anywhere from a couple of months to a year and a half, depending on several factors: how many markets, sites, platforms, and channels are involved.
Organizations that sell to consumers as well as to businesses, across websites and other platforms, will need to prioritize and create phases for the implementation. Typically, key actions and KPIs are part of the initial phase.
While still in the planning phase, we build the implementation specs so we can develop the key conversion metrics. This process amounts to working backward through the consumer journey or checkout flow, first tracking macro-conversions: completed purchases, completed signups, and the key events you use to analyze user engagement and marketing performance against business KPIs.
We then look at general shopping behavior: user interactions such as product impressions, promotion views, and discount usage on the site. These are the secondary conversions, or what we call micro-conversions (the actions leading up to the main, macro-conversions).
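As a concrete illustration, a macro-conversion such as a completed purchase is often pushed to a Google Tag Manager data layer using the Universal Analytics Enhanced Ecommerce convention. This is a minimal sketch; the transaction ID, revenue, and product values below are made up for the example.

```javascript
// Simulated page context: in the browser, GTM defines window.dataLayer.
var dataLayer = [];

// Illustrative Enhanced Ecommerce purchase push, fired on the order
// confirmation ("thank you") page.
dataLayer.push({
  event: 'purchase',
  ecommerce: {
    purchase: {
      actionField: {
        id: 'T-10001',      // transaction ID from the order system
        revenue: '42.50',   // total including tax and shipping
        currency: 'USD'
      },
      products: [{
        id: 'SKU-123',
        name: 'Example Product',
        price: '42.50',
        quantity: 1
      }]
    }
  }
});
```

A tag in GTM would then read this object and send the transaction to Google Analytics.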
From there, external campaign/media tracking setup and planning is addressed. This includes UTM parameters for Google Analytics, or a similar process to track channels and advertising, and how people actually arrive on the site or in the app. This can also include integrations between media channels and your analytics platform.
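To make UTM tagging concrete, here is a hypothetical helper that appends the standard Google Analytics UTM parameters to an outbound campaign link. The function name and the parameter values are examples of ours, not part of any Google library.

```javascript
// Hypothetical helper: tag a campaign link with UTM parameters.
function buildUtmUrl(baseUrl, params) {
  var url = new URL(baseUrl);
  Object.keys(params).forEach(function (key) {
    url.searchParams.set('utm_' + key, params[key]);
  });
  return url.toString();
}

var tagged = buildUtmUrl('https://shop.example.com/sale', {
  source: 'newsletter',    // where the traffic comes from
  medium: 'email',         // the marketing channel
  campaign: 'spring_sale'  // the specific campaign name
});
// tagged → 'https://shop.example.com/sale?utm_source=newsletter&utm_medium=email&utm_campaign=spring_sale'
```

Standardizing how these parameters are generated across markets keeps channel reporting consistent in the global roll-up.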
Many people think the implementation is the most difficult phase, with many technical challenges and a long backlog, but honestly, it’s one of the easiest because the work itself is straightforward.
At this step, it is very important to have a good partner or a strong understanding of the technologies and the deployment architecture. This will help ensure everything is uniform and standardized across markets, which allows for the use of global roll-up reporting. Then you can deploy it in a very uniform and sustainable way.
We have created several resources to help you learn more about how we use Enhanced Ecommerce reports in Google Analytics to implement and manage large E-commerce websites.
We typically recommend getting the key conversions and KPIs tracked first before all other non-essential or supplementary actions are configured. For an E-commerce business, it would start with the completed purchase and then move backward through the checkout funnel and finally through other user interactions or user shopping behavior.
However, if you have multiple platforms and multiple tools, we recommend implementing the analytics across all platforms at the same time for each type of tracking (one KPI or macro-conversion for all platforms at once). For example, ensure the highest-value conversions, such as purchases and account registrations, are tracked on your mobile apps and your web platforms at the same time; this way you can immediately compare platforms and markets instead of analyzing each separately based on what has or has not been deployed.
Because of the roll-up capabilities that can be implemented with any analytics platform, particularly with Google Analytics 360, deploying across digital properties allows you to test, manage and do some reconciliation with the backend order or tracking system across all of your platforms.
Consider the following scenario: you implement analytics first for websites within a certain group of markets, then implement mobile app conversion and order tracking a few months later. In this case, you could assess how the web tracking is working and reconcile discrepancies against the backend for web orders, but you will not have a complete picture until the mobile apps are deployed.
From a deployment and resource perspective, breaking up the implementation across platforms makes sense if you have certain development/release cycles or your team is segregated or segmented that way, resulting in an implementation that cannot be completed at the same time. You might have app developers and web developers with different priorities and speed to completion, but we still recommend launching all of the same metrics together. So, for example, implement all the order tracking together across all your platforms. Then tackle the checkout funnel across all of your platforms.
We have learned from working with multiple markets, brands, and website/app versions that some things might be localized for different languages or countries, so there might be slight variations to the checkout funnel. In this case, we recommend a two-step checkout funnel analytics approach: a uniform, global checkout funnel with only two steps, entering the checkout and completing the purchase.
This simplified, two-step approach will show whether there are drop-offs between entering the checkout at any step and actually finishing a purchase. At the local level, you can have separate dimensions or attributes that allow you to build a custom funnel or custom user journeys based on the localized flow.
In fact, we did just this for one of our clients! First, we set up the standard Google Analytics checkout funnel, which has two steps.
This allowed us to create custom funnel reports and visualizations in either Google Analytics or Google Data Studio – or any other visualization platform – to show per market, per brand, or per country. We could even show per language setting or per platform, while still maintaining the global roll-up view.
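A minimal sketch of how the two global steps might look in a Google Tag Manager data layer, using the Universal Analytics Enhanced Ecommerce convention. The `localCheckoutStep` field is a hypothetical custom dimension of ours carrying the market-specific step name, so local teams can rebuild their own funnel from it.

```javascript
// Simulated GTM data layer (window.dataLayer in the browser).
var dataLayer = [];

// Global funnel, step 1: the user enters checkout. The hypothetical
// 'localCheckoutStep' dimension records the localized step name.
dataLayer.push({
  event: 'checkout',
  localCheckoutStep: 'zahlungsart', // e.g. the German market's payment step
  ecommerce: {
    checkout: {
      actionField: { step: 1 },
      products: [{ id: 'SKU-123', quantity: 1 }]
    }
  }
});

// Global funnel, step 2: the purchase completes, fired on the order
// confirmation page regardless of how many local steps came in between.
dataLayer.push({
  event: 'checkout',
  ecommerce: {
    checkout: { actionField: { step: 2 } }
  }
});
```

Because every market maps its local flow onto the same two global steps, the roll-up funnel report stays comparable across countries.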
Another key consideration is tracking attributes such as product name across a multi-market, multi-brand organization. You might have some products sold in multiple countries and multiple currencies. If you are selling a product in Germany, it might have a German name on that website while having an English name on the US website.
When you are tracking the data together in that global roll-up to get cross-brand, cross-department analysis, it will look like two separate products because they are listed in two different languages. The lesson learned here is this: for every single product, track both the local product name and attributes as well as the international, global roll-up values. The global values would typically be in English, since that’s the most common language globally. From an HQ perspective, one common, uniform nomenclature is ideal.
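One way to carry both values is to keep the localized name in the standard product `name` field and put the global English name in a product-scoped custom dimension. A sketch, assuming Enhanced Ecommerce and an arbitrarily chosen dimension slot (`dimension10` here is our assumption, not a fixed Google convention):

```javascript
// Simulated GTM data layer (window.dataLayer in the browser).
var dataLayer = [];

// Product detail view on the German site: localized name plus an
// assumed custom dimension holding the uniform English roll-up name.
dataLayer.push({
  event: 'productDetailView',
  ecommerce: {
    detail: {
      products: [{
        id: 'SKU-123',                       // same SKU in every market
        name: 'Laufschuhe Herren',           // localized name shown on site
        dimension10: "Men's Running Shoes"   // global English name for HQ roll-up
      }]
    }
  }
});
```

Local reports can then use `name`, while HQ reports group by the global dimension so the same product is not split across languages.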
Making sure your backend systems are uniform is also very important. If local markets, teams, and brands have the flexibility to change how different products or pages are displayed on the website or app for their market or region, then roll-up reporting is going to be very difficult.
This is another instance where you would keep the local tracking, dimensions, and attributes, but also have a second tracking layer at the global scale: one that is uniform, with the same language and structure, allowing cross-region analysis in a single report.
It is extremely important to allow plenty of time for testing. If it takes a week or two to deploy one feature or one tracking element, we suggest giving at least half of that time or more to test, validate, and confirm. Test not just in staging on the site or in the app directly (with tools like GA Debugger, Google Tag Assistant, or Charles Proxy), but also in Google Analytics itself. And check not just real-time reports, but the reports that ultimately get populated.
Testing across all areas is the best way to ensure that once it goes live, you’ll have clean data. Of course, it’s also worth noting that once you take things live to production, sometimes things come up that you did not anticipate.
Some attributes suddenly appear, or the values are not what you expected because local markets modified them for their local sites or regional digital platforms. Sometimes things break in production that were stable in staging.
This is why it’s worth building in a buffer when you deploy. Leave time to review in production as well, rather than immediately moving on to the next thing.
Also allow for fixes to be made after you go live, especially for big releases with a lot of new tracking and new elements being deployed. In our experience, we usually test in phases, deploy in phases and validate in phases, but we also have an ongoing data validation tool because it’s just not feasible to have an army of QA testers constantly testing pre-existing implementations.
Imagine that you deploy e-commerce order tracking early in the year and then later do checkout funnel tracking and later still do more shopping behavior, such as add to cart and product detail views. At the middle to end of the year, you do not want to manually test the e-commerce purchase tracking that you set up early in the year.
Setting up an automated testing platform or testing tool to confirm, no matter what releases or what changes have happened, that everything is working properly means you can be confident the data quality persists and all the attributes are tracked when an order is completed.
We have our own tool, Tag Inspector Realtime, that monitors tracking and alerts us immediately if something breaks or any attributes go missing, all the way down to individual values on individual actions.
For example, on an add to cart click, if we suddenly see product ID or product name missing then we get an alert. Or if on the thank you page, we see transaction ID is missing or the revenue is corrupted due to a recent change that happened on the site through a release, we’ll get alerted immediately.
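The core of such a check can be sketched very simply: given a captured tracking payload, report any required attributes that are missing or empty. This is an illustration of the idea only; the function and field names are our assumptions, not Tag Inspector’s actual rule format.

```javascript
// Return the names of required fields that are missing or empty
// in a captured tracking payload.
function findMissingFields(payload, requiredFields) {
  return requiredFields.filter(function (field) {
    var value = payload[field];
    return value === undefined || value === null || value === '';
  });
}

// Example: a purchase hit captured from the site where revenue came
// through empty after a site release.
var capturedHit = { transactionId: 'T-10001', revenue: '', productId: 'SKU-123' };
var problems = findMissingFields(capturedHit, ['transactionId', 'revenue', 'productId']);
// problems → ['revenue'], which would trigger an alert
```

Running checks like this continuously against live traffic is what lets you catch a broken release without an army of manual QA testers.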
What we have learned is that before you deploy any changes to production, you must make sure you have alerting tools properly in place. Google Analytics has some built-in alerting functionality, but we recommend also having a separate tool for testing and validation.
Once you start collecting data, it’s good to let it accumulate for a few months before you actually expose it to people. It is important to restrict user access until you have enough historical data accumulated and you have clearly communicated the date from which to begin analyzing. If you give people access too early and they start doing time-range comparisons without knowing that date, they’ll see tremendous growth. Why? Because there is no data in the earlier time periods!
If you’re launching your systems January 1st or you plan to give people access on January 1st, the goal should be to have at least three months’ data or more available for analysis. Our recommendation for data collection is a full year in advance so that people can do quarter over quarter or month over month analysis.
It’s difficult, we understand, to do year over year analysis when you’re switching systems, implementing a new system, or cleaning up analytics tracking. That said, giving it enough time will be better for long-term analysis and recommendations.
Now for the last component: Roll-out, making sure your e-commerce tracking is shared across the organization, and that the data is actually being activated. This comes down to process, communication and really providing your organization the resources to take advantage of all the analytics configuration that you worked so hard to implement.
What we’ve done in the past is a three-month training series with multiple training sessions per week for different zones, markets, or teams. These training sessions are tailored, recorded, documented, and then uploaded to an internal portal or education hub for people to access when needed. The lesson learned here is to offer the same training session at multiple times so everyone has a chance to join.
Even if you record the training sessions, people are still more comfortable joining a live training so they can ask questions. Watching recordings is not ideal, but it is still good to have as a backup, and recorded training is helpful for bringing new team members up to speed. Consider additional training throughout the year, or create a monthly newsletter sharing quick tips and tricks or how to use different reports. The main thing is to communicate that training is accessible.
When you use Google Analytics, for example, the data is extremely accessible. It’s good to encourage people to explore, but also give them direct links to what they should be using. Giving direct access to reports and insights will be enough for markets and teams to get started and perform the analyses they need to improve their digital marketing efforts.
Want to learn more about Google Analytics for large eCommerce sites? Contact your InfoTrust Consultant today!