
How to resolve MAU overages in Optimizely Experimentation (step-by-step)

You’ve started experimenting, and after a few weeks you notice that your Usage dashboard is showing far more Monthly Active Users (MAUs) than you expected.

This article will show you, step by step, how to diagnose the issue and resolve it. Let’s go.

What is an MAU (Monthly Active User) and how is it counted?

In Optimizely Feature Experimentation, MAUs are counted whenever a decision or tracking event is recorded for a unique user ID, such as when the Decide method is invoked or a conversion event is tracked. Decision events from disabled flags do not count towards MAUs.

Optimizely Web Experimentation counts every unique visitor who loads a page containing the Optimizely snippet, regardless of whether the snippet activates any experiments or sends any events.

In Optimizely Full Stack (legacy), MAUs are recorded through methods like activate() for A/B testing and track() for event tracking; users are counted even if they receive disabled flags.
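Conceptually, across all three products the MAU count is the size of the distinct set of user IDs seen across decision and conversion events in a billing cycle. A minimal Python sketch with made-up sample data:

```python
# Minimal sketch of how MAUs are counted: the distinct set of visitor IDs
# seen across decision events and conversion events in a billing cycle.
# The visitor lists below are illustrative, not a real Optimizely export.

decision_visitors = ["user-1", "user-2", "user-2", "user-3"]
conversion_visitors = ["user-2", "user-4"]

# A visitor appearing in both lists (or multiple times) still counts once.
mau = len(set(decision_visitors) | set(conversion_visitors))
print(mau)  # 4 unique visitors
```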

Step 1: Make sure to enable bot filtering

Having bot filtering turned on ensures Optimizely automatically disregards any user whose user-agent matches a known bot list. It is enabled by default for Optimizely Web and for Optimizely Feature Experimentation’s React and JavaScript SDKs. For other SDKs, you need to set an attribute called $opt_user_agent when creating a user context, and ensure you’ve enabled bot filtering in your Feature Experimentation project settings.
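As a hedged sketch of what that looks like in a server-side SDK: the visitor’s user agent is passed as the reserved $opt_user_agent attribute when the user context is created. The create_user_context call is stubbed out here so the snippet runs standalone; in the real Feature Experimentation SDKs it lives on the Optimizely client object.

```python
# Hedged sketch: passing the visitor's user agent as the reserved
# $opt_user_agent attribute so Optimizely's bot filtering can inspect it.
# create_user_context is a stand-in for the SDK's method of the same name.

def create_user_context(user_id, attributes):
    # Stand-in for optimizely_client.create_user_context(user_id, attributes)
    return {"user_id": user_id, "attributes": attributes}

incoming_user_agent = "Mozilla/5.0 (compatible; Googlebot/2.1)"

user = create_user_context(
    "visitor-123",
    {"$opt_user_agent": incoming_user_agent},  # reserved attribute name
)
print(user["attributes"]["$opt_user_agent"])
```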

Step 2: Run these essential queries to understand the root cause

Count MAUs in a billing cycle

SELECT
    COUNT(DISTINCT visitor_id) AS MAU
FROM (
    -- Visitors with a decision event in the billing cycle
    SELECT visitor_id
    FROM decisions
    WHERE timestamp BETWEEN TIMESTAMP '2024-08-29 00:00:00' AND TIMESTAMP '2024-09-29 00:00:00'

    UNION

    -- Visitors with a conversion event in the billing cycle
    SELECT visitor_id
    FROM events
    WHERE timestamp BETWEEN TIMESTAMP '2024-08-29 00:00:00' AND TIMESTAMP '2024-09-29 00:00:00'
) AS unique_visitors;

PS: don’t forget to replace the timestamp interval with your billing cycle.

This query should return exactly the MAU number you see in the Usage dashboard. That confirms you’ve downloaded all the data and sanity-checks that your data is ready to use.
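If you want to verify the shape of this query before running it against your exported data, the same UNION + COUNT(DISTINCT …) logic can be replicated on a tiny in-memory SQLite database (table and column names mirror the query above; the rows are sample data):

```python
import sqlite3

# Replicates the UNION + COUNT(DISTINCT ...) logic on a tiny in-memory
# SQLite database, as a sanity check of the query shape.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE decisions (visitor_id TEXT, timestamp TEXT);
    CREATE TABLE events    (visitor_id TEXT, timestamp TEXT);
    INSERT INTO decisions VALUES ('u1', '2024-09-01'), ('u2', '2024-09-02');
    INSERT INTO events    VALUES ('u2', '2024-09-03'), ('u3', '2024-09-04');
""")

mau = conn.execute("""
    SELECT COUNT(DISTINCT visitor_id)
    FROM (
        SELECT visitor_id FROM decisions
        WHERE timestamp BETWEEN '2024-08-29' AND '2024-09-29'
        UNION
        SELECT visitor_id FROM events
        WHERE timestamp BETWEEN '2024-08-29' AND '2024-09-29'
    )
""").fetchone()[0]
print(mau)  # u1, u2 and u3 -> 3 (u2 appears in both tables but counts once)
```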

Break down MAUs by client_engine

SELECT
    client_engine,
    COUNT(DISTINCT visitor_id) AS MAU
FROM (
    -- Visitors and their client_engine from decision events in the billing cycle
    SELECT visitor_id, client_engine
    FROM decisions
    WHERE timestamp BETWEEN TIMESTAMP '2024-08-29 00:00:00' AND TIMESTAMP '2024-09-29 00:00:00'

    UNION

    -- Visitors and their client_engine from conversion events in the billing cycle
    SELECT visitor_id, client_engine
    FROM events
    WHERE timestamp BETWEEN TIMESTAMP '2024-08-29 00:00:00' AND TIMESTAMP '2024-09-29 00:00:00'
) AS unique_visitors
GROUP BY client_engine;

PS: don’t forget to replace the timestamp interval with your billing cycle.

This query will tell you exactly where your MAUs are coming from: it returns each Optimizely product or SDK and how many MAUs it is generating, so you can drill down on one specific product or SDK.

Then investigate each decide or trackEvent call. Are you calling these methods for visitors who never see any experiments (i.e., counting MAUs for nothing)? Are you using decideAll to decide on experiments? Once you know where the MAU spikes are coming from, a code review is the suggested next step to find out where those MAUs are generated.

Note that the query above can return more MAUs than the MAUs reported in the usage dashboard. This is expected when breaking MAUs down: a user exposed to two client_engines shows up as 2 MAUs in this query, whereas the Optimizely dashboard counts them as 1 MAU.
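The discrepancy is easy to see in miniature. In this illustrative example, one visitor is seen via two engines, so the per-engine breakdown sums to more MAUs than the overall distinct count:

```python
from collections import defaultdict

# Why the per-client_engine breakdown can sum to more than the overall MAU:
# a visitor seen via two engines counts once overall but appears in two
# groups. Sample data below is illustrative.
events = [
    ("u1", "js"),          # visitor u1 via the Web snippet
    ("u1", "python-sdk"),  # same visitor via a server-side SDK
    ("u2", "python-sdk"),
]

per_engine = defaultdict(set)
for visitor_id, engine in events:
    per_engine[engine].add(visitor_id)

overall_mau = len({v for v, _ in events})
breakdown_total = sum(len(visitors) for visitors in per_engine.values())
print(overall_mau, breakdown_total)  # 2 overall, 3 across the breakdown
```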

Check if bot filtering is enabled

-- For Decisions table
SELECT 
    'Decisions' AS table_name, 
    COUNT(*) AS row_count
FROM decisions
CROSS JOIN UNNEST(attributes) AS t (attribute)
WHERE client_engine NOT IN ('javascript-sdk', 'js', 'react-sdk')
  AND attribute.name = '$opt_user_agent'

UNION ALL

-- For Conversions table
SELECT 
    'Conversions' AS table_name, 
    COUNT(*) AS row_count
FROM events
CROSS JOIN UNNEST(attributes) AS t (attribute)
WHERE client_engine NOT IN ('javascript-sdk', 'js', 'react-sdk')
  AND attribute.name = '$opt_user_agent';

This query returns how many events Optimizely received with the $opt_user_agent attribute set, which bot filtering requires on these SDKs. If it returns 0, you didn’t have bot filtering set up.

To enable it, see Step 1 above: “Make sure to enable bot filtering”.

Check whether you are leaking MAUs

SELECT 
    'Decisions' AS table_name, 
    COUNT(DISTINCT visitor_id) AS MAU
FROM decisions
WHERE timestamp BETWEEN TIMESTAMP '2024-10-02 00:00:00' AND TIMESTAMP '2024-10-06 00:00:00'

UNION ALL

SELECT 
    'Conversions' AS table_name, 
    COUNT(DISTINCT visitor_id) AS MAU
FROM events
WHERE timestamp BETWEEN TIMESTAMP '2024-10-02 00:00:00' AND TIMESTAMP '2024-10-06 00:00:00';

PS: don’t forget to replace the timestamp interval with your billing cycle.

Since MAUs can come from experiment decisions (the decide API) or from experiment conversions (the trackEvent API), it is often valuable to know whether a large portion of your MAUs comes from one or the other. The query above shows where MAUs are primarily coming from.

If the query shows that the majority of MAUs come from decisions, check your usage of decide API calls. You are likely sending inconsistent user IDs for the same visitor, e.g., generating a new ID on every visit.

If the query shows that the majority of MAUs come from conversions, you likely have leaked trackEvent API calls: trackEvent calls for visitors who never get bucketed into any experiment. Review your codebase to make sure you don’t call trackEvent in flows where no experiment can be activated.
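One common mitigation is to guard conversion tracking behind the decision flow, so only visitors who actually received a decision generate tracking events. A hedged sketch (the decide/track functions are stubs; in the real SDKs both live on the user context object):

```python
# Hedged sketch of avoiding "leaked" conversion events: only call
# track_event for visitors who actually received an experiment decision.

bucketed_visitors = set()

def decide(visitor_id, flag_key):
    # Stand-in for user_context.decide(flag_key); records that this
    # visitor has been bucketed (and therefore already counts as an MAU).
    bucketed_visitors.add(visitor_id)
    return {"flag_key": flag_key, "enabled": True}

tracked = []

def track_event(visitor_id, event_key):
    # Guard: skip visitors that were never bucketed into an experiment,
    # so conversions alone don't generate extra MAUs.
    if visitor_id not in bucketed_visitors:
        return
    tracked.append((visitor_id, event_key))

decide("u1", "checkout_redesign")
track_event("u1", "purchase")   # counted: u1 received a decision
track_event("u2", "purchase")   # dropped: u2 never got a decision
print(tracked)  # [('u1', 'purchase')]
```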

Step 3: Use the same userId everywhere

Using different user IDs for Optimizely Feature Experimentation (FX) and Web Experimentation can lead to double-counting Monthly Active Users (MAUs).

When a user interacts with both platforms but is assigned separate user IDs, each interaction is recorded as a unique MAU for both systems, inflating the overall count.

By default, Web uses a self-generated cookie called optimizelyEndUserId, whereas FX uses whatever user ID you pass it. So in this default setup, double-counting will occur.

To avoid this issue, it is recommended to implement a consistent user ID strategy across both products. By using the same user ID for both FX and Web, organizations can ensure accurate MAU measurement without the confusion of overcounting active users.

If you have an anonymous ID available from the first page load, use it for both Web and FX. If not, we often piggyback on your analytics tool’s user ID (for GA, that is a cookie called _ga). Note that since this cookie is generated client-side, it may not be available for Optimizely to use on the first page load.
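As a hedged illustration of that piggybacking: the _ga cookie value looks like "GA1.2.123456789.1700000000", and its last two dot-separated fields form the GA client ID, which can serve as a shared user ID. The helper below is a hypothetical sketch, not part of any Optimizely SDK:

```python
# Hedged sketch: deriving a stable user ID from the GA _ga cookie so Web
# and FX share one identifier. The _ga value has the shape
# "GA1.2.<random>.<timestamp>"; the last two fields are the GA client ID.

def user_id_from_ga_cookie(cookie_value):
    parts = cookie_value.split(".")
    if len(parts) < 4:
        return None  # cookie not set yet (e.g., first page load)
    return ".".join(parts[-2:])

print(user_id_from_ga_cookie("GA1.2.123456789.1700000000"))  # 123456789.1700000000
print(user_id_from_ga_cookie(""))  # None -> fall back to another ID source
```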

Conclusion

That’s it – these three steps should ensure you’re able to diagnose your MAU issues and align the numbers with your internal data.
