Whether you’re debugging an experiment, conducting QA testing, or demoing specific variations to stakeholders, you’ll inevitably need to override Optimizely’s default bucketing behavior.
The challenge? Web and Feature Experimentation handle variation overrides differently, and with the documentation scattered across multiple sources, it can be hard to know which method to use when.
This comprehensive guide covers every scenario where you might need to override variation assignments, with practical examples and implementation details for both Optimizely products.
By the end, you’ll have a complete toolkit for controlling user assignment during testing, QA, and demonstration scenarios.
Understanding variation override scenarios
Before diving into implementation details, let’s establish the common scenarios where overriding variation assignments becomes essential for successful experiment management.
QA testing requirements
Quality assurance engineers need to validate that each variation functions correctly before experiments reach production traffic. This requires the ability to force specific variations regardless of targeting conditions or traffic allocation percentages.
Common QA scenarios include:
- Testing conversion tracking for each variation
- Validating UI changes across different user segments
- Ensuring feature flags behave correctly in edge cases
- Verifying analytics events fire properly for each variation
Stakeholder demos and previews
Product managers and executives often need to preview specific variations without meeting targeting criteria. This is especially important when demonstrating potential changes that might only target specific user segments or geographic regions.
Demo scenarios require:
- Consistent variation display across multiple sessions
- Ability to switch between variations during presentations
- Access to draft experiments before they go live
- Bypassing audience conditions that stakeholders don’t meet
Debug and troubleshooting
When experiments behave unexpectedly, developers need isolation tools to identify whether issues stem from specific variations, targeting logic, or implementation problems.
Debugging often involves:
- Isolating variation-specific JavaScript errors
- Testing edge cases in variation behavior
- Comparing baseline vs treatment under identical conditions
- Validating experiment logic with known test conditions
Web Experimentation override methods
Optimizely Web Experimentation provides several methods for overriding default bucketing behavior, from simple URL parameters to programmatic JavaScript API control.
Force variation query parameters
The simplest method for forcing variations in Web Experimentation uses URL query parameters. This approach works immediately without code changes and is perfect for QA testing and stakeholder demos.
Basic syntax:
https://yoursite.com/?optimizely_x=VARIATION_ID
To force multiple variations across different experiments, use comma-separated variation IDs:
https://yoursite.com/?optimizely_x=1234567890,9876543210
You can also force yourself into specific audiences using the audience parameter:
https://yoursite.com/?optimizely_x_audiences=AUDIENCE_ID
For full details, see "Debug variations by forcing behavior with query parameters" in the Optimizely documentation.
Important limitations:
- Forcing variations disables all other experiments running on the page
- You must still meet URL targeting and activation conditions
- Variation mapping persists in localStorage for subsequent visits
Note: Force parameters are disabled by default for privacy. Enable them in Settings > Implementation > Privacy.
Preview tools and share links
Optimizely’s built-in preview tools generate shareable links that include the necessary parameters automatically. Access these through the experiment’s “Preview” menu in the Optimizely application.
Preview links provide:
- Automatic parameter generation
- Consistent sharing across team members
- Direct access without manual URL construction
- Integration with draft experiment previews
JavaScript API override methods
For programmatic control, Web Experimentation provides JavaScript API methods to check experiment state and variation assignments. While these methods don’t directly force variations, they enable conditional logic based on current assignments.
// Check if user is in specific variation
if (window.optimizely && window.optimizely.get) {
  const activeExperiments = window.optimizely.get('state').getActiveExperimentIds();
  const variationMap = window.optimizely.get('state').getVariationMap();

  // Log current experiment state
  console.log('Active experiments:', activeExperiments);
  console.log('Variation assignments:', variationMap);
}
The JavaScript API is particularly useful for:
- Logging current experiment state for debugging
- Implementing custom analytics based on variation assignments
- Creating conditional logic that responds to experiment participation (see the sketch below)
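For example, here's a minimal sketch of variation-conditional logic. The experiment ID ('12345678') and variation ID ('987') are placeholders, so substitute your own values:

// Run custom logic only when the visitor is bucketed into a specific variation
if (window.optimizely && typeof window.optimizely.get === 'function') {
  const variationMap = window.optimizely.get('state').getVariationMap();
  const assignment = variationMap['12345678']; // placeholder experiment ID

  if (assignment && assignment.id === '987') { // placeholder variation ID
    // e.g., tag the session in your own analytics tool here
    console.log('Visitor is in variation:', assignment.name);
  }
}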
Draft and paused experiment access
To preview draft or paused experiments, combine the force variation parameter with the public token:
https://yoursite.com/?optimizely_x=VARIATION_ID&optimizely_token=PUBLIC
This approach enables stakeholder previews of experiments before they go live, but requires enabling specific privacy settings in your Optimizely project configuration (see "Debug variations by forcing behavior with query parameters" in the Optimizely documentation).
Feature Experimentation override methods
Feature Experimentation provides more sophisticated override mechanisms through SDK-based methods, offering greater precision and control over user assignment behavior.
Forced decision methods deep dive
The Forced Decision API provides the most powerful method for overriding variation assignments in Feature Experimentation. These methods let you force specific users into specific variations regardless of audience conditions and traffic allocation.
Complete implementation example:
const { createInstance } = require('@optimizely/optimizely-sdk');

const optimizely = createInstance({
  sdkKey: 'YOUR_SDK_KEY'
});

if (optimizely) {
  optimizely.onReady().then(({ success, reason }) => {
    if (!success) {
      throw new Error(reason);
    }

    const user = optimizely.createUserContext('qa_user_123', {
      logged_in: true,
      environment: 'staging'
    });

    // Force user into specific variation for QA testing
    const result = user.setForcedDecision(
      { flagKey: 'checkout_flow_experiment' },
      { variationKey: 'new_checkout_design' }
    );

    // Make decision - will return forced variation
    const decision = user.decide('checkout_flow_experiment');
    console.log('Forced variation:', decision.variationKey);

    // Remove forced decision after testing
    user.removeForcedDecision({ flagKey: 'checkout_flow_experiment' });

    // Or remove all forced decisions
    user.removeAllForcedDecisions();
  });
}
The example above uses the JavaScript SDK; the implementation is similar across SDKs (see, for example, "Forced Decision methods for the Java SDK" in the Optimizely documentation).
Advanced forced decision scenarios:
// Force decision for specific experiment rule
user.setForcedDecision(
  { flagKey: 'feature_flag', ruleKey: 'ab_test_experiment' },
  { variationKey: 'treatment_b' }
);

// Force decision for delivery rule (feature flags)
user.setForcedDecision(
  { flagKey: 'feature_flag', ruleKey: 'delivery_rule' },
  { variationKey: 'feature_enabled' }
);
User allowlisting configuration
Allowlisting provides a UI-based method for QA testing by configuring specific users to bypass targeting conditions. This method supports up to 50 users per experiment and is ideal for consistent stakeholder access.
Allowlisting workflow:
1. Navigate to your experiment in the Optimizely application
2. Access the "Allowlist" section under experiment settings
3. Add user IDs that should bypass normal targeting
4. Assign specific variations to each allowlisted user
5. Save the configuration and deploy the experiment
Allowlisting is particularly effective for QA teams because:
- No code changes required
- Consistent behavior across sessions
- Easy management through Optimizely UI
- Supports multiple QA environments
According to the documentation, allowlisted users "bypass audience targeting and traffic allocation to see the chosen variation," while non-allowlisted users must pass normal targeting criteria (see "Allowlisting in Feature Experimentation").
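As a quick sanity check, you can confirm an allowlisted assignment through the SDK. Here's a minimal sketch, assuming a user ID qa_user_1 that was allowlisted in the UI and mapped to a treatment variation of checkout_flow_experiment (all three names are placeholders):

// Verify that an allowlisted user receives the variation configured in the UI
const allowlistedUser = optimizely.createUserContext('qa_user_1');
const decision = allowlistedUser.decide('checkout_flow_experiment');

if (decision.variationKey === 'treatment') {
  console.log('Allowlist is working as configured');
} else {
  console.warn('Unexpected variation:', decision.variationKey);
}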
Custom bucketing IDs
Bucketing IDs enable advanced scenarios where you want to decouple user identification from bucketing logic. This is particularly useful for shared devices or when testing specific user scenarios.
Implementation example:
// Create user context with custom bucketing ID
const user = optimizely.createUserContext('actual_user_id', {
  // Use device ID for bucketing instead of user ID
  '$opt_bucketing_id': 'shared_device_123',
  device_type: 'apple_tv',
  location: 'living_room'
});

// All users on this device will see the same variation
const decision = user.decide('shared_device_experiment');
Use cases for custom bucketing IDs include:
- Shared device experimentation (smart TVs, kiosks)
- Household-level experiments
- QA testing with consistent device behavior
- Geographic or location-based consistency
For more detail, see "Assign variations with bucketing IDs" in the Optimizely documentation.
User profile service integration
User Profile Service provides sticky bucketing that persists variation assignments across sessions. While primarily used for production consistency, it can be leveraged for QA scenarios requiring persistent assignments.
Basic implementation:
const userProfileService = {
  lookup: (userId) => {
    // Return stored profile or null
    const profile = getStoredProfile(userId);
    return profile;
  },
  save: (userProfile) => {
    // Persist the user profile
    storeProfile(userProfile);
  }
};

const optimizely = createInstance({
  sdkKey: 'YOUR_SDK_KEY',
  userProfileService: userProfileService
});
The documentation notes that the User Profile Service "will override Optimizely Feature Experimentation's default bucketing behavior in cases when an experiment assignment has been saved" (see "User profile service in Optimizely Feature Experimentation").
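The getStoredProfile and storeProfile helpers above are placeholders for your own storage layer. For local QA, a simple in-memory Map works. Here's a minimal sketch (an illustration, not a production implementation):

// In-memory profile store for local testing.
// Profiles use the documented shape:
// { user_id, experiment_bucket_map: { [experimentId]: { variation_id } } }
const profiles = new Map();

const inMemoryUserProfileService = {
  lookup: (userId) => profiles.get(userId) || null,
  save: (userProfile) => {
    profiles.set(userProfile.user_id, userProfile);
  }
};

Pass inMemoryUserProfileService to createInstance exactly as shown in the snippet above.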
Bucketing hierarchy and precedence rules
Understanding how Optimizely evaluates competing override methods is crucial for avoiding conflicts and ensuring predictable behavior across testing environments.
Override method evaluation order
Feature Experimentation follows a strict hierarchy when multiple override methods are present. According to the official bucketing documentation, the complete evaluation order is:
1. Forced Decision methods (setForcedDecision)
2. User allowlisting (UI-configured overrides)
3. User Profile Service (sticky bucketing persistence)
4. Audience targeting (conditional logic evaluation)
5. Exclusion groups (mutual exclusion rules)
6. Traffic allocation (percentage-based distribution)
(Source: "How bucketing works in Optimizely Feature Experimentation" in the Optimizely documentation.)
The critical rule: "If there is a conflict over how a user should be bucketed, then the first user-bucketing method to be evaluated overrides any conflicting method" (see "Allowlisting in Feature Experimentation").
Conflict resolution strategies
When multiple override methods are active, Optimizely resolves conflicts by applying the highest-priority method and ignoring lower-priority conflicting rules.
Example conflict scenarios:
// Scenario 1: Forced Decision vs Allowlist
// Forced Decision wins - user sees 'treatment_a'
user.setForcedDecision(
  { flagKey: 'experiment' },
  { variationKey: 'treatment_a' }
);
// Allowlist configured for 'treatment_b' is ignored

// Scenario 2: Allowlist vs User Profile Service
// Allowlist wins - user sees allowlisted variation
// Previous sticky assignment from UPS is ignored

// Scenario 3: User Profile Service vs Audience Targeting
// UPS wins - user sees previously assigned variation
// Even if user no longer meets audience criteria
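Scenario 1 is easy to verify in code. Here's a minimal sketch, assuming an 'experiment' flag with 'treatment_a' and 'treatment_b' variations and a user who is allowlisted for 'treatment_b' (all names are placeholders):

// With both an allowlist entry (treatment_b) and a forced decision
// (treatment_a) active, the forced decision wins
const testUser = optimizely.createUserContext('allowlisted_user');

testUser.setForcedDecision(
  { flagKey: 'experiment' },
  { variationKey: 'treatment_a' }
);
console.log(testUser.decide('experiment').variationKey); // 'treatment_a'

// Remove the forced decision and the allowlist takes over
testUser.removeForcedDecision({ flagKey: 'experiment' });
console.log(testUser.decide('experiment').variationKey); // 'treatment_b'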
Best practices for multiple overrides
To maintain predictable behavior and avoid conflicts:
- Document all active override methods – Maintain a QA checklist of forced decisions, allowlists, and UPS configurations
- Use environment-specific approaches – Apply forced decisions in staging, allowlisting in QA, and clean UPS in production
- Clean up after testing – Remove forced decisions and reset allowlists before production launches
- Test override precedence – Verify that the intended override method takes precedence
Implementation examples and code snippets
Here are practical, copy-paste examples for common override scenarios across both Optimizely products.
Web Experimentation code examples
QA Testing with URL Parameters:
// Generate QA testing URLs programmatically
function generateQAUrls(baseUrl, experimentConfig) {
  const qaUrls = {};

  experimentConfig.variations.forEach(variation => {
    const qaUrl = `${baseUrl}?optimizely_x=${variation.id}`;
    qaUrls[variation.name] = qaUrl;
  });

  return qaUrls;
}

// Usage example
const qaUrls = generateQAUrls('https://mysite.com/checkout', {
  variations: [
    { id: '1234567890', name: 'baseline' },
    { id: '9876543210', name: 'new_design' }
  ]
});

console.log(qaUrls);
// Output:
// {
//   baseline: "https://mysite.com/checkout?optimizely_x=1234567890",
//   new_design: "https://mysite.com/checkout?optimizely_x=9876543210"
// }
Debugging with JavaScript API:
// Comprehensive experiment state logging
function debugOptimizelyState() {
if (typeof window.optimizely === 'undefined') {
console.log('Optimizely not loaded');
return;
}
const state = {
activeExperiments: window.optimizely.get('activeExperiments'),
variationMap: window.optimizely.get('variationMap'),
visitor: window.optimizely.get('visitor'),
visitorId: window.optimizely.get('visitorId')
};
console.table(state);
// Check specific experiment participation
const experimentId = '12345678';
const isActive = state.activeExperiments.includes(experimentId);
const variation = state.variationMap[experimentId];
console.log(`Experiment ${experimentId}:`, {
active: isActive,
variation: variation
});
}
// Run after page load
window.addEventListener('load', debugOptimizelyState);
Feature Experimentation SDK examples
Comprehensive Forced Decision Testing:
class ExperimentTester {
  constructor(optimizelyClient) {
    this.client = optimizelyClient;
    this.testUsers = new Map();
  }

  // Create test user with forced decisions
  createTestUser(userId, forcedDecisions = []) {
    const user = this.client.createUserContext(userId, {
      environment: 'qa',
      tester: true
    });

    // Apply all forced decisions
    forcedDecisions.forEach(({ flagKey, ruleKey, variationKey }) => {
      const context = ruleKey ? { flagKey, ruleKey } : { flagKey };
      user.setForcedDecision(context, { variationKey });
    });

    this.testUsers.set(userId, user);
    return user;
  }

  // Test all variations of a flag
  testAllVariations(flagKey, variations) {
    const results = {};

    variations.forEach(variationKey => {
      const testUserId = `qa_${flagKey}_${variationKey}_${Date.now()}`;
      const user = this.createTestUser(testUserId, [
        { flagKey, variationKey }
      ]);

      const decision = user.decide(flagKey);
      results[variationKey] = {
        userId: testUserId,
        decision: decision,
        variableValues: decision.variables
      };
    });

    return results;
  }

  // Cleanup all test users
  cleanup() {
    this.testUsers.forEach(user => {
      user.removeAllForcedDecisions();
    });
    this.testUsers.clear();
  }
}

// Usage example
const tester = new ExperimentTester(optimizelyClient);

// Test checkout experiment variations
const checkoutResults = tester.testAllVariations('checkout_flow', [
  'baseline',
  'express_checkout',
  'guest_checkout'
]);

console.log('Checkout experiment test results:', checkoutResults);

// Cleanup when done
tester.cleanup();
Environment-Specific Configuration:
// Environment-aware override configuration
class EnvironmentOverrides {
constructor(environment) {
this.environment = environment;
this.overrides = this.getEnvironmentOverrides();
}
getEnvironmentOverrides() {
switch (this.environment) {
case 'development':
return {
forceAllExperiments: true,
defaultVariation: 'treatment',
enableLogging: true
};
case 'staging':
return {
useAllowlisting: true,
qaUserIds: ['qa_user_1', 'qa_user_2', 'product_manager'],
enableLogging: true
};
case 'production':
return {
useUserProfileService: true,
enableLogging: false,
strictTargeting: true
};
default:
return {};
}
}
applyOverrides(userContext, flagKey) {
const config = this.overrides;
if (config.forceAllExperiments) {
userContext.setForcedDecision(
{ flagKey },
{ variationKey: config.defaultVariation }
);
}
if (config.enableLogging) {
const decision = userContext.decide(flagKey);
console.log(`Flag ${flagKey} decision:`, decision);
}
return userContext;
}
}
// Initialize based on environment
const env = process.env.NODE_ENV || 'development';
const overrides = new EnvironmentOverrides(env);
// Apply to user context
const user = optimizelyClient.createUserContext('user_123');
overrides.applyOverrides(user, 'feature_flag_name');
Testing and QA workflows
Establishing systematic workflows ensures that override methods support rather than complicate your testing processes.
QA environment setup
Structure your QA environments to isolate override testing from production data and ensure consistent, repeatable test conditions.
Environment separation strategy:
- Development environment – Use forced decisions for rapid iteration and debugging
- Staging environment – Configure allowlisting for stakeholder demos and comprehensive QA
- Pre-production environment – Test with production-like targeting and minimal overrides
- Production environment – Use only User Profile Service for legitimate sticky bucketing
Automated testing integration
Incorporate override methods into automated test suites to ensure consistent variation testing in CI/CD pipelines.
Jest test example:
// Example using Jest for automated variation testing
const { createInstance } = require('@optimizely/optimizely-sdk');

describe('Checkout Flow Experiment', () => {
  let optimizelyClient;
  let testUser;

  beforeEach(async () => {
    optimizelyClient = createInstance({
      sdkKey: process.env.OPTIMIZELY_SDK_KEY
    });

    await optimizelyClient.onReady();

    testUser = optimizelyClient.createUserContext('test_user', {
      environment: 'test'
    });
  });

  afterEach(() => {
    testUser.removeAllForcedDecisions();
  });

  test('baseline variation shows standard checkout', () => {
    testUser.setForcedDecision(
      { flagKey: 'checkout_flow' },
      { variationKey: 'baseline' }
    );

    const decision = testUser.decide('checkout_flow');

    expect(decision.variationKey).toBe('baseline');
    expect(decision.variables.showExpressCheckout).toBe(false);
  });

  test('express variation shows simplified checkout', () => {
    testUser.setForcedDecision(
      { flagKey: 'checkout_flow' },
      { variationKey: 'express_checkout' }
    );

    const decision = testUser.decide('checkout_flow');

    expect(decision.variationKey).toBe('express_checkout');
    expect(decision.variables.showExpressCheckout).toBe(true);
  });
});
Next steps
You now have a complete toolkit for overriding variation assignments across both Web and Feature Experimentation. The key to success lies in choosing the right method for each scenario:
- For quick QA testing: Use URL parameters (Web) or forced decisions (Feature)
- For stakeholder demos: Use preview tools (Web) or allowlisting (Feature)
- For debugging: Use JavaScript API (Web) or forced decisions with logging (Feature)
- For production consistency: Use User Profile Service across both products
Remember to always document your override configurations, clean up after testing, and verify that production experiments run with intended targeting settings.
What override challenges have you encountered in your experimentation workflows? Are there specific scenarios where you’ve needed creative solutions beyond these standard methods? Share your experiences in the comments below!