
Experiments

This document describes how A/B and multi-variant experiments are implemented across our products. It is not a guide to using the experimentation platform itself.

Description

Currently we use Optimizely for running experiments across our products.

https://app.optimizely.com/v2/projects/4257410043

Optimizely is owned by the UX team, who control access to the platform. Experiment results can be found in the Experiment Hub, also owned by the UX team:

https://cancerresearchuko365.sharepoint.com/sites/TECH-experimentationhub

At the moment, because Optimizely is considered essential for feature switching rather than performance tracking, it is not blocked by OneTrust cookie settings. This may change in the future.

Optimizely can be implemented in two different ways:

1. Legacy Support

Each app loads the Optimizely agent in a script tag:

<script src="https://cdn-pci.optimizely.com/js/4145825987.js"></script>

The contents of this bundle change depending on how many experiments are running. All variation and targeting is controlled from the Optimizely platform, which uses JS and CSS injection to modify the application while experiments are running. If you are implementing a Content Security Policy, you will need to make allowances for Optimizely to function.
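
The sketch below shows the kind of CSP allowances involved, assuming the bundle host from the script tag above; the event-logging host (logx.optimizely.com) and any other directives are assumptions and should be confirmed against Optimizely's own CSP guidance before use.

// Minimal sketch of CSP directives that allow the Optimizely Web snippet to run.
// Hosts and 'unsafe-*' allowances here are illustrative, not a definitive policy.
const contentSecurityPolicy = [
  "default-src 'self'",
  // Allow the Optimizely agent bundle to load (host taken from the script tag above).
  "script-src 'self' https://cdn-pci.optimizely.com",
  // Optimizely Web injects CSS as part of experiment variations.
  "style-src 'self' 'unsafe-inline'",
  // Allow experiment/event beacons back to Optimizely (assumed host, verify).
  "connect-src 'self' https://logx.optimizely.com",
].join("; ");

// Set this value on the Content-Security-Policy response header in whatever
// server or framework the application uses.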

2. Optimizely Rollouts / Fullstack

The main concept here is flag-based feature switching and experimentation. This can happen both server side and client side: the experiment variations live in the codebase, but the experiment bucketing and triggering is controlled by Optimizely.

Using Optimizely's JavaScript or React SDK, the client is initialised with our account ID and a user ID. If you don't have a logged-in user ID you will have to create one; the example below generates a random ID (a library such as https://www.npmjs.com/package/uniqid can be used) and saves it to local storage.

import { FC } from "react";
import { OptimizelyProvider, ReactSDKClient } from "@optimizely/react-sdk";
// getOptimizelyInstance, useFundraiserContext, useLocalStorage,
// useLayoutEffectBrowser, generateUniqueId and isBrowser are app-level helpers.

export const OptimizelyWrapper: FC<{}> = ({ children }) => {
  const client: ReactSDKClient = getOptimizelyInstance();
  const [fundraiserState] = useFundraiserContext();
  const { fundraiser } = fundraiserState;
  const [optimizelyId, setOptimizelyId] = useLocalStorage(
    "optimizely",
    "logged_out",
  );

  // Logged-out users get a random, persisted ID so bucketing stays stable across visits.
  useLayoutEffectBrowser(() => {
    if (optimizelyId === "logged_out" && !fundraiser?.uniqueId)
      setOptimizelyId(generateUniqueId());
  }, []);

  // Prefer the logged-in user's ID; fall back to the generated one.
  const userID = fundraiser?.uniqueId || optimizelyId;

  return (
    <>
      {client ? (
        <OptimizelyProvider
          optimizely={client}
          user={{
            id: userID,
            attributes: {
              user_id: userID,
              is_logged_in: !!fundraiser?.uniqueId,
            },
          }}
          isServerSide={!isBrowser}
        >
          {children}
        </OptimizelyProvider>
      ) : (
        <>{children}</>
      )}
    </>
  );
};

export default OptimizelyWrapper;
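
getOptimizelyInstance and generateUniqueId are app-level helpers and are not shown above. A minimal sketch of what they might look like, assuming the React SDK's createInstance and the uniqid library mentioned earlier (the SDK key is a placeholder):

import { createInstance, ReactSDKClient } from "@optimizely/react-sdk";
import uniqid from "uniqid";

// Placeholder; the real value comes from the Optimizely project settings.
const OPTIMIZELY_SDK_KEY = "<sdk-key>";

let instance: ReactSDKClient | undefined;

// Lazily create a single shared Optimizely client for the app.
export const getOptimizelyInstance = (): ReactSDKClient => {
  if (!instance) {
    instance = createInstance({ sdkKey: OPTIMIZELY_SDK_KEY });
  }
  return instance;
};

// Random visitor ID for logged-out users; the wrapper persists it to local storage.
export const generateUniqueId = (): string => uniqid();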

The provider wraps the application:

<OptimizelyWrapper>
  <App />
</OptimizelyWrapper>

Then, in the component running the experiment, you can use the decision hook as below:

import { useDecision } from "@optimizely/react-sdk";
// Text is a shared UI component from the app's component library.

const HomePage = () => {
  const [decision] = useDecision("flag_name_for_experiment");

  return (
    <>
      {decision.enabled ? (
        <Text>{`Feature enabled!`}</Text>
      ) : (
        <Text>{`Feature NOT enabled!`}</Text>
      )}
    </>
  );
};
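
For multi-variant experiments the same hook exposes which variation the user was bucketed into via decision.variationKey. A sketch, assuming a flag configured with variation keys "variation_a" and "variation_b" (the flag and variation names here are illustrative):

import { useDecision } from "@optimizely/react-sdk";

const HeroBanner = () => {
  // variationKey identifies which variation this user was bucketed into.
  const [decision] = useDecision("flag_name_for_experiment");

  switch (decision.variationKey) {
    case "variation_a":
      return <Text>Variant A copy</Text>;
    case "variation_b":
      return <Text>Variant B copy</Text>;
    default:
      // Flag off or user not bucketed: fall back to the control experience.
      return <Text>Control copy</Text>;
  }
};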

Rationale

The whole point of running experiments is to validate our designs and user experience, and to maximise value for the business.

Flag-based testing is the preferred method because it is faster, avoids the risks of JS injection, and keeps the code variations under the control of developers rather than UX designers.

References & Further Reading