Overview

This page is a full reference for Optimizely's SDKs.

Read below to learn how to install one of our SDKs and run experiments in your code. You can use the toggles on the upper-right to see implementations in other languages.


If you're just getting started, we recommend beginning with the Getting started guide.


Example usage

The code at right illustrates the basic usage of the SDK to run an experiment in your code.

First, you need to initialize an Optimizely client based on your Optimizely project's datafile.

To run an experiment, you'll want to activate the experiment at the point in your code where you want to split traffic. The activate function returns which variation the current user is assigned to. It also sends an event to Optimizely to record that the current user has been exposed to the experiment. In this example, we've created an experiment, my_experiment, with two variations, control and treatment.

You'll also want to track events for your key conversion metrics. In this example, there is one conversion metric, my_conversion. The track function can be used to track events across multiple experiments, and will be counted for each experiment only if activate has previously been called for the current user.

Note that the SDK requires you to provide your own unique user IDs for all of your activate and track calls. See User IDs for more details and best practices on what user IDs to provide.
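Since this page's code samples live in the side panel, here is a minimal sketch of the flow described above using the Python SDK. It assumes datafile (your project's datafile as a JSON string) and user_id are already defined; the show_* helpers are hypothetical.

    from optimizely import optimizely

    # Initialize the client from your project's datafile (a JSON string).
    client = optimizely.Optimizely(datafile)

    # Activate the experiment: buckets the user and sends an impression event.
    variation = client.activate('my_experiment', user_id)

    if variation == 'treatment':
        show_treatment()  # hypothetical helper for the treatment experience
    else:
        # 'control', or None if the user wasn't bucketed into the experiment.
        show_control()    # hypothetical helper for the default experience

    # Record a conversion event for the same user.
    client.track('my_conversion', user_id)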


Installation


Initialization

The SDK defines an object — the Optimizely client — to simplify your interface with Optimizely. You can use the client to both activate experiments and track events.

Within your code, the client represents the state of your Optimizely project, so changes in your project will be reflected in the client. The client accomplishes this by requiring you to provide your project's datafile during initialization.


Initialization methods


Create a manager


Create a client


Retrieve a client

Datafile bundling


Datafile polling


Datafile

Each SDK must read and parse a datafile at initialization.

The datafile is in JSON format and compactly represents all of the instructions needed to activate experiments and track events in your code without requiring any blocking network requests. For example, the datafile displayed on the right represents an example project from the Getting started guide and the basic usage above.

Unless you are building your own SDK, there shouldn't be any need to interact with the datafile directly. Our SDKs should provide all the interfaces you need to interact with Optimizely after parsing the datafile.

Access datafile via CDN

You can fetch the datafile for your Optimizely project from Optimizely's CDN. For example, if the ID of your project is 12345, you can access the file at the link below:

https://cdn.optimizely.com/json/12345.json

You can easily access this link from your Optimizely project settings.
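As a sketch, assuming the placeholder project ID 12345 and the requests library, you could fetch the datafile and initialize a client like this:

    import requests
    from optimizely import optimizely

    # Substitute your own project ID for the placeholder 12345.
    datafile = requests.get('https://cdn.optimizely.com/json/12345.json').text
    client = optimizely.Optimizely(datafile)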

Access datafile via REST API

Alternatively, you can access the datafile via Optimizely's authenticated REST API. For example, if the ID of your project is 12345, you can access the file at:

https://www.optimizelyapis.com/experiment/v1/projects/12345/json

Note that, as with other requests to the REST API, you will have to authenticate with an API token and use your language's preferred request library.


Synchronization

To stay up to date with your experiment configuration in Optimizely, the SDK needs to periodically synchronize its local copy of the datafile.


Webhooks

If you are managing your datafile from a server-side application, we recommend configuring webhooks to maintain the most up-to-date version of the datafile. Your supplied endpoint will be sent a POST request whenever the respective project is modified. Anytime the datafile is updated, you must re-instantiate the Optimizely object in the SDK for the changes to take effect.

You can set up a webhook in your project settings by adding the URL the webhook service should ping.

The webhook payload structure is shown at right. As of today, we support one event type, project.datafile_updated.


Securing webhooks

When you create a webhook, Optimizely generates a secret token that is used to create a hash signature of webhook payloads. Webhook requests include this signature in a header X-Hub-Signature that can be used to verify the request originated from Optimizely.

You will only be able to view a webhook's secret token once, immediately after creation. If you forget a webhook's secret token, you'll need to regenerate it on your project settings page.

The X-Hub-Signature header contains a SHA1 HMAC hexdigest of the webhook payload, using the webhook's secret token as the key and prefixed with sha1=. While the way you verify this signature will vary depending on the language of your codebase, we've provided a Flask reference implementation.

Note: We strongly recommend that you use a constant time string comparison function such as Python's hmac.compare_digest or Rack's secure_compare instead of the == operator when verifying webhook signatures. This prevents timing analysis attacks.
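The reference implementation appears in the side panel; below is a minimal Flask sketch of the verification described above. The endpoint path and secret value are placeholders.

    import hashlib
    import hmac

    from flask import Flask, abort, request

    app = Flask(__name__)

    WEBHOOK_SECRET = b'your-webhook-secret'  # placeholder: use your real token

    @app.route('/webhooks/optimizely', methods=['POST'])
    def optimizely_webhook():
        # HMAC-SHA1 hexdigest of the raw payload, keyed by the webhook's
        # secret token and prefixed with 'sha1=' to match X-Hub-Signature.
        expected = 'sha1=' + hmac.new(
            WEBHOOK_SECRET, request.data, hashlib.sha1).hexdigest()
        provided = request.headers.get('X-Hub-Signature', '')

        # Constant-time comparison prevents timing analysis attacks.
        if not hmac.compare_digest(expected, provided):
            abort(403)

        # Signature verified: re-fetch the datafile and rebuild the client here.
        return '', 200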


Versioning

For iOS, tvOS and Android projects, the datafile is versioned so that we can maintain backwards compatibility with SDKs that have been released out into the wild with older versions of the datafile. This will ensure that in the event our datafile schema changes, experiments will still run in apps that have not been updated with the latest version of the SDK.

All versions of the datafile for your Optimizely project are accessible via the CDN. For example, if the ID of your project is 12345, you can access v3 of the datafile at the link below:

https://cdn.optimizely.com/public/12345/datafile_v3.json

We will upload every supported version of the datafile to our CDN so that SDKs that are pegged to an older version will be able to retrieve a datafile that is compatible with that SDK.

If you are using OPTLYManager (for iOS or tvOS) or OptimizelyManager (for Android) to manage datafile updates, these will gracefully handle datafile versioning. However, if you are managing the datafile on your own, then you have to make sure to fetch the correct version of the datafile according to the SDK version you are using.


Feature Management

Feature Management is a set of new capabilities in Full Stack 2.0. It provides several new tools that make it easy to integrate your feature development process with experimentation:

  • Create Features in Optimizely that map to features you're developing in your application, then toggle your features on/off with Feature Flags.
  • Create Feature Configurations to parameterize your features with variables, then set your variable values using feature tests and feature rollouts.
  • Use Feature Tests to run A/B tests on your features. Feature tests have all the same capabilities as traditional Full Stack A/B tests.
  • Use Feature Rollouts to launch your features to a percentage of your traffic after you've found your optimal feature configuration.

Feature tests and feature rollouts are both triggered using the isFeatureEnabled method. Create a feature in Optimizely, deploy the feature behind the isFeatureEnabled flag, then turn the feature on/off for users by running feature tests and rollouts.
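As a minimal sketch in Python, using a hypothetical feature key new_checkout and hypothetical helper functions:

    if client.is_feature_enabled('new_checkout', user_id):
        # The feature is on for this user, via either a feature test or a rollout.
        show_new_checkout()
    else:
        # The feature is off: fall back to the existing experience.
        show_old_checkout()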

Please note the following differences between feature tests and feature rollouts:

  • When a user is assigned to a feature test, Optimizely sends an impression so that information is recorded in your test results.
  • When a user is assigned to a feature rollout, Optimizely does not send an impression. This avoids creating extra network traffic.
  • If there are both feature test(s) and a feature rollout running on a feature, the test(s) will be evaluated first.

For more information about using Feature Management, see the following articles:


Feature Flags


Get Feature Configuration

A Feature Configuration is a set of variables associated with a feature that you can set as part of a feature test or feature rollout. Defining feature variables allows you to iterate on your feature in between code deploys. Run feature tests to determine the optimal combination of variable values, then set those values as your default feature configuration and launch using a rollout.

Use the following methods to get the value of a feature variable for a particular datatype.
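In the Python SDK these are the get_feature_variable_* methods, one per datatype. A sketch, using a hypothetical feature new_checkout and hypothetical variable keys:

    # Each getter takes a feature key, a variable key, a user ID, and optional
    # attributes, and returns the variable's value for that user.
    text = client.get_feature_variable_string('new_checkout', 'button_text', user_id)
    banner = client.get_feature_variable_boolean('new_checkout', 'show_banner', user_id)
    limit = client.get_feature_variable_integer('new_checkout', 'max_items', user_id)
    rate = client.get_feature_variable_double('new_checkout', 'discount_rate', user_id)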


Get Enabled Features
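In the Python SDK, get_enabled_features returns the keys of every feature enabled for a given user, which is useful for checking several flags at once. A sketch, with a hypothetical feature key:

    enabled_features = client.get_enabled_features(user_id)
    if 'new_checkout' in enabled_features:
        show_new_checkout()  # hypothetical helper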


Experiments

Full Stack supports three types of experiments:

  • A/B tests help you determine which of two or more one-off changes produces the best results.
  • Feature tests help you iterate on new features.
  • Feature rollouts enable you to launch new features safely.

The following sections describe each experiment type in detail.


A/B Tests

A/B tests are designed to answer one-off questions: which of two (or more) experiences performs best? Use the activate function to run an A/B test at any point in your code.

The activate function requires an experiment key and a user ID. The experiment key should match the experiment key you created when you set up the experiment in the Optimizely web portal. The user ID is a string that uniquely identifies the participant in the experiment (read more in User IDs below).

The activate function returns which variation the current user is assigned to. If the A/B test is running and the user satisfies its audience conditions, the function returns a variation based on a deterministic murmur hash of the provided experiment key and user ID. This function also respects whitelisting and user profiles.

Make sure that your code adequately handles the case when the A/B test is not activated (i.e., when activate returns no variation, it should execute the default experience).

The activate function also sends an event to Optimizely to record that the current user has been exposed to the A/B test. You should call activate at the point you want to record an A/B test exposure to Optimizely. If you don't want to record an A/B test exposure, you can use an alternative function below to get the variation.
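A sketch in Python, reusing the experiment and variation keys from the basic usage example, with hypothetical helpers:

    variation = client.activate('my_experiment', user_id)

    if variation == 'treatment':
        run_treatment()
    elif variation == 'control':
        run_control()
    else:
        # None: the experiment isn't running or the user isn't eligible,
        # so execute the default experience.
        run_control()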


Feature Tests

Feature tests allow you to iterate on the features you develop. Use the isFeatureEnabled flag to trigger a feature test at any point in your code.

Feature tests do not require using the activate method. If a feature test is active for a given user, isFeatureEnabled does the same work that activate does for a traditional A/B test: it will trigger an impression, post to notification listeners, and return the correct values for your feature configuration.

The isFeatureEnabled method requires a feature key and a user ID, and optionally accepts attributes. The feature key is defined in the Features dashboard, as described in Feature tests: Experiment on features. The user ID string uniquely identifies the participant in the experiment (read more in User IDs below).

The isFeatureEnabled method does not return the variation assigned to a user. Instead, it returns true or false as determined by whether the feature is active for this user. The objective of this design is to separate the implementation of a feature from the experiments run on a feature. Just deploy your feature behind this flag, then allow your tests and rollouts to determine whether the feature is live.

While feature tests run using isFeatureEnabled do not require including an experiment key or variation keys in your application, those fields still exist in the Optimizely UI for organizational and naming convenience.

Please note the following differences between feature tests and feature rollouts:

  • When a user is assigned to a feature test, Optimizely sends an impression so that information is recorded in your test results.
  • When a user is assigned to a feature rollout, Optimizely does not send an impression. This avoids creating extra network traffic that could impact users.
  • If there are both feature test(s) and a feature rollout running on a feature, the test(s) will be evaluated first.


Feature Rollouts

Feature Rollouts allow you to launch a feature to a subset of users in order to mitigate risk, or to launch the winning feature configurations you find through your feature tests. Use rollouts to expose a feature to a percentage of your traffic, to one or more audiences, or to a percentage of your targeted audience(s). The rollout's traffic allocation determines the percentage of eligible users who see your feature.

The differences between feature tests and feature rollouts noted under Feature Tests above apply here as well.


User attributes

If you'd like to be able to segment your experiment data based on attributes of your users, you should include the optional attributes argument when you call activate or isFeatureEnabled. Optimizely will include these attributes in the recorded event data so you can segment them on the Optimizely results page.

Passing attributes also allows you to target your experiments to a particular audience you've defined in Optimizely. If the experiment is targeted to an audience, Optimizely will evaluate whether the user falls into one of its associated audiences before bucketing.
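A sketch in Python, with hypothetical attribute keys that would be registered in your Optimizely project:

    attributes = {'device': 'iphone', 'plan_type': 'premium'}  # hypothetical keys
    variation = client.activate('my_experiment', user_id, attributes)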

For more information on managing audiences and attributes, see our Optiverse article.

Variation assignments

If you would like to retrieve the variation assignment for a given experiment and user without sending a network request to Optimizely, use the code shown on the right. This function has identical behavior to activate except that no event is dispatched to Optimizely.

You may want to use this if you're calling activate in a different part of your application, or in a different part of your stack, and you want to fork code for your experiment in multiple places. It is still necessary to call activate at least once to register that the user was exposed to the experiment.
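In the Python SDK this function is get_variation; a sketch:

    # Same bucketing logic as activate, but no impression event is sent.
    variation = client.get_variation('my_experiment', user_id)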


Exclusion groups

You can use Exclusion Groups to keep your experiments mutually exclusive and eliminate interaction effects. For more information on how to set up exclusion groups in the Optimizely UI, see our Optiverse article.

Experiments that are part of a group have the exact same interface in the SDK: you can call activate and track like any other experiment. The SDK will ensure that two experiments in the same group will never be activated for the same user.


Events

You can easily track conversion events from your code using the track function.

The track function requires an event key and a user ID. The event key should match the event key you provided when setting up events in the Optimizely web portal. The user ID should match the user ID provided in the activate function.

The track function can be used to track events across multiple experiments, and will be counted for each experiment only if activate has previously been called for the current user.

To enable segmentation of metrics on the Optimizely results page, you'll also need to pass the same user attributes you used in the activate call.

For offline event tracking and other advanced use cases, you can also use the Event API.

Note: The Optimizely experiment results page will only count events that are tracked after activate has been called. If you are not seeing results on the Optimizely results page, make sure that you are calling activate before tracking conversion events.


Event tags

Event tags are contextual metadata about conversion events that you track.

You can use event tags to attach any key/value data you wish to events. For example, for a product purchase event you may want to attach a product SKU, product category, order ID, and purchase amount. Event tags can be strings, integers, floating point numbers, or Boolean values.

You can include event tags with an optional argument in track as shown on the right.

Event tags are distinct from user attributes which should be reserved for user-level targeting and segmentation. Event tags do not affect audiences or the Optimizely results page, and do not need to be registered in the Optimizely web interface.

Event tags are accessible via raw data export in the event_features column. You should include any event tags you need to reconcile your conversion event data with your data warehouse.

Reserved tags

The following tag keys are reserved and will be included in their corresponding fields in the Optimizely event API payload. They're bundled into event tags for your convenience. Use them if you'd like to benefit from specific reporting features such as revenue metrics or numeric metrics.

  • revenue - An integer value that is used to track the Revenue metric for your experiments, aggregated across all conversion events. Revenue is recorded in cents: if you'd like to record a revenue value of $64.32 use 6432. Any event you want to track revenue for will need to be added as a Metric to your experiment. You can use the Overall Revenue Metric to aggregate multiple Metrics tracking separate revenue events. The Overall Revenue Metric won't track any revenue unless there are other Metrics in your experiment tracking an increase or decrease in total revenue – it won't work on its own.

  • value - A floating point value that is used to track a custom value for your experiments.
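A sketch in Python combining an arbitrary tag with the reserved keys; per the note above, $64.32 of revenue is passed as 6432 cents (the category key and the attributes variable are hypothetical):

    event_tags = {
        'category': 'shoes',  # arbitrary metadata (hypothetical key)
        'revenue': 6432,      # reserved: $64.32, recorded in cents
        'value': 10.5,        # reserved: custom floating point value
    }
    client.track('my_conversion', user_id, attributes, event_tags)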


Tracking with other SDKs

You can use any of our SDKs to track events, so you can run experiments that span multiple applications, services, or devices. All of our SDKs have the same bucketing and targeting behavior so you'll see the exact same output from experiment activation and tracking, provided you are using the same datafile and user IDs.

For example, if you're running experiments on your server you can activate experiments with our Python, Java, Ruby, C#, Node, or PHP SDKs, but track user actions client-side using our JavaScript, Objective-C or Android SDKs.

If you plan on using multiple SDKs for the same project, make sure that all SDKs are sharing the same datafile and user IDs.


User IDs

User IDs are used to uniquely identify the participants in your experiments. You can supply any string you wish for user IDs depending on your experiment design.

For example, if you're running experiments on anonymous users, you can use a 1st party cookie or device ID to identify each participant. If you're running experiments on known users, you can use a universal user identifier (UUID) to identify each participant. If you're using UUIDs then you can run experiments that span multiple devices or applications and ensure that users have a consistent treatment.

User IDs don't necessarily need to correspond to individual users. If you're running experiments in a B2B SaaS application, you may want to pass account IDs to the SDK to ensure that every user in a given account has the same treatment.

Below are some additional tips for using user IDs.

  • Ensure user IDs are unique: It is essential that user IDs are unique among the population you are using for experiments. Optimizely buckets users and computes experiment metrics based on the user IDs you provide.

  • Anonymize user IDs: The user IDs you provide will be sent to Optimizely servers exactly as you provide them. You are responsible for anonymizing any personally identifiable data such as email addresses in accordance with your company's policies.

  • Use IDs from 3rd party platforms: If you are measuring the impact of your experiments in a 3rd party analytics platform, we recommend leveraging the same user ID from that platform. This will help ensure data consistency and make it easier to reconcile events between systems.

  • Use one namespace per project: Optimizely generally assumes a single namespace of user IDs for each project. If you are using multiple different conventions for user IDs in a single project (e.g. anonymous visitor IDs for some experiments and UUIDs for others) then Optimizely will be unable to enforce rules such as mutual exclusivity between experiments.

  • Use either logged-out or logged-in IDs: We do not currently provide a mechanism to alias logged-out IDs with logged-in IDs. If you are running experiments that span both logged-out and logged-in states (e.g., experimenting on a signup funnel and tracking conversions after the user has logged in), you must persist logged-out IDs for the lifetime of the experiment.


Bucketing IDs (beta)

Bucketing ID is a beta feature intended to support customers who want to assign variations with a different identifier than the one they use to count visitors. For example, a company might want to assign variations by account ID while counting visitors by user ID. We're investigating the implications of Bucketing IDs on results analysis, and we'd love your feedback! If you want to participate in this beta release, open a ticket with our support team.

By default, Optimizely assigns users to variations, i.e., Optimizely "buckets" users, based on submitted user IDs. You can change this behavior by including a bucketing ID.

With a bucketing ID, you decouple user bucketing from user identification. Users who have the same bucketing ID are put into the same bucket and are exposed to the same variation.

Using a bucketing ID does not affect the user ID. Event data submissions will continue to include user IDs. With the exception of assigning users to specific variations, features that rely on user IDs behave the same regardless of the presence of a separate bucketing ID. If you do not pass a bucketing ID to the attributes parameter, users will be bucketed by user IDs, which is the default method.
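A sketch in Python: the bucketing ID is passed as a reserved attribute, which in the Python SDK we understand to be the $opt_bucketing_id key (an assumption to verify against your SDK version for this beta); account_id is a hypothetical shared identifier.

    attributes = {
        '$opt_bucketing_id': account_id,  # bucketing uses this value
        'device': 'iphone',
    }
    # Assignment is computed from account_id; events still report user_id.
    variation = client.activate('my_experiment', user_id, attributes)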

Notes

  • The bucketing ID is not persisted.
  • Bucketing IDs are not compatible with Optimizely's Rollouts feature.

QA & Debugging

There are several tools available for QAing experiments and debugging the SDK's behavior.


Whitelists

You can use Whitelisting to force users into specific variations for QA purposes. For more information on how to set up whitelists in the Optimizely UI, see our Optiverse article.

Whitelists are included in your datafile in the forcedVariations field. You don't need to do anything differently in the SDK; if you've set up a whitelist, experiment activation will force the variation output based on the whitelist you've provided. Whitelisting overrides audience targeting and traffic allocation. Whitelisting does not work if the experiment is not running.


Forced Bucketing

The forced bucketing feature allows you to force a user into a variation by calling a method in the SDK. This feature is particularly useful for the purpose of testing as it allows you to set the variation on the client in real time, eliminating the uncertainty and latency of a datafile download.

Forced bucketing is similar to whitelisting in that it allows you to force a user into a specific variation. It differs from whitelisting in two ways: you set the variation in the SDK rather than on the Optimizely web dashboard (eliminating the dependency on a datafile download), and you are not limited in how many users can be forced into a variation, as you are with whitelisting. Note that the forcedVariations field in the datafile relates only to whitelisted variations, not to variations set by this API.

The example code demonstrates how to use the forced bucketing API. An experiment key, user ID, and variation key are passed into the set method. The variation set will be cached and used by all SDK API methods, including activate and track, for that session (i.e., the lifetime of the Optimizely SDK client instance). Variations are overwritten with each set method call. To clear the forced variations so that the normal bucketing flow can occur, simply pass null as the variation key parameter. A corresponding getter method, which takes the experiment key and user ID, returns the variation you forced a user into for a particular experiment.
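In the Python SDK, the setter and getter are set_forced_variation and get_forced_variation; a sketch:

    # Force the user into the 'treatment' variation.
    client.set_forced_variation('my_experiment', user_id, 'treatment')

    # Returns 'treatment' until the forced variation is cleared.
    client.get_forced_variation('my_experiment', user_id)

    # Pass None (this language's null) to restore normal bucketing.
    client.set_forced_variation('my_experiment', user_id, None)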

Forced bucketing variations will take precedence over whitelisted variations, variations saved in the User Profile Service (if one exists), and the normal bucketed variation. Events sent to our results backend will proceed as normal when forced bucketing is enabled.


Notification Listeners

For basic debugging, plug in a logger to observe what the SDK is doing. To hook into these actions programmatically, you can use notification listeners to be notified when various Optimizely X SDK events occur.

Activate and track listeners can be added through the notification center on your Optimizely instance. Define your closure, function, lambda, or class (depending on the language) and add it to the notification center. The listeners are called on every subsequent event. Listeners are added per event type: there is a listener type for activate and a listener type for track. The example code shows how to add a listener, remove a listener, remove all listeners of a specific type (e.g., all activate listeners), or remove all listeners.
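A sketch using the Python SDK's notification center; the callback signature shown follows the Python SDK's activate listener and may differ in other languages and SDK versions.

    from optimizely.helpers import enums

    def on_activate(experiment, user_id, attributes, variation, event):
        # Called after every activate; log or forward to analytics here.
        print('activated an experiment for user %s' % user_id)

    # Listeners are registered per notification type; an ID is returned.
    listener_id = client.notification_center.add_notification_listener(
        enums.NotificationTypes.ACTIVATE, on_activate)

    # Remove a single listener by its ID.
    client.notification_center.remove_notification_listener(listener_id)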


Variables

Live Variables are supported in the mobile SDKs.


SDK configuration

You can optionally provide a number of parameters to your Optimizely client to configure how the SDK behaves:

  • Event dispatcher: Configure how the SDK sends events to Optimizely
  • Logger: Configure how the SDK logs messages when certain events occur
  • Error handler: Configure how errors should be handled in your production code
  • User profile service: Configure what user state is persisted by the SDK
  • Datafile handler: Configure how and how often the datafile is updated

The SDK provides default implementations of event dispatching, logging, and error handling out of the box, but we encourage you to override the default behavior if you have different requirements in your production application. If you'd like to edit the default behavior, refer to the reference implementations in the SDK source code for examples.


Event dispatcher

You can optionally change how the SDK sends events to Optimizely by providing an event dispatcher method. You should provide your own event dispatcher if you have networking requirements that aren't met by our default dispatcher.

The event dispatching function takes an event object with three properties: httpVerb, url, and params, all of which are built out for you in EventBuilder. A POST request should be sent to url with params in the body (be sure to stringify it to JSON) and {content-type: 'application/json'} in the headers.
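A sketch of a custom dispatcher for the Python SDK, where the event object exposes the same properties under snake_case names (http_verb, url, params) plus prebuilt headers:

    import json

    import requests
    from optimizely import optimizely

    class CustomEventDispatcher(object):
        # The SDK calls dispatch_event with the event object described above.
        @staticmethod
        def dispatch_event(event):
            if event.http_verb == 'POST':
                requests.post(event.url, data=json.dumps(event.params),
                              headers=event.headers, timeout=10)
            else:
                requests.get(event.url, params=event.params, timeout=10)

    client = optimizely.Optimizely(datafile, event_dispatcher=CustomEventDispatcher())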

Logger

The logger logs information about your experiments to help you with debugging.

The log levels vary slightly between SDKs but generally fall in the following buckets:

Log Levels

  • CRITICAL: Events that cause the app to crash are logged.
  • ERROR: Events that prevent experiments from functioning correctly (e.g., invalid datafile in initialization, invalid experiment keys or event keys) are logged. The user can take action to correct.
  • WARNING: Events that don't prevent experiments from functioning correctly, but can have unexpected outcomes (e.g., future API deprecation, logger or error handler not set properly, nil values from getters) are logged.
  • DEBUG: Any information related to errors that can help debug the issue (e.g., experiment is not running, user is not in experiment, the bucket value generated by our helper method) is logged.
  • INFO: Events of significance (e.g., activate started, activate succeeded, tracking started, tracking succeeded) are logged. This is helpful in showing the lifecycle of an API call.
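A sketch of a custom logger for the Python SDK, which we understand expects an object with a log(level, message) method (verify against your SDK version); this one forwards messages to the standard logging module:

    import logging

    class CustomLogger(object):
        def __init__(self):
            self.logger = logging.getLogger('optimizely')

        def log(self, level, message):
            # Forward SDK messages to the standard logging module.
            self.logger.log(level, message)

    client = optimizely.Optimizely(datafile, logger=CustomLogger())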


Error handler

You can provide your own custom error handler logic to standardize error handling across your production environment. This error handler will be called in the following situations:

  • Unknown experiment key referenced
  • Unknown event key referenced

If the error handler isn’t overridden, a no-op error handler is used by default.
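A sketch for the Python SDK, where a custom handler subclasses BaseErrorHandler and overrides handle_error; this one raises errors so they surface during development:

    from optimizely.error_handler import BaseErrorHandler

    class RaisingErrorHandler(BaseErrorHandler):
        @staticmethod
        def handle_error(error):
            # Surface SDK errors instead of silently ignoring them.
            raise error

    client = optimizely.Optimizely(datafile, error_handler=RaisingErrorHandler())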


User Profile Service

Use a user profile service to persist information about your users and ensure variation assignments are sticky. For example, if you are working on a backend website, you can create an implementation that reads and saves user profiles from a Redis or memcached store.

Implementing a user profile service is optional and is only necessary if you want to keep variation assignments sticky even when experiment conditions change while the experiment is running (e.g., audiences, attributes, variation pausing, traffic distribution). For more information on user profiles and how they are used by the SDK, see this Optiverse article.
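A sketch of an in-memory user profile service for the Python SDK, which expects an object with lookup and save methods; swap the dict for a Redis or memcached client in a real backend.

    class InMemoryUserProfileService(object):
        def __init__(self):
            self.profiles = {}

        def lookup(self, user_id):
            # Return the stored profile dict for this user, or None.
            return self.profiles.get(user_id)

        def save(self, user_profile):
            # user_profile is a dict keyed by 'user_id' with bucketing state.
            self.profiles[user_profile['user_id']] = user_profile

    client = optimizely.Optimizely(
        datafile, user_profile_service=InMemoryUserProfileService())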


Datafile Handler

The default datafile handler is only available in the mobile SDKs.

IP anonymization

In some countries, you may be required to remove the last block of an IP address to protect the identity of your visitors. Optimizely allows you to easily remove the last block of your visitors' IP address before we store results data. This feature is currently available only for iOS, tvOS, and Android projects. To enable IP anonymization, see our Optiverse article.


Integrations

You can add Optimizely experiment information to third-party analytics platforms using Notification Listeners. Measure the impact of your experiments by segmenting your analytics reports using Optimizely experiment and variation keys or IDs.

Below you will find suggested implementations for some common analytics platforms including Amplitude, Google Analytics, Localytics, Mixpanel, and Segment. You can use them as presented or adapt them to meet your specific needs.

Notification Listeners also allow for the flexibility to implement an integration with a platform that is not listed here.

Experiment and Variation Identifiers

Experiment keys are unique within an Optimizely project, and variation keys are unique within an experiment. However, experiment and variation keys are not guaranteed to be universally unique since they are user generated. This may become a problem in your analytics reports if you use the same key in multiple Optimizely projects or rename your keys.

For human-friendly strings when absolute uniqueness is not required, use keys. If you need to uniquely identify experiments and variations in a way that will not change when you rename keys, use the automatically generated IDs.


Amplitude


Google Analytics


Localytics


Mixpanel


Segment


Changelog