Overview

This is the complete reference for Optimizely's SDKs.

Read on to learn how to install Optimizely's SDKs and run experiments in your code. Use the toggles at the upper-right to see implementations in other languages.


If you're just getting started, we recommend these resources:

  • Example usage: Overview of the core components of the SDK.
  • Getting started guide: Install the SDK and run an experiment in a few minutes.
  • FAQs: Answers for the most commonly asked questions about our SDKs.


Example usage

The code in the right panel illustrates the basic usage of the SDK to run an experiment in your code.

First, initialize an Optimizely client based on your Optimizely project's datafile.

To run an experiment, activate the experiment at the point you want to split traffic in your code. The activate function returns which variation the current user is assigned to. The activate function also sends an event to Optimizely to record that the current user has been exposed to the experiment. In this example, we've created an experiment, my_experiment, with two variations, control and treatment.

You'll also track events for your key conversion metrics. In this example, there is one conversion metric, my_conversion. The track function can be used to track events across multiple experiments and will be counted for each experiment only if activate has previously been called for the current user.

Note: The purpose of this example is to familiarize you with the track method. However, the example simply shows how to call the track method right after a bucketing decision, without any conditions. Normally, you would create a condition for a conversion event and then call the track method when that condition is true.

The SDK requires you to provide your own unique user IDs for all of your activate and track calls. See User IDs for more details and best practices on what user IDs to provide.
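The flow above can be sketched in Python. The stand-in client below is hypothetical and only mirrors the activate/track interface, so the control flow is concrete without requiring the real SDK (with the real Python SDK you would build the client from your datafile via `optimizely.Optimizely(datafile)`):

```python
class StubOptimizelyClient:
    """Hypothetical stand-in mirroring the activate/track interface."""

    def activate(self, experiment_key, user_id, attributes=None):
        # The real SDK hashes the user ID to pick a variation and sends
        # an impression event; here we simply return a fixed variation.
        return 'treatment'

    def track(self, event_key, user_id, attributes=None, event_tags=None):
        # The real SDK sends a conversion event to Optimizely.
        print('tracked %s for %s' % (event_key, user_id))


client = StubOptimizelyClient()
user_id = 'user123'

# Split traffic at the point of the experiment.
variation = client.activate('my_experiment', user_id)
if variation == 'treatment':
    pass  # show the treatment experience
else:
    pass  # fall back to the control experience

# Later, when the user converts, record the conversion metric.
client.track('my_conversion', user_id)
```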


Installation


Initialization

The SDK allows you to instantiate one or more Optimizely clients in your application. Each client represents a self-contained instance of Optimizely with a separate configuration. These clients can be used to activate experiments, evaluate features, and track events.

There are several other optional parameters for configuring the Optimizely client. You can learn more in the Customization section below.

Initialization methods


Create a client


Retrieve a client


Datafile bundling


Datafile polling


Datafile

Each SDK must read and parse a datafile at initialization.

The datafile is in JSON format and compactly represents all of the instructions needed to activate experiments and track events in your code without requiring any blocking network requests. For example, the datafile displayed on the right represents an example project from the Getting started guide and the basic usage above.

Unless you are building your own SDK, there shouldn't be any need to interact with the datafile directly. Our SDKs should provide all the interfaces you need to interact with Optimizely after parsing the datafile.

There are several options for keeping the datafile in sync. See Best practices for datafile management in Full Stack for more information.

Access datafile via CDN

Fetch the datafile for your Optimizely project from Optimizely's CDN. For example, if the ID of your project is 12345, you can access the file at the link below:

https://cdn.optimizely.com/json/12345.json

Access the datafile link in the Optimizely app.
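A minimal sketch of building and fetching that CDN URL, using only the standard library (the fetch is a live network call, so you would typically cache the result and refresh it periodically):

```python
import urllib.request

CDN_TEMPLATE = 'https://cdn.optimizely.com/json/{project_id}.json'

def datafile_url(project_id):
    """Build the CDN URL for a project's datafile."""
    return CDN_TEMPLATE.format(project_id=project_id)

def fetch_datafile(project_id, timeout=10):
    """Fetch and return the raw datafile JSON string (network call)."""
    with urllib.request.urlopen(datafile_url(project_id),
                                timeout=timeout) as resp:
        return resp.read().decode('utf-8')

# Project ID 12345 maps to the URL shown above.
url = datafile_url(12345)
```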

Access datafile via REST API

Alternatively, you can access the datafile via Optimizely's authenticated REST API. With the REST API, updates are instantaneous; any changes you make in the product are immediately reflected in the API.

For example, if the ID of your project is 12345, you can access the datafile at:

https://www.optimizelyapis.com/experiment/v1/projects/12345.json

The datafile the API returns belongs to the Production environment. We do not support environment-specific datafiles.

As with other requests to the REST API, you must authenticate with an API token and use the preferred request library of your language.


Synchronization

To stay up-to-date with your experiment configuration in Optimizely, the SDK needs to periodically synchronize its local copy of the datafile.


Webhooks

If you are managing your datafile from a server-side application, we recommend configuring webhooks to maintain the most up-to-date version of the datafile. Your supplied endpoint will receive a POST request whenever the respective project is modified. Any time the datafile is updated, you must re-instantiate the Optimizely object in the SDK for the changes to take effect.

Set up a webhook, and add the URL the webhook service should ping.

The webhook payload structure is shown in the right panel. We currently support one event type, project.datafile_updated.


Securing webhooks

When you create a webhook, Optimizely generates a secret token that is used to create a hash signature of webhook payloads. Webhook requests include this signature in a header, X-Hub-Signature, that can be used to verify the request originated from Optimizely.

You can only view a webhook's secret token once immediately after creation. If you forget a webhook's secret token, you'll need to regenerate it in the Optimizely app.

The X-Hub-Signature header contains a SHA1 HMAC hexdigest of the webhook payload, using the webhook's secret token as the key and prefixed with sha1=. The way you verify this signature will vary depending on the language of your codebase, but we've provided a Flask reference implementation as an example.

We strongly recommend that you use a constant time string comparison function, such as Python's hmac.compare_digest or Rack's secure_compare, instead of the == operator when verifying webhook signatures. This prevents timing analysis attacks.
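The verification described above can be sketched with the standard library alone (the Flask reference implementation wraps the same logic; only the wiring differs):

```python
import hashlib
import hmac

def verify_webhook_signature(payload, header_signature, secret_token):
    """Check the X-Hub-Signature header against the raw request body.

    payload: raw request body as bytes
    header_signature: value of X-Hub-Signature, e.g. 'sha1=...'
    secret_token: the webhook's secret token (str)
    """
    expected = 'sha1=' + hmac.new(
        secret_token.encode('utf-8'), payload, hashlib.sha1
    ).hexdigest()
    # Constant-time comparison prevents timing analysis attacks.
    return hmac.compare_digest(expected, header_signature)
```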


Feature Management

Feature Management is a set of new capabilities in Optimizely X Full Stack 2.0. Feature Management provides several new tools that help you integrate your feature development process with experimentation:

  • Create features in Optimizely that map to features you're developing in your application, then toggle your features on/off with feature flags.
  • Create feature configurations to parameterize your features with variables, then set your variable values using feature tests and feature rollouts.
  • Use feature tests to run A/B tests on your features. Feature tests have all the same capabilities as traditional Full Stack A/B tests.
  • Use feature rollouts to launch your features to a percentage of your traffic after you've found your optimal feature configuration.

Feature tests and feature rollouts are triggered using the isFeatureEnabled method. Create a feature in Optimizely, deploy the feature behind isFeatureEnabled, and turn the feature on/off for users by running feature tests and feature rollouts.

Note the following differences between feature tests and feature rollouts:

  • When a user is assigned to a feature test, Optimizely sends an impression so that information is recorded in your test results.
  • When a user is assigned to a feature rollout, Optimizely does not send an impression. This avoids creating extra network traffic.
  • If a feature test and a feature rollout are running on a feature, the test will be evaluated first.

For more information about using Feature Management, see the following articles:


Feature flags


Get feature configuration

A feature configuration is a set of variables associated with a feature that you can set as part of a feature test or feature rollout. Defining feature variables allows you to iterate on your feature in between code deploys. Run feature tests to determine the optimal combination of variable values, then set those values as your default feature configuration and launch using a rollout.

Use the following methods to get the value of a feature variable for a particular datatype:
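The per-type getters behave roughly as sketched below: each getter looks up a variable and returns its value only if the declared type matches, otherwise it returns a null value. The feature and variable names here are hypothetical, and the map stands in for the datafile-backed configuration the SDK actually consults:

```python
# Hypothetical feature configuration: variable_key -> (type, value).
FEATURE_VARIABLES = {
    'checkout_flow': {
        'button_text': ('string', 'Buy now'),
        'discount_pct': ('double', 10.0),
        'max_items': ('integer', 5),
        'show_banner': ('boolean', True),
    },
}

def _get_feature_variable(feature_key, variable_key, expected_type):
    var = FEATURE_VARIABLES.get(feature_key, {}).get(variable_key)
    if var is None or var[0] != expected_type:
        return None  # unknown variable or datatype mismatch
    return var[1]

def get_feature_variable_string(feature_key, variable_key):
    return _get_feature_variable(feature_key, variable_key, 'string')

def get_feature_variable_integer(feature_key, variable_key):
    return _get_feature_variable(feature_key, variable_key, 'integer')

def get_feature_variable_boolean(feature_key, variable_key):
    return _get_feature_variable(feature_key, variable_key, 'boolean')
```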


Get enabled features


Experiments

Optimizely X Full Stack supports three types of experiments:

  • A/B tests help you determine which of two or more one-off changes produces the best results.
  • Feature tests help you iterate on new features.
  • Feature rollouts enable you to launch new features safely.

The following sections describe each experiment type in detail.


A/B tests

A/B tests are designed to answer one-off questions: which of two (or more) experiences performs best? Use the activate function to run an A/B test at any point in your code.

The activate function requires an experiment key and a user ID. The experiment key should match the experiment key you created when you set up the experiment in the Optimizely app. The user ID is a string that uniquely identifies the user in the experiment (read more in User IDs below).

The activate function returns which variation the current user is assigned to. If the A/B test is running and the user satisfies its audience conditions, the function returns a variation based on a deterministic murmur hash of the provided experiment key and user ID. This function also respects whitelisting and user profiles.
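The determinism is the important property: the same experiment key and user ID always map to the same variation. The sketch below illustrates this with SHA-1 purely for readability; the real SDKs use MurmurHash3, and the traffic-allocation shape here is a simplification of what the datafile encodes:

```python
import hashlib

MAX_BUCKET = 10000  # buckets span 0..9999 (basis points of traffic)

def bucket(experiment_key, user_id, traffic_allocation):
    """Deterministically map a user to a variation.

    traffic_allocation: list of (variation_key, end_of_range) pairs,
    where end_of_range is out of MAX_BUCKET.
    NOTE: the real SDKs use MurmurHash3; SHA-1 here only illustrates
    that identical inputs always yield the same variation.
    """
    key = ('%s:%s' % (user_id, experiment_key)).encode('utf-8')
    value = int(hashlib.sha1(key).hexdigest(), 16) % MAX_BUCKET
    for variation_key, end_of_range in traffic_allocation:
        if value < end_of_range:
            return variation_key
    return None  # user falls outside the allocated traffic

allocation = [('control', 5000), ('treatment', 10000)]
first = bucket('my_experiment', 'user123', allocation)
second = bucket('my_experiment', 'user123', allocation)
```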

Make sure that your code adequately handles the case when the A/B test is not activated (it should execute the default variation).

The activate function also sends an event to Optimizely to record that the current user has been exposed to the A/B test. You should call activate at the point you want to record an A/B test exposure to Optimizely. If you don't want to record an A/B test exposure, you can use an alternative function below to get the variation.


Feature tests

Feature tests allow you to iterate on the features you develop. Use the isFeatureEnabled method to trigger a feature test at any point in your code.

Feature tests do not require using the activate method. If a feature test is active for a given user, isFeatureEnabled does the same work that activate does for a traditional A/B test: it triggers an impression, posts to notification listeners, and returns the correct values for your feature configuration.

The isFeatureEnabled method requires a feature_key, a user_id, and (optionally) attributes. The feature key is defined from the Features dashboard, as described in Feature tests: Experiment on features. The user ID string uniquely identifies the participant in the experiment (read more in User IDs below).

The method does not return the variation assigned to a user. Instead, it returns true or false, as determined by whether the feature is active for this user. The objective of this design is to separate the implementation of a feature from the experiments run on it. Deploy your feature behind this flag, then allow your experiments and rollouts to determine whether the feature is live.

Although feature tests run using isFeatureEnabled do not require including an experiment key or variation keys in your application, those fields still exist in the Optimizely UI for organizational and naming convenience.

Note the following differences between feature tests and feature rollouts:

  • When a user is assigned to a feature test, Optimizely kicks off an impression so that information is recorded in the experiment results.
  • When a user is assigned to a feature rollout, Optimizely does not kick off an impression. This avoids creating extra network traffic that could affect users.
  • If a feature test and a feature rollout are running on a feature, the feature test will be evaluated first.


Feature rollouts

Feature rollouts allow you to launch a feature to a subset of users to mitigate risk or to launch the winning feature configurations you find through your feature tests. Use rollouts to expose a feature to a percentage of your traffic, one or more audiences, or a percentage of your targeted audiences. The rollout's traffic allocation determines the percentage of eligible users who see your feature.

Note the following differences between feature tests and feature rollouts:

  • When a user is assigned to a feature test, Optimizely kicks off an impression so that information is recorded in the test results.
  • When a user is assigned to a feature rollout, Optimizely does not kick off an impression. This avoids creating extra network traffic that could affect users.
  • If a feature test and a feature rollout are running on a feature, the test will be evaluated first.


User attributes

If you'd like to be able to segment your experiment data based on attributes of your users, include the optional attributes argument when you call activate or isFeatureEnabled. Optimizely will include these attributes in the recorded event data so you can segment them on the Optimizely results page.

Passing attributes will also allow you to target your experiments to a particular audience you've defined in Optimizely. If the provided experiment is targeted to an audience, Optimizely will evaluate whether the user falls in an audience that is associated with the experiment before bucketing.

An attribute is composed of an ID, a key, and a segment ID. All of these are strings, so any values stored as attributes must be converted to strings.

For more information about managing audiences and attributes, see Create audiences and attributes in Full Stack projects.

Variation assignments

If you would like to retrieve the variation assignment for a given experiment and user without sending a network request to Optimizely, use the code shown on the right. This function has identical behavior to activate except that no event is dispatched to Optimizely.

You may want to use this function if you're calling activate in a different part of your application or in a different part of your stack and you want to fork code for your experiment in multiple places. It is still necessary to call activate at least once to register that the user was exposed to the experiment.


Exclusion groups

Use exclusion groups to keep your experiments mutually exclusive and eliminate interaction effects.

Experiments that are part of an exclusion group have the exact same interface in the SDK: you can call activate and track like any other experiment. The SDK will ensure that two experiments in the same group are never activated for the same user.


Events

Track conversion events from your code with the track function. For more information about when and how to track events, see Best practices for tracking events in Full Stack.

The track function requires an event key and a user ID. The event key should match the event key you provided when creating events in a Full Stack project in the Optimizely app. The user ID should match the user ID provided in the activate or isFeatureEnabled functions.

The track function can be used to track events across multiple experiments and will be counted for each experiment only if activate has previously been called for the current user.

To enable segmentation of metrics on the Optimizely results page, pass the same user attributes you used in the activate call.

For offline event tracking and other advanced use cases, you can also use the Events API.

Note:

- The Optimizely results page will only count events that are tracked after activate has been called. If you do not see results on the Optimizely results page, make sure that you are calling activate before tracking conversion events.

- If you are tracking user attributes, you must pass the same dictionary used in the activate call to this track call.


Event tags

Event tags are contextual metadata about conversion events that you track.

Use event tags to attach key/value data to events. For example, for a product purchase event, you may want to attach a product SKU, product category, order ID, and purchase amount. Event tags can be strings, integers, floating point numbers, or Boolean values.

You can include event tags with an optional argument in track as shown in the right panel.

Event tags are distinct from user attributes, which should be reserved for user-level targeting and segmentation. Event tags do not affect audiences or the Optimizely results page and do not need to be registered in the Optimizely app.

Event tags are accessible via raw data export in the event_features column. Include any event tags you need to reconcile your conversion event data with your data warehouse.

Reserved tags

The following tag keys are reserved and will be included in their corresponding fields in the Optimizely event API payload. They're bundled into event tags for your convenience. Use them if you want to benefit from specific reporting features such as revenue metrics or numeric metrics.

  • revenue: An integer value that is used to track the revenue metric for your experiments, aggregated across all conversion events. Revenue is recorded in cents; to record a revenue value of $64.32, use 6432. Add any event you want to track revenue for as a metric in your experiment. Use the overall revenue metric to aggregate multiple metrics tracking separate revenue events. The overall revenue event won't track any revenue unless other metrics in your experiment are tracking an increase or decrease in total revenue; it won't work on its own.

  • value: A floating point value that is used to track a custom value for your experiments.
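Because revenue is recorded in cents, a dollar amount must be converted before being passed as a tag. A minimal sketch of building the event tags dictionary passed to track (the 'category' tag is a hypothetical custom tag):

```python
def dollars_to_revenue_cents(amount):
    """Convert a dollar amount to the integer cents the revenue tag expects."""
    return int(round(amount * 100))

# Event tags passed to track(); 'revenue' and 'value' are the reserved keys.
event_tags = {
    'revenue': dollars_to_revenue_cents(64.32),  # $64.32 -> 6432
    'value': 64.32,
    'category': 'shoes',  # hypothetical custom tag
}
```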


Tracking with other SDKs

You can use any of our SDKs to track events, so you can run experiments that span multiple applications, services, or devices. All of our SDKs have the same bucketing and targeting behavior, so you'll see the same output from experiment activation and tracking as long as you are using the same datafile and user IDs.

For example, if you're running experiments on your server, you can activate experiments with our Python, Java, Ruby, C#, Node, or PHP SDKs, but track user actions client-side using our JavaScript, Objective-C, or Android SDKs.

If you plan to use multiple SDKs for the same project, make sure that all SDKs share the same datafile and user IDs.


Bot filtering

NOTE: This capability is available in the SDKs starting with version 2.1.0. It also requires access to Feature Management. If your account does not have access to Feature Management, please contact Optimizely support.

Optimizely X Full Stack bot filtering allows you to filter events by sending User-Agents to Optimizely with your events. Optimizely will compare the User-Agents you send to a list of known bots and filter all User-Agents that are on the blacklist. Navigate to Settings > Advanced in the Optimizely app to turn bot filtering on or off.

Case: Client-side JavaScript events

When you track an event by calling the SDK’s activate, isFeatureEnabled, or track methods, client-side implementations of the JavaScript SDK automatically include the user’s User-Agent in the outbound request. If bot filtering is enabled for your project, Optimizely applies bot filtering automatically.

Case: All other Full Stack events

When you track events with an SDK from somewhere other than a web browser, you must pass the User-Agent to be filtered with your event. To do this, set the user’s User-Agent string to the value of the reserved attribute '$opt_user_agent' and specify the '$opt_user_agent' attribute when you call the SDK’s activate, isFeatureEnabled, or track methods, as shown in the code sample in the right panel. If bot filtering is enabled for your project and the User-Agent is passed in this way, Optimizely will apply bot filtering.
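A minimal sketch of attaching the reserved attribute before calling the SDK (the 'plan' attribute is hypothetical; only '$opt_user_agent' is the reserved key described above):

```python
def attributes_with_user_agent(attributes, user_agent):
    """Merge the reserved '$opt_user_agent' key into the attributes dict."""
    merged = dict(attributes or {})
    merged['$opt_user_agent'] = user_agent
    return merged

# The merged dict would then be passed as the attributes argument to
# activate, isFeatureEnabled, or track.
attrs = attributes_with_user_agent(
    {'plan': 'enterprise'},  # hypothetical existing attributes
    'Mozilla/5.0 (X11; Linux x86_64)',
)
```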

For more information about the bots and spiders captured by bot filtering, read Bot and spider filtering in Optimizely X.


User IDs

User IDs are used to uniquely identify the participants in your experiments. Supply any string you wish for user IDs, depending on your experiment design.

For example, if you're running experiments on anonymous users, you can use a first-party cookie or device ID to identify each participant. If you're running experiments on known users, you can use a universal user identifier (UUID) to identify each participant. If you're using UUIDs, you can run experiments that span multiple devices or applications and ensure that users have a consistent treatment.

User IDs don't necessarily need to correspond to individual users. If you're running experiments in a B2B SaaS application, you may want to pass account IDs to the SDK to ensure that every user in a given account has the same treatment.

Here are more tips for using user IDs:

  • Ensure user IDs are unique: User IDs must be unique among the population you are using for experiments. Optimizely will bucket users and provide experiment metrics based on the user ID that you provide.

  • Anonymize user IDs: The user IDs you provide will be sent to Optimizely servers exactly as you provide them. You are responsible for anonymizing any personally identifiable data such as email addresses in accordance with your company's policies.

  • Use IDs from third-party platforms: If you are measuring the impact of your experiments in a third-party analytics platform, we recommend leveraging the same user ID from that platform. This will help ensure data consistency and make it easier to reconcile events between systems.

  • Use one namespace per project: Optimizely generally assumes a single namespace of user IDs for each project. If you are using multiple different conventions for user IDs in a single project (e.g., anonymous visitor IDs for some experiments and UUIDs for others), Optimizely will be unable to enforce rules such as mutual exclusivity between experiments.

  • Use either logged-out or logged-in IDs: We do not currently provide a mechanism to alias logged-out IDs with logged-in IDs. If you are running experiments that span both logged-out and logged-in states (e.g., experiment on a signup funnel and track conversions after the user has logged in), you must persist logged-out IDs for the lifetime of the experiment.
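As one possible approach to the anonymization tip above (this is an application-side pattern, not an SDK feature), a keyed hash turns PII such as an email address into a stable, unique ID before it is passed to the SDK; the secret key here is a hypothetical application secret:

```python
import hashlib
import hmac

HASH_KEY = b'rotate-me'  # hypothetical application secret, kept server-side

def anonymized_user_id(email):
    """Derive a stable, anonymized user ID from an email address.

    The raw address never reaches Optimizely, but the same person always
    maps to the same ID, preserving consistent bucketing.
    """
    return hmac.new(HASH_KEY, email.lower().encode('utf-8'),
                    hashlib.sha256).hexdigest()

uid1 = anonymized_user_id('Jane@Example.com')
uid2 = anonymized_user_id('jane@example.com')
```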


Bucketing IDs (beta)

Bucketing ID is a beta feature intended to support customers who want to assign variations with a different identifier than the one they use to count visitors. For example, a company might want to assign variations by account ID while counting visitors by user ID. We're investigating the implications of bucketing IDs on results analysis, and we'd love to have your feedback! If you want to participate in this beta release, please open a ticket with our support team.

By default, Optimizely assigns users to variations (in other words, Optimizely "buckets" users) based on submitted user IDs. You can change this behavior by including a bucketing ID.

With a bucketing ID, you decouple user bucketing from user identification. Users who have the same bucketing ID are put into the same bucket and are exposed to the same variation.

Using a bucketing ID does not affect the user ID. Event data submissions will continue to include user IDs. With the exception of assigning users to specific variations, features that rely on user IDs behave the same regardless of the presence of a separate bucketing ID. If you do not pass a bucketing ID to the attributes parameter, users will be bucketed by user IDs, which is the default method.
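For example, to give every user in the same account the same variation while still recording events under individual user IDs, the bucketing ID is passed through the attributes parameter. The sketch below assumes the reserved attribute key '$opt_bucketing_id' used by the SDKs for this purpose; the other attribute names are hypothetical:

```python
def attributes_for(account_id, extra=None):
    """Attach the reserved bucketing ID key to an attributes dict."""
    attrs = dict(extra or {})
    attrs['$opt_bucketing_id'] = account_id
    return attrs

# Two users in the same account share a bucketing ID, so they are put
# into the same bucket and see the same variation.
a = attributes_for('account-42')
b = attributes_for('account-42', {'plan': 'pro'})  # hypothetical attribute
```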

Notes

  • The bucketing ID is not persisted.
  • Bucketing IDs are not compatible with Optimizely's feature rollouts.

QA and debugging

Several tools are available for QAing experiments and debugging SDK behavior. See our article on QA in Optimizely X Full Stack for more information.


Whitelists

Use whitelisting to force users into specific variations for QA purposes.

Whitelists are included in your datafile in the forcedVariations field. You don't need to do anything differently in the SDK; if you've set up a whitelist, experiment activation will force the variation output based on the whitelist you've provided. Whitelisting overrides audience targeting and traffic allocation. Whitelisting does not work if the experiment is not running.


Forced bucketing

The forced bucketing feature allows you to force a user into a variation by calling a method in the SDK. Forced bucketing is particularly useful for experimenting because it allows you to set the variation on the client in real time, eliminating the uncertainty and latency of a datafile download.

Forced bucketing is similar to whitelisting in that it allows you to force a user into a specific variation. It differs from whitelisting in two ways: you set the variation in the SDK rather than in the Optimizely app dashboard (eliminating the dependency on a datafile download), and you are not limited in the number of users who can be forced into a variation, as you are with whitelisting. Note that the forcedVariations field in the datafile relates only to whitelisted variations, not to variations set by this API.

The example code demonstrates how to use the forced bucketing API. An experiment key, user ID, and variation key are passed into the set method. The variation set will be cached and used by all SDK API methods, including activate and track, for that session (i.e., the lifetime of the Optimizely SDK client instance). Variations are overwritten with each set method call. To clear the forced variations so that the normal bucketing flow can occur, pass null as the variation key parameter. A corresponding getter method, which passes in the experiment key and user ID, will allow you to get the variations that you forced a user into for a particular experiment.

Forced bucketing variations will take precedence over whitelisted variations, variations saved in the user profile service (if one exists), and the normal bucketed variation. Events sent to our results backend will proceed as normal when forced bucketing is enabled.
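The set/get/clear semantics described above can be sketched as an in-memory map keyed by experiment and user. This mirrors the behavior of the forced-variation API, not the SDK's actual implementation:

```python
class ForcedVariations:
    """Sketch of forced-bucketing semantics: set, get, overwrite, clear."""

    def __init__(self):
        self._map = {}

    def set_forced_variation(self, experiment_key, user_id, variation_key):
        if variation_key is None:
            # Passing null clears the forced variation so the normal
            # bucketing flow can occur again.
            self._map.pop((experiment_key, user_id), None)
        else:
            # Each call overwrites any previously forced variation.
            self._map[(experiment_key, user_id)] = variation_key

    def get_forced_variation(self, experiment_key, user_id):
        return self._map.get((experiment_key, user_id))

forced = ForcedVariations()
forced.set_forced_variation('my_experiment', 'user123', 'treatment')
```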


Environments

The Optimizely environments feature lets you view and QA your experiment code before you reveal it on your website. Whether or not you already use a formal deployment environment, you can create environments to ensure the quality of your experiment code in an independent setting before you show it to users.

Essentially, Optimizely environments allow a single experiment to be running in some contexts and paused in others. You can turn an experiment on and run it for your team so they can QA it, but leave it turned off for users. Environments enable you to do this without duplicating your code, Optimizely project, or experiments within your project.

Environments are currently available only for Optimizely X Full Stack projects.

Note: Optimizely currently supports SDK Key functionality for the Full Stack SDKs that provide out-of-the-box datafile management: Android, Android TV, iOS, and tvOS. To initialize a Full Stack 2.0 project in any of these SDKs, you use an SDK Key instead of a project ID. An SDK Key is a unique identifier that maps a specific datafile location to its corresponding environment. For more information on using an SDK Key, see the Initialization topic for the applicable SDK.

SDK Key support for the other SDKs is planned for a future release.

For more information and instructions on how to use Environments, see Use environments to QA your experiment code.


Customization

You can optionally provide a number of parameters to your Optimizely client to configure how the SDK behaves:

  • Event dispatcher: Configure how the SDK sends events to Optimizely
  • Logger: Configure how the SDK logs messages when certain events occur
  • Error handler: Configure how errors should be handled in your production code
  • User profile service: Configure what user state is persisted by the SDK
  • Datafile handler: Configure how and how often the datafile is updated

The SDK provides default implementations of event dispatching, logging, and error handling out of the box, but we encourage you to override the default behavior if you have different requirements in your production application. If you'd like to edit the default behavior, refer to the reference implementations in the SDK source code for examples.


Event dispatcher

The Optimizely SDKs make HTTP requests for every impression or conversion that gets triggered. These HTTP requests can be synchronous or asynchronous by default, depending on which SDK language you use. The SDK allows you to override its event dispatching component to optimize for performance, such as providing an asynchronous dispatcher that works well with the environment the SDK is running on or batching the events to minimize the number of HTTP requests that are sent.

These SDKs ship with out-of-the-box asynchronous dispatchers:

  • Java
  • C#
  • JavaScript
  • Objective-C
  • Swift
  • Android

These SDKs ship with out-of-the-box synchronous dispatchers:

  • Python
  • Ruby
  • PHP

The event dispatching function takes an event object with three properties: httpVerb, url, and params, all of which are built out for you in EventBuilder. A POST request should be sent to url with params in the body (be sure to stringify it to JSON) and {content-type: 'application/json'} in the headers.
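A minimal sketch of such a dispatcher using the standard library (the endpoint URL below is illustrative, not a guaranteed Optimizely endpoint; a production dispatcher might queue or batch instead of sending inline):

```python
import json
import urllib.request

def build_event_request(event):
    """Build the HTTP request for an event with httpVerb, url, and params.

    The params are JSON-stringified into the body and the content-type
    header is set to application/json, as described above.
    """
    body = json.dumps(event['params']).encode('utf-8')
    return urllib.request.Request(
        event['url'],
        data=body,  # a Request with a body defaults to POST
        headers={'content-type': 'application/json'},
    )

def dispatch_event(event):
    """Send the event synchronously (network call)."""
    urllib.request.urlopen(build_event_request(event))

event = {
    'httpVerb': 'POST',
    'url': 'https://logx.optimizely.com/log/event',  # illustrative endpoint
    'params': {'visitor_id': 'user123'},
}
req = build_event_request(event)
```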

Logger

The logger logs information about your experiments to help you with debugging.

The log levels vary slightly among SDKs but are generally as follows:

Log Levels

  • CRITICAL: Events that cause the app to crash are logged.
  • ERROR: Events that prevent experiments from functioning correctly (e.g., invalid datafile in initialization and invalid experiment keys or event keys) are logged. The user can take action to correct.
  • WARNING: Events that don't prevent experiments from functioning correctly, but can have unexpected outcomes (e.g., future API deprecation, logger or error handler are not set properly, and nil values from getters) are logged.
  • DEBUG: Any information related to errors that can help us debug the issue (e.g., experiment is not running, user is not in experiment, and the bucket value generated by our helper method) is logged.
  • INFO: Events of significance (e.g., activate started, activate succeeded, tracking started, and tracking succeeded) are logged. This is helpful in showing the lifecycle of an API call.


Error handler

You can provide your own custom error handler logic to standardize across your production environment. This error handler will be called in the following situations:

  • Unknown experiment key referenced
  • Unknown event key referenced

If the error handler is not overridden, a no-op error handler is used by default.


User profile service

Use a user profile service to persist information about your users and ensure variation assignments are sticky. For example, if you are working on a backend website, you can create an implementation that reads and saves user profiles from a Redis or memcached store.

Implementing a user profile service is optional and is only necessary if you want to keep variation assignments sticky even when experiment conditions are changed while it is running (e.g., audiences, attributes, variation pausing, and traffic distribution).


Datafile handler

The default datafile handler is only available in the Android SDK.

IP anonymization

In some countries, you may be required to remove the last block of an IP address to protect the identity of your users. Optimizely allows you to remove the last block of your users' IP addresses before we store results data. This feature is currently available only for iOS, tvOS, and Android projects. Read IP anonymization in Full Stack projects for more information.
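Optimizely performs this removal server-side before storing results data; the sketch below only makes the transformation concrete for an IPv4 address:

```python
def anonymize_ipv4(ip):
    """Zero out the last block of an IPv4 address, e.g. 203.0.113.42 -> 203.0.113.0."""
    octets = ip.split('.')
    octets[-1] = '0'
    return '.'.join(octets)
```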


Notification listeners

Notification listeners trigger a callback function of your choice when certain actions are triggered in the SDK. The most common use case is to build custom integrations that forward decision and conversion events on to other systems. For example, you can send a stream of all experiment activations to an internal data warehouse and join it with other data you have about your users.

Currently, we support two types of listeners:

  • ACTIVATE triggers a callback with the experiment, user ID, attributes, variation, and the impression event.
  • TRACK triggers a callback with the event key, user ID, attributes, and event tags.

Activate and track listeners can be added by accessing the notification listener instance variable off of your Optimizely instance. Define your closure, function, lambda, or class (depending on the language) and add it to the notification center. The listeners are called on every subsequent event. Listeners are added per event type, so there is a listener for activate and a listener for track. The example code shows how to add a listener, remove a listener, remove all listeners of a specific type (e.g., all activate listeners), and remove all listeners.
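The registration pattern described above can be sketched as follows. This mirrors the behavior of a notification center (listeners registered per event type and invoked on each subsequent event), not the SDK's actual classes or method signatures:

```python
class NotificationCenter:
    """Sketch of per-type listener registration and dispatch."""

    def __init__(self):
        self._listeners = {'ACTIVATE': [], 'TRACK': []}

    def add_notification_listener(self, notification_type, callback):
        self._listeners[notification_type].append(callback)

    def clear_notification_listeners(self, notification_type):
        # Remove all listeners of a specific type (e.g., all track listeners).
        self._listeners[notification_type] = []

    def send_notifications(self, notification_type, *args):
        # Called by the client on each activate/track; invokes every
        # registered listener of that type.
        for callback in self._listeners[notification_type]:
            callback(*args)

seen = []
center = NotificationCenter()
center.add_notification_listener(
    'TRACK', lambda event_key, user_id: seen.append((event_key, user_id)))
center.send_notifications('TRACK', 'my_conversion', 'user123')
```

A listener like this could forward each event to an analytics platform or internal data warehouse, as described above.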


Integrations

You can add Optimizely experiment information to third-party analytics platforms using notification Listeners. Measure the impact of your experiments by segmenting your analytics reports using Optimizely experiment and variation keys or IDs.

Below, we suggest implementations for some common analytics platforms, including Amplitude, Google Analytics, Localytics, Mixpanel, and Segment. Use these suggestions as presented or adapt them to meet your specific needs.

Notification listeners also allow the flexibility to implement an integration with a platform that is not listed here.

Read more about integrations for Optimizely X Full Stack.

Experiment and variation identifiers

Experiment keys are unique within an Optimizely project, and variation keys are unique within an experiment. However, experiment keys and variation keys are not guaranteed to be universally unique because they are user-generated. This may become a problem in your analytics reports if you use the same key in multiple Optimizely projects or rename your keys.

For human-friendly strings when absolute uniqueness is not required, use keys. If you need to uniquely identify experiments and variations in a way that will not change when you rename keys, use the automatically generated IDs.


Amplitude


Google Analytics


Localytics


Mixpanel


New Relic


Segment



Variables

Live Variables is a deprecated feature in the iOS and Android SDKs.


Changelog