Smart Config UI
Step-by-Step Configuration Guide
Purpose of this Guide
This document provides a comprehensive, general explanation of how to use Metica’s Smart Config UI. It is intended for anyone responsible for configuring personalization use cases or experiments in Metica.
The guide walks through each part of the UI, what it’s for, what inputs are expected, and how to make good configuration decisions — regardless of the specific product use case.
What Are Smart Configs?
Smart Configs are the central mechanism in Metica for configuring and delivering dynamic content or experiments. They define:
What value(s) to return to the game or app (payload JSONs)
Who should receive those values (context fields, i.e. “segmentation” criteria such as platform, country, or behavioral metrics)
How the system should optimize outcomes (success metric, e.g., maximize revenue)
How personalization is driven (via contextual bandits or A/B logic)
Smart Configs can be used for use cases like offer timing, ad frequency, reward size, pricing experiments, and more.
Glossary of Key Terms
Smart Config: A customizable, versioned configuration that determines what value to return to a user request.
Variant: A specific payload or experience option that a user might receive.
Contextual Bandit: A machine learning model that uses user properties (contexts) to personalize which variant is shown.
Success Metric: A key performance indicator Metica should optimize for (e.g., revenue, engagement).
Attribution Window: The time range after a user is assigned to a variant during which Metica tracks the success metric as a “reward” for the assignment decision.
Assignment Duration: The minimum length of time a user is locked into a variant before reassignment is allowed.
Attribution vs Assignment Duration
These control the timing of optimization and reassignment.
Example: if you want to see a 3-day revenue impact, attribution should be 3 days.
Assignment duration (stickiness) should typically match the attribution window, unless you want dynamic reassignment.
User Properties: Fields that describe user traits or behavior (e.g., platform, session count).
Calculated Attributes: Derived user properties that are generated within Metica (e.g., average session length over 3 days).
Holdout Group: A percentage of users who are not exposed to personalization or experimentation, used as a baseline for comparison.
Eligibility Conditions: Filters used to include or exclude users from a config.
How to Set Up Your First Smart Config
Welcome to the Metica Platform! This guide walks you through setting up your first Use Case within the Smart Config feature.
This guide will help you to:
Check you are sending events
Configure all success metrics required for your use case
Configure all user properties required for your use case
Create a smart config with a Bandit
Test your configuration
Follow these simple steps to get started:
Step 1: Select Your Game
After logging into the Metica Platform:
Navigate to the top-left corner of the dashboard.
Click the dropdown to select your game from the list.
Once selected, the platform will automatically load the menu and data relevant to your chosen game.
Step 2: Access Event Monitoring
To begin monitoring events in your application:
Look at the left-hand navigation menu.
Go to Integration > Monitoring.
This section is where incoming event data is monitored.
Check whether all events you are sending are being received, and whether there are any errors.
If required fields are missing, coordinate with developers to ensure proper integration.
Example: You might confirm that an ad_impression or purchase_event is being received before using it in a metric.
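To make that concrete, here is a rough sketch of what a well-formed ad_impression event might look like. The field names are illustrative assumptions, not the actual Metica event schema; check your integration documentation for the exact format.

```jsonc
// Illustrative only: a hypothetical ad_impression event as it might
// appear in Monitoring. Field names are assumptions, not the actual
// Metica event schema.
{
  "eventType": "ad_impression",
  "userId": "player-12345",
  "eventTime": "2024-05-01T12:34:56Z",
  "customPayload": {
    "ad_type": "rewarded",
    "placement": "level_complete"
  }
}
```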
Step 3: Data Dictionary Set Up
In this step, you configure your data so that it matches the requirements of your use case.
Look at the left-hand navigation menu.
Go to Integration > Data Dictionary.
User Properties
These are used as contexts for personalization and decision-making.
Must be defined here before they can be referenced in a Smart Config.
Set the correct data type (string, number, boolean, timestamp).
Marking a field as optional helps avoid failures when the property is missing from a request.
Recommended: platform, acquisition channel, device memory, early user behavior (e.g., impressions, spend)
Set-Up
Select ‘+New Property’
Choose the Display Name & Property Name
**Ensure the Property name has the same format as the user state event property name.**
Choose the data type, e.g. string, number, or boolean
Confirm
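Conceptually, each property definition captures a display name, a property name that must match the user state event, a type, and whether the field is optional. The sketch below is illustrative only, not the UI’s actual storage format:

```jsonc
// Conceptual sketch of a user property definition. The shape is an
// illustrative assumption, not an actual Metica export format.
{
  "displayName": "Platform",
  "propertyName": "platform",  // must match the user state event property name
  "type": "string",
  "optional": true             // avoids failures when the property is missing
}
```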
Calculated Attributes
Calculated attributes are dynamic user properties, derived from a live calculation applied to incoming player events, that you can use in context fields or conditions.
This allows you to define metrics like:
The player’s average session time (over N days)
The player’s total purchases (last N days)
The player’s ads watched count (last N hours)
Often used when values must be calculated dynamically over a window of time.
Set-Up
Select ‘+New Attribute’
Complete the Attribute Name and Reference fields
**Ensure the Reference name has the same format as the event.**
Choose your Source event
Confirm
Check the following are correct:
Reference Name & Source event
Event Filter
Aggregation Function
Target window of the calculated attribute
Activate (purple button in the top-right corner)
Tip: These are useful when a player’s state can be derived from other existing events without having to duplicate that information.
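As an illustrative sketch, a calculated attribute for a player’s average session length over 3 days might conceptually combine the fields below. The shape and field names are assumptions, not the UI’s actual format:

```jsonc
// Conceptual sketch of a calculated attribute: average session length
// over the last 3 days. Field names are illustrative assumptions.
{
  "attributeName": "Average Session Length (3d)",
  "reference": "avg_session_length_3d",  // must match the format of the event
  "sourceEvent": "session_end",          // hypothetical source event name
  "eventFilter": null,                   // optionally restrict which events count
  "aggregationFunction": "average",
  "targetWindow": { "unit": "days", "value": 3 }
}
```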
Success Metrics
What is the Primary Success Metric for? The primary success metric is the real-time success criterion used by a contextual bandit to evaluate the quality of its past decisions and to inform its next ones. A great success metric strikes the right balance between being immediately measurable (e.g. over the next 3 days) and aligned with a more holistic business goal (e.g. increasing revenue net of cannibalization effects).
How to decide which success metric is mandatory?
Revenue is recommended (in 90%+ of cases), as it is closest to what you ultimately want to optimize.
How does the Secondary Metric work?
It’s used for monitoring and analysis but doesn’t influence the machine learning system. Only the primary success metric drives variant allocation decisions.
Guidance to Choosing Success Metrics
Pick metrics that reflect actual value: revenue, retention, engagement.
Avoid vanity metrics (e.g., number of clicks without downstream value).
Ensure the event is being sent and correctly structured.
Set-Up
Go to the Success Metric Tab
Select the purple ‘+New Metric’ button in the top-right corner
Add the Success Metric Name
Choose the source event, e.g. adRevenue
Confirm
Add the Event filter and Property or Type
Confirm
Check the data entered: Source event
Add Event filter
Select the aggregation function (sum, count, average), and optionally a filter.
Activate (purple button in the top-right corner)
Examples:
Session starts = count of session_start
RV impression count = count of ad_impression where ad_type = rewarded
Can’t add what you need to the Success Metrics? Let your Customer Success Manager know and we can add it for you, e.g.:
Total revenue = sum of iap_event + ad_revenue
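For instance, the “RV impression count” example above conceptually combines a source event, an event filter, and an aggregation function. The sketch below is illustrative, not an actual export format:

```jsonc
// Conceptual sketch of the "RV impression count" metric from the
// examples above. The shape is an illustrative assumption.
{
  "metricName": "RV impression count",
  "sourceEvent": "ad_impression",
  "eventFilter": { "property": "ad_type", "equals": "rewarded" },
  "aggregationFunction": "count"
}
```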
Step 4: Creating a Smart Config
Navigate to the "Smart Configs" tab in the left hand Menu
In the top-right corner, click the purple "+ New Config" button.
Choose your test name, e.g. Interstitial Frequency
Here you will find a blank canvas waiting for you to build your Smart Config logic.
A Smart Config requires:
A Condition
A Holdout
Variants
Default Payload
Here is an EXAMPLE of an Interstitial Frequency Smart Config Logic:
Eligibility Conditions
Conditions define whether a given player should qualify for the config.
Add a Condition:
In this example, we check the player’s age (account tenure in days) and create two bandits based on the answer.
Click to Add your condition
Name your Condition
Add your Condition details
Go back to the Canvas page
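As an illustrative sketch, the tenure split from this example might look like the conditions below. You build these through the condition form rather than raw JSON; the property name and the 7-day threshold are assumptions for the example:

```jsonc
// Illustrative sketch of two conditions splitting players on account
// tenure. The property name and 7-day threshold are assumptions.
[
  { "name": "New players",         "rule": { "property": "player_age_days", "lessThan": 7 } },
  { "name": "Established players", "rule": { "property": "player_age_days", "greaterThanOrEqual": 7 } }
]
```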
Add a Bandit & Variant
After adding a condition, you will see more logic on the canvas.
Designing Variants
Make sure variants differ in ways that are expected to change behavior.
Use real parameters (e.g., wait times, reward multipliers) rather than abstract tags.
You can use:
Simple values (like variant = A)
Parameter sets (multiple fields per variant)
Metica does not auto-generate variants; you must define them.
Add your first variant option into the Payload.
Click on the Payload
Name your Payload
Add your first Variant in the form of a JSON
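For the Interstitial Frequency example, a first variant payload might look like the sketch below. The parameter names and values are illustrative assumptions, not a required schema; use real parameters that your game client reads:

```jsonc
// Illustrative variant payload for the Interstitial Frequency example.
// Parameter names and values are assumptions, not a required schema.
{
  "interstitialCooldownSeconds": 90,
  "maxInterstitialsPerSession": 3
}
```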
Now you need to Add your second Variant:
Here you will be asked what type of Experiment or Test you would like to run:
A/B Test
Contextual Bandit (this guide is based on selecting the Contextual Bandit)
Confirm
Configure your Contextual Bandit
The information you add here refers back to the information you have added into your Data Dictionary.
Primary success metric (what to optimize for)
Attribution window (how long to track the success metric as a reward for the assignment)
Assignment duration (minimum period for which a user stays in a variant)
User contexts (inputs to the model)
Guidelines:
Attribution and assignment duration should usually match.
Use only user properties that are meaningful and not overly fragmented.
Avoid overfitting with too many or high-cardinality fields (e.g., hundreds of UA campaigns).
Here is an EXAMPLE of configuration for the Interstitial Frequency Experiment:
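The shape below is an illustrative sketch; field names and values are assumptions, not the UI’s actual format:

```jsonc
// Illustrative bandit configuration for the Interstitial Frequency
// example. Field names and values are assumptions.
{
  "primarySuccessMetric": "Total revenue",
  "attributionWindow": { "unit": "days", "value": 3 },
  "assignmentDuration": { "unit": "days", "value": 3 },  // usually matches the attribution window
  "userContexts": ["platform", "acquisition_channel", "player_age_days"]
}
```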
Continue adding your logic and Variants for each Bandit.
Add a Default Payload
Defaults are also returned to users who don’t meet eligibility conditions.
You can define what that default is:
An empty payload
One of the test variants
Add a Holdout
Select the Holdout button at the top of the page
Holdout Groups
Holdouts allow you to measure uplift from personalization.
Define a holdout group (e.g., 20% of users) who receive a default experience.
The UI will generate a SEED for you.
You can define what the holdout group receives:
An empty payload
One of the test variants
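Conceptually, a holdout combines a percentage, the UI-generated seed (which keeps assignment deterministic), and the experience the held-out users receive. The sketch below is illustrative only:

```jsonc
// Conceptual sketch of a holdout group. Field names are illustrative
// assumptions; the seed is generated for you by the UI.
{
  "holdoutPercentage": 20,
  "seed": "<generated-by-the-UI>",
  "holdoutPayload": {}  // e.g., an empty payload, or one of the test variants
}
```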
Activation & Versioning
All configs begin in "Draft".
You can edit drafts freely.
Once a config is activated, editing it creates a new version.
Versions are tracked independently to allow rollback or experimentation changes.
Once a config is live, the SDK can begin calling it and receiving variant assignments.
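To give a feel for the end result, a variant assignment delivered to the client might look roughly like the sketch below. This is a hypothetical shape, not the actual Metica SDK response schema:

```jsonc
// Hypothetical shape of a config response delivered to the SDK.
// Illustrative only; not the actual Metica response schema.
{
  "interstitialFrequency": {
    "interstitialCooldownSeconds": 90,
    "maxInterstitialsPerSession": 3
  }
}
```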