Kullback–Leibler divergence (KLD) and NFT Economics
Ok, so you've launched your cool NFT game that you'd been working on for years. The platform is ready, the tokens are minted, and the users are hyped and coming in droves.
Scaling a product sustainably is not easy - I mean, just listen to Tony Robbins. And while the token economics have been drafted in the whitepaper, the users have to actually behave similarly to what you'd envisioned. For instance, suppose you have an RPG where users acquire experience and level up. In order to level up, they need to earn in-game artifacts, which can also be bought in the marketplace (where you charge transaction fees - here comes your business model). After some thought you have drafted the approximate path of users' level progress and even ran a Monte Carlo simulation to get a sense of the possible distribution.
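As a rough illustration of that step, a Monte Carlo sketch of the level-up simulation might look like the following; the daily-gain distribution and its parameters are assumptions for the example, not the actual model (the 240/280 bands are the ones used in the conclusion below):

```python
import numpy as np

# Hypothetical Monte Carlo sketch: simulate daily experience gains for many users
# and look at the distribution of days needed to move from the free to the paying band.
rng = np.random.default_rng(42)

N_USERS = 10_000        # simulated users
START_XP = 240          # free band
TARGET_XP = 280         # paying band
MEAN_DAILY_GAIN = 1.6   # assumed average experience gained per day
DAILY_GAIN_STD = 0.8    # assumed spread of daily gains
HORIZON_DAYS = 60       # simulation horizon

# Each user's daily gains are drawn from a normal distribution truncated at zero.
daily_gains = np.clip(
    rng.normal(MEAN_DAILY_GAIN, DAILY_GAIN_STD, size=(N_USERS, HORIZON_DAYS)), 0, None
)
xp_paths = START_XP + np.cumsum(daily_gains, axis=1)

# First day on which each simulated user reaches the paying band.
days_to_target = (xp_paths >= TARGET_XP).argmax(axis=1) + 1

print(f"median days to reach {TARGET_XP} XP: {np.median(days_to_target):.0f}")
print(f"90th percentile: {np.percentile(days_to_target, 90):.0f}")
```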
Having finalised all the statistical models and simulations, you've come to the following conclusion: an average user will upgrade from the free band of 240 experience to a paying band of 280 within 25 days in game, all supported by the following graph:
A bit about the graph, and how the initial problem of users levelling up their experience is reduced to a process control loop.
What Is A Process Control Loop?
We'll briefly describe the elements of a basic process control loop and then the data it generates. A simple single-input-single-output (SISO) feedback control loop consists of the following:
Process Input
An outside variable that affects a process. In a control loop, you must be able to control and manipulate this variable. Classically it is something like the steam flow into a tank controlling the temperature of the fluid leaving the tank. In our situation it could be the number of initial tokens or a specific meta-build of the character.
Process Output/Value (PV)
A characteristic of the process that affects the outside world. In a process control loop, this must be measurable and must vary in a consistent way with the process input. In the tank temperature control example, the temperature of the fluid exiting the tank would be the process output. In the NFT world, it is the actual experience gained by the character.
Setpoint (SP)
The desired value for the process output. In the tank example, this is the desired temperature of the fluid. In the NFT example, it is the desired experience gained by the character.
Controller
The hardware and software that compare the measured process output to the setpoint and calculate whether the process input needs to change and by how much. The controller then sends a signal (or incentive) to an actuator to make an adjustment to the process input, if necessary. In the tank example, this could be an actuator on a control valve on the steam line to the tank. In the NFT world, it is a notification about a sale or special event to spur the user to level up.
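To make the mapping concrete, here is a minimal sketch of a single iteration of such a loop in the NFT setting; the tolerance value and the trivial threshold "controller" are illustrative assumptions, not a real incentive engine:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class LoopState:
    setpoint: float       # SP: desired experience at this point in time
    process_value: float  # PV: experience actually gained by the character

def controller(state: LoopState, tolerance: float = 5.0) -> Optional[str]:
    """Compare PV to SP and decide whether to nudge the process input.

    In a plant this would drive an actuator; here the 'actuation' is an
    incentive such as a sale or special-event notification.
    """
    error = state.setpoint - state.process_value
    if error > tolerance:
        return "send level-up incentive (sale / special event)"
    return None  # PV is within the allowed margin, no action needed

# One tick of the loop: the user is lagging behind the predicted curve.
print(controller(LoopState(setpoint=260.0, process_value=248.0)))
```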
So back to our graph: things look great, the actual user experience (PV) tracks the predicted value (SP) closely and, most importantly, stays within the allowed margin of error.
All goes well, until it doesn't
Reality usually hits pretty fast and pretty hard. Within weeks you realise users don't level up to the extent the model has been projecting, and it also takes them much longer to reach the desired level. You come back to the research board and compare the model to the real values:
The graph shows exactly what the product team has been complaining about - neither fast enough nor high enough. Not only does the predicted value (blue line) go above the upper margin of the actual value (PV), it even goes beyond the upper margin of the setpoint (SP).
If you cannot measure it, you cannot fix it
How does one measure the difference between two lines? As there are many users with many upgrade scenarios, let's rephrase this: how does one measure the difference between two sets of lines? And since the nominal error for a level-up scenario from 100 to 150 would probably be smaller than for one from 250 to 300, a better way would be to measure the statistical difference between these sets rather than the actual values.
So, there needs to be a way to measure the difference (or dissimilarity) between two sets of scenarios (or probability distributions). There is - it's called the Kullback-Leibler divergence.
Kullback–Leibler divergence (KLD)
A more detailed definition would be: a measure of the relative difference between two probability distributions for a given random variable or set of events. KL divergence is also known as relative entropy. It can be calculated with the following formula:
\(D_{KL}(P\,\|\,Q)=\sum_{x\in \mathcal X} P(x)\log\left(\frac{P(x)}{Q(x)}\right) \)
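To see the formula in action, the sum can be computed directly or with scipy's entropy function, which returns the KL divergence when given two distributions; the two distributions below are toy values:

```python
import numpy as np
from scipy.stats import entropy

# Toy example: P is the observed distribution of level-up outcomes,
# Q is the distribution the model predicted (values are made up).
p = np.array([0.10, 0.40, 0.50])
q = np.array([0.80, 0.15, 0.05])

# Direct implementation of the sum above (natural logarithm, so the result is in nats).
kl_manual = np.sum(p * np.log(p / q))

# scipy.stats.entropy(pk, qk) computes the same relative entropy.
kl_scipy = entropy(p, q)

print(kl_manual, kl_scipy)  # both ≈ 1.336
```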
But before we actually use it, we still need to measure the difference between two scenarios. This can be achieved by, for instance, taking a weighted sum of a duration error, calculated from the time the predicted scenario spent outside the margin bands, and a deviation error, calculated from how far the predicted scenario deviated from the original one. For normalisation weights \( w_1, w_2 \in (0,1] \):
\(E_{overall} = E_{duration}\cdot w_{1}+E_{deviation}\cdot(1-w_{1}) \)
\(E_{duration} = 1 -\frac{\text{time outside}}{\text{total scenario time}} \)
\(E_{deviation} = \frac{\text{margin area}}{\text{margin area} + w_{2}\cdot\text{deviated area}} \)
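Here is a minimal sketch of these three formulas in code; the variable names mirror the quantities above, and the example inputs at the bottom are made up:

```python
def scenario_error(
    time_outside: float,   # time the scenario spent outside the margin bands
    total_time: float,     # total scenario time
    margin_area: float,    # area enclosed by the margin bands
    deviated_area: float,  # area between the scenario and the margin bands
    w1: float = 0.5,       # weight of the duration error, in (0, 1]
    w2: float = 1.0,       # normalisation weight for the deviated area, in (0, 1]
) -> float:
    """Weighted combination of the duration and deviation errors defined above."""
    e_duration = 1 - time_outside / total_time
    e_deviation = margin_area / (margin_area + w2 * deviated_area)
    return e_duration * w1 + e_deviation * (1 - w1)

# Hypothetical scenario: 8 of 25 days spent outside the bands.
print(scenario_error(time_outside=8, total_time=25, margin_area=120, deviated_area=45))
```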
Implementation notes:
- Areas are calculated by integrating the datasets, which with pandas boils down to element-wise +/- operations on the columns and a sum.
- The divergence is nothing but the relative entropy of the given probability values and is easily computed with the scipy library's entropy function (see the sketch after these notes).
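Putting both notes together, a sketch of the area and divergence computations could look like this; the column layout and the toy numbers are assumptions, and one sample per day is assumed so that integration reduces to a plain sum:

```python
import numpy as np
import pandas as pd
from scipy.stats import entropy

# Hypothetical layout: one row per in-game day, with the predicted curve (SP),
# the observed curve (PV) and the upper/lower margin bands. Values are toy data.
df = pd.DataFrame({
    "day": np.arange(25),
    "sp": np.linspace(240, 280, 25),      # predicted experience curve
    "pv": np.linspace(240, 268, 25),      # actual experience curve (lagging behind)
    "upper": np.linspace(243, 283, 25),   # upper margin band
    "lower": np.linspace(237, 277, 25),   # lower margin band
})

# With one sample per day, integrating reduces to element-wise +/- and a sum.
margin_area = (df["upper"] - df["lower"]).sum()
outside = (df["lower"] - df["pv"]).clip(lower=0) + (df["pv"] - df["upper"]).clip(lower=0)
deviated_area = outside.sum()
time_outside = int((outside > 0).sum())   # days the curve spent outside the bands

# KL divergence between the normalised predicted and observed curves.
p = df["pv"] / df["pv"].sum()
q = df["sp"] / df["sp"].sum()

print(f"margin area: {margin_area:.1f}, deviated area: {deviated_area:.1f}, "
      f"days outside: {time_outside}, KL divergence: {entropy(p, q):.5f}")
```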
Conclusion
While writing secure smart contracts is definitely important for overall on-chain development, understanding the token economics and being able to accurately measure their dynamics is essential for scaling a successful blockchain project. And with a bit of math, one can now measure the error of the initial assumptions, adjust them and, more importantly, update the NFT economics to reflect the desired business case.