The transcendental number Pi has been stuck in my mind so much lately, along with the pursuit of a perfect compensation plan, that I just can't stop thinking about it. My hunch is that in the combination of Pi and the golden ratio lies the math for a sustainable business model. Logic dictates that there must be a perfect formula, found in nature, for growing a sustainable business enterprise that is robust and healthy and supports many more well-paying jobs.
Breakthroughs come from trying to mimic nature in ways that were previously not considered: using Pi as the factor and the Fibonacci sequence for the revenue distribution model, borrowing the sunflower for inspiration, and realizing that all participants in the payment matrix are equal seeds, progressing from the middle toward higher pay.
Pi the transcendental number
I am seeking the wisdom to solve a complex question with a simple answer. The question is how to create a sustainable business model based on a $9.99 monthly subscription, whereby the compensation from each sale pays it forward to build the company, compensates the sales and marketing organization, and then pays the cost of a service that is better than any competitor's.
Imagine Pi as the compensation, distributed in a golden-ratio matrix payout to affiliates. Visualize the sunflower, and imagine that the closer a seed gets to the outside edge, the higher its monthly compensation. Consider that all seeds are equal owners; now add a monthly service so beneficial to your life that you become a part owner by paying $9.99 per month, Pi dollars of which is commission, distributed by the golden ratio to the matrix that surrounds the new member.
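To make the idea concrete, here is a rough numerical sketch of how I currently picture it. Everything here is an assumption for illustration: the number of "rings" in the matrix, and the use of Fibonacci numbers (whose ratios approach the golden ratio) as payout weights.

```python
# Hypothetical sketch: split a pi-dollar commission across matrix "rings"
# using Fibonacci numbers as golden-ratio weights. The ring count and
# weighting scheme are assumptions, not a finished plan.
import math

SUBSCRIPTION = 9.99
COMMISSION = math.pi          # ~$3.14 of each subscription is paid out

def fibonacci(n):
    """First n Fibonacci numbers: 1, 1, 2, 3, 5, ..."""
    fibs = [1, 1]
    while len(fibs) < n:
        fibs.append(fibs[-1] + fibs[-2])
    return fibs[:n]

def ring_payouts(rings=5):
    """Distribute the commission over rings; outer rings carry larger
    Fibonacci weights and so earn more, like seeds near the sunflower's
    edge."""
    weights = fibonacci(rings)
    total = sum(weights)
    return [COMMISSION * w / total for w in weights]

payouts = ring_payouts()
print([round(p, 4) for p in payouts])   # inner ring first, outer ring last
print(round(sum(payouts), 2))           # → 3.14, the full Pi commission
```

The payouts grow ring by ring because consecutive Fibonacci weights grow by (roughly) the golden ratio, which is the "seeds progress toward higher pay" intuition in miniature.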
I’m thinking out loud here and will return to continue working on this, like John Forbes Nash Jr. and his ideas on the Nash equilibrium.
Solution concept
Selected equilibrium refinements in game theory. Arrows point from a refinement to the more general concept (i.e., ESS ⊂ Proper).
In game theory, a solution concept is a formal rule for predicting how a game will be played. These predictions are called “solutions”, and describe which strategies will be adopted by players and, therefore, the result of the game. The most commonly used solution concepts are equilibrium concepts, most famously Nash equilibrium.
Many solution concepts, for many games, will result in more than one solution. This puts any one of the solutions in doubt, so a game theorist may apply a refinement to narrow down the solutions. Each successive solution concept presented in the following improves on its predecessor by eliminating implausible equilibria in richer games.
Nash equilibria in a payoff matrix
There is an easy numerical way to identify Nash equilibria on a payoff matrix. It is especially helpful in two-person games where players have more than two strategies, since formal analysis may become too long. This rule does not apply to the case where mixed (stochastic) strategies are of interest. The rule goes as follows: if the first payoff number in the cell's payoff pair is the maximum of the cell's column, and the second number is the maximum of the cell's row, then the cell represents a Nash equilibrium.
We can apply this rule to a 3×3 matrix:

| Player 1 \ Player 2 | Option A | Option B | Option C |
|---|---|---|---|
| Option A | 0, 0 | 25, 40 | 5, 10 |
| Option B | 40, 25 | 0, 0 | 5, 15 |
| Option C | 10, 5 | 15, 5 | 10, 10 |
Using the rule, we can very quickly (much faster than with formal analysis) see that the Nash equilibria cells are (B,A), (A,B), and (C,C). Indeed, for cell (B,A) 40 is the maximum of the first column and 25 is the maximum of the second row. For (A,B) 25 is the maximum of the second column and 40 is the maximum of the first row. Same for cell (C,C). For other cells, either one or both of the duplet members are not the maximum of the corresponding rows and columns.
This said, the actual mechanics of finding equilibrium cells are straightforward: find the maximum of a column's first payoffs, and in that cell check whether the second member of the pair is the maximum of its row. If both conditions are met, the cell represents a Nash equilibrium. Check all columns this way to find all NE cells. An N×N matrix may have between 0 and N×N pure-strategy Nash equilibria.
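The mechanics above translate directly into a few lines of code. This is a sketch of the rule applied to the 3×3 example table, with the payoff pairs hard-coded from it:

```python
# Find pure-strategy Nash equilibria in a bimatrix game using the rule
# above: a cell is an equilibrium iff its first payoff is the maximum of
# its column (the row player cannot improve by switching rows) and its
# second payoff is the maximum of its row (the column player cannot
# improve by switching columns).

payoffs = [  # (row player, column player) payoffs from the 3x3 example
    [(0, 0),   (25, 40), (5, 10)],
    [(40, 25), (0, 0),   (5, 15)],
    [(10, 5),  (15, 5),  (10, 10)],
]

def pure_nash_equilibria(matrix):
    n_rows, n_cols = len(matrix), len(matrix[0])
    equilibria = []
    for i in range(n_rows):
        for j in range(n_cols):
            p1, p2 = matrix[i][j]
            col_max = max(matrix[r][j][0] for r in range(n_rows))
            row_max = max(matrix[i][c][1] for c in range(n_cols))
            if p1 == col_max and p2 == row_max:
                equilibria.append((i, j))
    return equilibria

labels = "ABC"
print([(labels[i], labels[j]) for i, j in pure_nash_equilibria(payoffs)])
# → [('A', 'B'), ('B', 'A'), ('C', 'C')]
```

The output matches the three equilibrium cells identified by hand: (A,B), (B,A), and (C,C).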
Stability
The concept of stability, useful in the analysis of many kinds of equilibria, can also be applied to Nash equilibria.
A Nash equilibrium for a mixed-strategy game is stable if a small change (specifically, an infinitesimal change) in probabilities for one player leads to a situation where two conditions hold:
- the player who did not change has no better strategy in the new circumstance
- the player who did change is now playing with a strictly worse strategy.
If these cases are both met, then a player with the small change in their mixed strategy will return immediately to the Nash equilibrium. The equilibrium is said to be stable. If condition one does not hold then the equilibrium is unstable. If only condition one holds then there are likely to be an infinite number of optimal strategies for the player who changed.
In the “driving game” example (where two drivers must each choose a side of the road) there are both stable and unstable equilibria. The equilibria involving mixed strategies with 100% probabilities are stable. If either player changes their probabilities slightly, they will both be at a disadvantage, and their opponent will have no reason to change their strategy in turn. The (50%, 50%) equilibrium is unstable: if either player changes their probabilities, then the other player immediately has a better strategy at either (0%, 100%) or (100%, 0%).
Stability is crucial in practical applications of Nash equilibria, since the mixed strategy of each player is not perfectly known, but has to be inferred from statistical distribution of their actions in the game. In this case unstable equilibria are very unlikely to arise in practice, since any minute change in the proportions of each strategy seen will lead to a change in strategy and the breakdown of the equilibrium.
The Nash equilibrium defines stability only in terms of unilateral deviations. In cooperative games such a concept is not convincing enough. Strong Nash equilibrium allows for deviations by every conceivable coalition.^{[15]} Formally, a strong Nash equilibrium is a Nash equilibrium in which no coalition, taking the actions of its complements as given, can cooperatively deviate in a way that benefits all of its members.^{[16]} However, the strong Nash concept is sometimes perceived as too “strong” in that the environment allows for unlimited private communication. In fact, strong Nash equilibrium has to be Pareto efficient. As a result of these requirements, strong Nash is too rare to be useful in many branches of game theory. However, in games such as elections with many more players than possible outcomes, it can be more common than a stable equilibrium.
A refined Nash equilibrium known as coalition-proof Nash equilibrium (CPNE)^{[15]} occurs when players cannot do better even if they are allowed to communicate and make “self-enforcing” agreements to deviate. Every correlated strategy supported by iterated strict dominance and on the Pareto frontier is a CPNE.^{[17]} Further, it is possible for a game to have a Nash equilibrium that is resilient against coalitions smaller than a specified size, k. CPNE is related to the theory of the core.
Finally, in the 1980s, building with great depth on such ideas, Mertens-stable equilibria were introduced as a solution concept. Mertens-stable equilibria satisfy both forward induction and backward induction. In a game-theory context, “stable equilibria” now usually refers to Mertens-stable equilibria.
So, needless to say, my ideas are all over the map on this “Beautiful Mind” business model and compensation plan, but I feel like I’ve stepped closer and am encouraged to take more steps on my quest to develop a business model based on the golden ratio. More in the future… Meanwhile, it’s getting late; I need to stop listening to Psychedelic Dub/Reggae (for now) and get some sleep.
Photo credit on Pi is missing and will be here when I find it.