For anyone familiar with game theory: I have a quick question about the model of alternating-offers bargaining with a discount factor (the usual notation for it is $\displaystyle \delta$).

Let's say we have two players whose payoffs are determined by utility functions u(x) and v(1-x), where x is the demand in one round of the game. A discount factor is applied to the bargaining set (the set of utility pairs), and, in my specific case, the discount factor represents the probability of reaching an agreement (i.e. the utility of what the players can get after each round of rejection is scaled down by multiplication with the discount factor $\displaystyle \delta$).

Now, say I need to calculate the subgame perfect Nash equilibrium for two rounds of the game (I demands x and offers 1-x to II; II accepts or rejects; if she rejects, she can make a counteroffer to I; if I rejects that, both get zero).

The utility functions are $\displaystyle u(x)=x, v(y)=1-(1-y)^2$ for I and II, respectively.

To make II indifferent in the first round between accepting and rejecting, I must offer the equivalent of what II can get in the second round, which, for me, is the discounted utility $\displaystyle \delta v(y)=\delta(1-(1-y)^2)$. So, in the first round, I demands x, leaving 1-x for II, such that II's payoffs in the first and second rounds are the same:

$\displaystyle v(1-x)=\delta{v(y)}$ where y is the demand by II in the second round.

Since in the 2nd round she will demand the whole 'pie' (I accepts any offer in the last round, as rejecting gives him zero anyway), y=1:

$\displaystyle \delta{v(1)}=\delta\left(1-(1-1)^2\right)=\delta$, thus giving me the equation

$\displaystyle v(1-x)=\delta$, i.e. $\displaystyle 1-(1-(1-x))^2=\delta$,

which I then solve for x.
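To double-check the algebra, here is a small Python sketch (the helper name `spne_demand` is my own, just for illustration) that verifies the indifference condition $v(1-x)=\delta v(1)$. Since $v(1-x)=1-x^2$ and $v(1)=1$, the solution should be $x=\sqrt{1-\delta}$:

```python
# Player II's utility over her share s of the pie: v(s) = 1 - (1 - s)^2
def v(s):
    return 1 - (1 - s) ** 2

def spne_demand(delta):
    """Player I's round-1 demand x solving v(1 - x) = delta * v(1).

    Since v(1 - x) = 1 - x**2 and v(1) = 1, this gives x = sqrt(1 - delta).
    """
    return (1 - delta) ** 0.5

# Check the indifference condition for a sample discount factor
delta = 0.5
x = spne_demand(delta)
print(round(v(1 - x), 12) == round(delta * v(1), 12))  # True
```

For $\delta=0.5$ this gives $x=\sqrt{0.5}\approx 0.707$, so II is left with about 0.293 in the first round, which she values exactly as much as her discounted second-round payoff.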

Is that the right reasoning, especially in getting to the last equation? And does it mean that the utility v(1)=1 always?