Most of the time is taken calculating the contour graph right at the end; the document takes about an hour to run without cache for n=10.

To cite the hyper2 package in publications, please use Hankin (2017).

Objects of class hyper3 are a generalization of hyper2 objects that allow the brackets to contain weighted probabilities. Likelihood functions are defined on non-negative \(p_1,\ldots, p_n\) subject to the unit-sum constraint \(\sum p_i=1\). Given known weights \(w^i_j\geq 0\) with \(1\leq i\leq n\) we have

\[\mathcal{L}\left(p_1,\ldots, p_n\right)=\prod_j\left(\sum_{i=1}^n w^i_jp_i\right)^{n_j}.\]
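This weighted-bracket likelihood is easy to evaluate directly; a minimal base-R sketch, in which the weight matrix W and the powers n are invented for illustration:

```r
## Direct evaluation of L(p) = prod_j (sum_i w^i_j p_i)^(n_j), on the log
## scale.  W[j, i] holds the weight w^i_j of player i in bracket j; n[j]
## is the power n_j (negative powers are allowed, as in hyper2 objects).
loglik_brackets <- function(p, W, n) {
    sum(n * log(W %*% p))
}
W <- rbind(c(2, 1),   # bracket 1: 2*p1 + p2
           c(1, 1))   # bracket 2: p1 + p2
n <- c(3, -3)         # powers
loglik_brackets(c(0.4, 0.6), W, n)   # 3*log(1.4) - 3*log(1.0)
```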

As a motivating example, suppose two teams (players) with Bradley-Terry strengths \(p_1,p_2\) play football where we quantify the home-ground advantage with a term \(\lambda\). If \(p_1\) plays at home \(A+B\) times with \(A\) wins and \(B\) losses, and plays away [so \(p_2\) is at home] \(C+D\) times with \(D\) wins and \(C\) losses, then a sensible likelihood function might be

\[\mathcal{L}(p_1,p_2,\lambda;A,B,C,D)= \underbrace{ \overbrace{\left(\frac{\lambda p_1}{\lambda p_1 + p_2}\right)^{A}}^{\mbox{$A$ home wins for $p_1$}}\quad \overbrace{\left(\frac{p_2 }{\lambda p_1 + p_2}\right)^{B}}^{\mbox{$B$ away wins for $p_2$}} }_\mbox{$p_1$ plays at home to $p_2$ $A+B$ times}\quad \underbrace{ \overbrace{\left(\frac{\lambda p_2}{p_1 + \lambda p_2}\right)^{C}}^{\mbox{$C$ home wins for $p_2$}}\quad \overbrace{\left(\frac{ p_1}{p_1 + \lambda p_2}\right)^{D}}^{\mbox{$D$ away wins for $p_1$}} }_\mbox{$p_2$ plays at home to $p_1$ $C+D$ times} \]

where we understand that \(p_1+p_2=1\), \(p_1,p_2,\lambda\geqslant 0\). Elementary techniques allow us to identify a maximum and we find

\[ \hat{p_1}=\frac{\sqrt{AD}}{\sqrt{AD}+\sqrt{BC}}\qquad \hat{p_2}=\frac{\sqrt{BC}}{\sqrt{AD}+\sqrt{BC}}\qquad \hat{\lambda}=\sqrt{\frac{AC}{BD}} \]

Note that the evaluate [the maximum likelihood point] is unchanged if \(A,B,C,D\) are all multiplied by a positive constant.
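The closed forms can be checked numerically; a base-R sketch (the counts A, B, C, D are illustrative):

```r
## Numerical check of the closed-form maximum of the football likelihood.
A <- 9; B <- 5; C <- 7; D <- 3
negloglik <- function(theta) {   # theta = c(p1, lambda); p2 = 1 - p1
    p1 <- theta[1]; p2 <- 1 - theta[1]; lambda <- theta[2]
    -(A * log(lambda * p1 / (lambda * p1 + p2)) +
      B * log(p2 / (lambda * p1 + p2)) +
      C * log(lambda * p2 / (p1 + lambda * p2)) +
      D * log(p1 / (p1 + lambda * p2)))
}
fit <- optim(c(0.5, 1), negloglik, method = "L-BFGS-B",
             lower = c(1e-6, 1e-6), upper = c(1 - 1e-6, 100))
fit$par                                        # numerical optimum
c(sqrt(A * D) / (sqrt(A * D) + sqrt(B * C)),   # closed-form p1hat
  sqrt(A * C / (B * D)))                       # closed-form lambdahat
```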

1 Aside: hyper2 analysis of the same scoreline

Now the likelihood function would be

\[\mathcal{L}(p_1,p_2,\lambda;A,B,C,D)= \underbrace{ \overbrace{\left(\frac{p_1+M}{p_1 + p_2+M}\right)^{A}}^{\mbox{$A$ home wins for $p_1$}}\quad \overbrace{\left(\frac{p_2 }{p_1 + p_2+M}\right)^{B}}^{\mbox{$B$ away wins for $p_2$}} }_\mbox{$p_1$ plays at home to $p_2$ $A+B$ times}\quad \underbrace{ \overbrace{\left(\frac{p_2+M}{p_1 + p_2+M}\right)^{C}}^{\mbox{$C$ home wins for $p_2$}}\quad \overbrace{\left(\frac{p_1 }{p_1 + p_2+M}\right)^{D}}^{\mbox{$D$ away wins for $p_1$}} }_\mbox{$p_2$ plays at home to $p_1$ $C+D$ times} \]

The maximum would be

\[ \hat{p_1}=\frac{D}{C+D}\qquad \hat{p_2}=\frac{B}{A+B}\qquad \hat{M}=\frac{AC-BD}{(A+B)(C+D)} \]

although it is not clear to me what the constrained optimal point [that is, requiring \(M\geq 0\)] is.

1.1 Numerical analysis

A <- 9
B <- 5
C <- 7
D <- 3

(p1hat <- sqrt(A*D)/(sqrt(A*D) + sqrt(B*C)))
## [1] 0.46761
(p2hat <- sqrt(B*C)/(sqrt(A*D) + sqrt(B*C)))
## [1] 0.53239
(lam_hat <- sqrt((A*C)/(B*D)))
## [1] 2.0494

\[ \hat{p_1}=\frac{\sqrt{27}}{\sqrt{27} + \sqrt{35}}\simeq 0.47\qquad \hat{p_2}=\frac{\sqrt{35}}{\sqrt{27} + \sqrt{35}}\simeq 0.53\qquad \hat{\lambda}=\sqrt{\frac{AC}{BD}}=\sqrt{\frac{63}{15}}\simeq 2.05. \]

Moving to the monster formulation we have

\[ \hat{p_1}=\frac{3}{7+3} = 0.3\qquad \hat{p_2}=\frac{5}{9+5}\simeq 0.36\qquad \hat{M}=\frac{63-15}{14\cdot 10} = \frac{48}{140}\simeq 0.34 \]
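These closed forms can be checked by direct maximization over the simplex; a base-R sketch using constrOptim(), with A, B, C, D being the illustrative counts used in the numerical analysis below:

```r
## Direct check of the monster maximum subject to p1 + p2 + M = 1.
A <- 9; B <- 5; C <- 7; D <- 3
negloglik <- function(theta) {   # theta = c(p1, p2); M = 1 - p1 - p2
    p1 <- theta[1]; p2 <- theta[2]; M <- 1 - p1 - p2
    -(A * log(p1 + M) + B * log(p2) + C * log(p2 + M) + D * log(p1))
}
ui <- rbind(c(1, 0), c(0, 1), c(-1, -1))   # p1 > 0, p2 > 0, p1 + p2 < 1
ci <- c(0, 0, -1)
fit <- constrOptim(c(0.3, 0.3), negloglik, grad = NULL, ui = ui, ci = ci)
c(p1 = fit$par[1], p2 = fit$par[2], M = 1 - sum(fit$par))
c(D / (C + D), B / (A + B), (A * C - B * D) / ((A + B) * (C + D)))  # closed form
```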

Keeping \(A=9, B=5, C=7, D=3\), and \(\lambda=2.05\) [assumed for the moment to be known; estimation techniques for \(\lambda\) are discussed later], we can represent this information in a standard format:

library("hyper2",quietly=TRUE)
M <- matrix(c(NA, A+B*1i ,C+D*1i, NA),2,2,byrow=TRUE)
teams <- c("p1","p2")
dimnames(M) <- list("@home" = teams,"@away"=teams)
print(M)
##      @away
## @home   p1   p2
##    p1   NA 9+5i
##    p2 7+3i   NA

Above, object M has real parts representing home wins and imaginary parts representing away wins. Appropriate package idiom to specify a likelihood function is to use the bespoke function home_away3():

(H <- home_away3(M,lambda=2.05))
## log( (p1=1)^3 * (p1=1, p2=2.05)^-10 * (p1=2.05)^9 * (p1=2.05, p2=1)^-14
## * (p2=1)^5 * (p2=2.05)^7)

We may now estimate \(p_1\) and \(p_2\) using maximum likelihood, maxp() in package idiom:

maxp(H)
##      p1      p2 
## 0.46759 0.53241

Further, we can test whether the players are in fact of equal strength:

equalp.test(H)
## 
##  Constrained support maximization
## 
## data:  H
## null hypothesis: p1 = p2
## null estimate:
##  p1  p2 
## 0.5 0.5 
## (argmax, constrained optimization)
## Support for null:  -15.278 + K
## 
## alternative hypothesis:  sum p_i=1 
## alternative estimate:
##      p1      p2 
## 0.46759 0.53241 
## (argmax, free optimization)
## Support for alternative:  -15.233 + K
## 
## degrees of freedom: 1
## support difference = 0.044726
## p-value: 0.76487

This shows that we fail to reject the null that \(p_1=p_2\), at least for this value of \(\lambda\). Observe that a naive analysis would have \(p_1\) winning \(w=A+C=16\) games and losing \(l=B+D=8\) games, from which we would obtain \(\hat{p}_1=16/24\); a likelihood ratio for \(H_0\colon p_1=0.5\) would be

\[\frac { {w+l\choose w}\left(\frac{w}{w+l}\right)^w\left(\frac{l}{w+l}\right)^l }{ {w+l\choose w}\left(\frac{1}{2}\right)^{w+l} }=\frac{2^{w+l}w^wl^l}{(w+l)^{w+l}}\simeq 3.89, \]

not significant, the support falling short of the conventional two units.
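A base-R check of this ratio, computed on the log scale to avoid large intermediate powers:

```r
## Naive binomial likelihood ratio for H0: p1 = 1/2, with w wins, l losses.
w <- 16; l <- 8
logLR <- (w + l) * log(2) + w * log(w) + l * log(l) - (w + l) * log(w + l)
exp(logLR)                                      # the likelihood ratio
pchisq(2 * logLR, df = 1, lower.tail = FALSE)   # Wilks' approximation
```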

Now, how to estimate \(\lambda\)?

f <- function(lambda){maxp(home_away3(M,lambda=lambda),give=TRUE)$value}
f(2)
## [1] -15.235
f(2.1)
## [1] -15.235
lam <- seq(from=1,to=10,len=17)
Supp <- sapply(lam,f)
plot(log(lam),Supp-max(Supp),type="b")

Or even

op <- optimize(f,interval=c(2,3),maximum=TRUE)
op
## $maximum
## [1] 2.0494
## 
## $objective
## [1] -15.233

We can proceed in two ways. Firstly we can use \(\hat{\lambda}\) directly:

equalp.test(home_away3(M,lambda=op$maximum))
## 
##  Constrained support maximization
## 
## data:  home_away3(M, lambda = op$maximum)
## null hypothesis: p1 = p2
## null estimate:
##  p1  p2 
## 0.5 0.5 
## (argmax, constrained optimization)
## Support for null:  -15.278 + K
## 
## alternative hypothesis:  sum p_i=1 
## alternative estimate:
##      p1      p2 
## 0.46761 0.53239 
## (argmax, free optimization)
## Support for alternative:  -15.233 + K
## 
## degrees of freedom: 1
## support difference = 0.044688
## p-value: 0.76497

The flaw in this analysis is that it is conditional on the estimated value of \(\lambda\) [not too serious a flaw, in my opinion]. Secondly, we can perform a fully two-dimensional analysis, which allows for the possibility that the evaluate has a value of \(\lambda\) different from \(\hat{\lambda}\):

probs <- seq(from=0.1,to=0.8,len=2)  # increase len for a finer grid [slow; see the note at the top]
jj <- as.matrix(expand.grid(lam,probs))
S <- rep(NA,nrow(jj))
for(i in seq_len(nrow(jj))){
    lambda <- jj[i,1]
    p1 <- jj[i,2]
    p <- c(p1,1-p1)
    names(p) <- c("p1","p2")
    S[i] <- loglik(p=p,H=home_away3(M,lambda=lambda),log=TRUE)
}
S <- matrix(S,length(lam),length(probs))
contour(lam,probs,S,levels=seq(from=-20,to=-10,by=0.5),xlab="lambda",ylab="p1")
abline(h=0.5)

2 Ternary representation

library(Ternary)

par(mar = rep(0.2, 4))
TernaryPlot(alab = "p1", blab = "p2", clab = "M")

f <- function(p1,p2,M) {
    pmax(-20, log(M+p1)*A + log(p2)*B - log(M+p1+p2)*(A+B+C+D) + log(p2+M)*C + log(p1)*D)
}
FunctionToContour <- function(a,b,c){f(p1=a, p2=b, M=c)}

# Compute and plot colour tiles
values <- TernaryPointValues(FunctionToContour, resolution = 24L)
ColourTernary(values)

# Add contour lines
TernaryContour(FunctionToContour, resolution = 36L)
## Warning in log(p2): NaNs produced

A slightly more complicated example:

home_games_won <- matrix(c(
    NA, 16, 12, 11,
    19, NA, 19, 16,
    17, 12, NA, 11,
    11, 12, 12, NA),
    nrow=4,ncol=4,byrow=TRUE)

away_games_won <- matrix(c(
    NA, 05, 02, 02,
     9, NA, 10, 02,
     3, 04, NA, 07,
     8, 06, 04, NA),
    nrow=4,ncol=4,byrow=TRUE)

teams <- LETTERS[1:4]
dimnames(home_games_won) <- list("@home" = teams,"@away"=teams)
dimnames(away_games_won) <- list("@home" = teams,"@away"=teams)

home_games_won
##      @away
## @home  A  B  C  D
##     A NA 16 12 11
##     B 19 NA 19 16
##     C 17 12 NA 11
##     D 11 12 12 NA
away_games_won
##      @away
## @home  A  B  C  D
##     A NA  5  2  2
##     B  9 NA 10  2
##     C  3  4 NA  7
##     D  8  6  4 NA

Thus home_games_won[1,2] == 16 means A played at home against B and won 16 times; home_games_won[2,1] == 19 means B played at home against A and won 19 times. The rows of away_games_won also index the home team: away_games_won[1,2] == 5 means B played away against A and won 5 times, and away_games_won[2,1] == 9 means A played away against B and won 9 times. Putting this together, A played B 16+5+19+9=49 times; A played at home to B 16+5=21 times, winning 16 and losing 5; and B played at home to A 19+9=28 times, winning 19 and losing 9. Alternatively we may use complex numbers to represent the same dataset:
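With this convention, row sums of home_games_won give each team's total home wins and column sums of away_games_won give its total away wins; these tallies reappear as the powers in the likelihood function constructed below. A quick check:

```r
## Home and away win tallies for the four-team example.
home_games_won <- matrix(c(NA, 16, 12, 11,
                           19, NA, 19, 16,
                           17, 12, NA, 11,
                           11, 12, 12, NA), 4, 4, byrow = TRUE)
away_games_won <- matrix(c(NA,  5,  2,  2,
                            9, NA, 10,  2,
                            3,  4, NA,  7,
                            8,  6,  4, NA), 4, 4, byrow = TRUE)
teams <- LETTERS[1:4]
dimnames(home_games_won) <- dimnames(away_games_won) <-
    list("@home" = teams, "@away" = teams)
rowSums(home_games_won, na.rm = TRUE)   # home wins: A 39, B 54, C 40, D 35
colSums(away_games_won, na.rm = TRUE)   # away wins: A 20, B 15, C 16, D 11
```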

home_games_won + 1i*away_games_won
##      @away
## @home     A     B      C     D
##     A    NA 16+5i 12+ 2i 11+2i
##     B 19+9i    NA 19+10i 16+2i
##     C 17+3i 12+4i     NA 11+7i
##     D 11+8i 12+6i 12+ 4i    NA

We will create a hyper2 object with the teams and home ground advantage term:

H <- home_away(home_games_won,away_games_won)
H
## log(A^20 * (A + B + home)^-49 * (A + C + home)^-34 * (A + D + home)^-32
## * (A + home)^39 * B^15 * (B + C + home)^-45 * (B + D + home)^-36 * (B +
## home)^54 * C^16 * (C + D + home)^-34 * (C + home)^40 * D^11 * (D +
## home)^35)
options("use_alabama" = FALSE) # to avoid the stupid wmmin bug
specificp.gt.test(H,"home",0)
## 
##  Constrained support maximization
## 
## data:  H
## null hypothesis: sum p_i=1, home <= 0 (notional)
## null estimate:
##          A          B          C          D       home 
## 2.6195e-01 2.7377e-01 2.4778e-01 2.1649e-01 9.9971e-06 
## (argmax, constrained optimization)
## Support for null:  -158.82 + K
## 
## alternative hypothesis:  sum p_i=1 
## alternative estimate:
##       A       B       C       D    home 
## 0.21323 0.17517 0.17780 0.13176 0.30203 
## (argmax, free optimization)
## Support for alternative:  -133.22 + K
## 
## degrees of freedom: 1
## support difference = 25.598
## p-value: 8.3607e-13 (one-sided)
options("use_alabama" = TRUE) # restore the default

We see strong evidence to support the contention that home advantage is real. Further, we may test the hypothesis that all the teams have the same strength, after accounting for the home team advantage:

samep.test(H,teams)
## Warning in optim(theta.old, fun, gradient, control = control, method = method, : one-dimensional optimization by Nelder-Mead is unreliable:
## use "Brent" or optimize() directly
## Warning in optim(theta.old, fun, gradient, control = control, method = method, : one-dimensional optimization by Nelder-Mead is unreliable:
## use "Brent" or optimize() directly
## 
##  Constrained support maximization
## 
## data:  H
## null hypothesis: A = B = C = D
## null estimate:
##       A       B       C       D    home 
## 0.17517 0.17517 0.17517 0.17517 0.29930 
## (argmax, constrained optimization)
## Support for null:  -134.05 + K
## 
## alternative hypothesis:  sum p_i=1 
## alternative estimate:
##       A       B       C       D    home 
## 0.21323 0.17517 0.17780 0.13177 0.30203 
## (argmax, free optimization)
## Support for alternative:  -133.22 + K
## 
## degrees of freedom: 3
## support difference = 0.83247
## p-value: 0.64476

Above, we see no evidence for a difference in team strengths. Visually:

(mt <- maxp(H))
##       A       B       C       D    home 
## 0.21323 0.17517 0.17780 0.13177 0.30203

A profile likelihood diagram:

home_strength <- seq(from=0.15,to=0.45,length=13)
plot(home_strength,profsupp(H,"home",home_strength),type="b",pch=16,
     xlab="home strength",ylab="support",main="profile likelihood for home advantage")
abline(h=c(0,-2))

The figure gives a support interval between about 0.3 and 0.5 for home ground advantage.
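Reading the interval off the profile programmatically is straightforward; a minimal sketch, with support_interval() and the concave profile both invented for illustration:

```r
## Approximate two-units-of-support interval from a (strength, support)
## profile: the range of x values whose support is within `cutoff` of the
## maximum.
support_interval <- function(x, s, cutoff = 2) {
    keep <- s >= max(s) - cutoff
    range(x[keep])
}
x <- seq(0.15, 0.45, length = 13)
s <- -(x - 0.30)^2 * 100        # a made-up concave profile, peak at 0.30
support_interval(x, s)          # c(0.175, 0.425)
```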

3 Example in Agresti

Agresti (2002) (p438) considers a dataset of seven baseball teams who play one another repeatedly. Each game is played at the home ground of one team, the other playing away. The dataset is as follows:

baseball_table <- matrix(c(
NA,   4+3i, 4+2i, 4+3i, 6+1i, 4+2i, 6+0i,
3+3i, NA  , 4+2i, 4+3i, 6+0i, 6+1i, 4+3i,
2+5i, 4+3i, NA  , 2+4i, 4+3i, 4+2i, 6+0i,
3+3i, 5+1i, 2+5i, NA  , 4+3i, 4+2i, 6+1i,
5+1i, 2+5i, 3+3i, 4+2i, NA  , 5+2i, 6+0i,
2+5i, 3+3i, 3+4i, 4+3i, 4+2i, NA  , 2+4i,
2+5i, 1+5i, 1+6i, 2+4i, 1+6i, 3+4i, NA),7,7)
teams <- c("Milwaukee", "Detroit", "Toronto", "New York", "Boston","Cleveland","Baltimore")
rownames(baseball_table) <- teams
colnames(baseball_table) <- teams
M <- baseball_table # saves typing
M # for convenience
##           Milwaukee Detroit Toronto New York Boston Cleveland Baltimore
## Milwaukee        NA    3+3i    2+5i     3+3i   5+1i      2+5i      2+5i
## Detroit        4+3i      NA    4+3i     5+1i   2+5i      3+3i      1+5i
## Toronto        4+2i    4+2i      NA     2+5i   3+3i      3+4i      1+6i
## New York       4+3i    4+3i    2+4i       NA   4+2i      4+3i      2+4i
## Boston         6+1i    6+0i    4+3i     4+3i     NA      4+2i      1+6i
## Cleveland      4+2i    6+1i    4+2i     4+2i   5+2i        NA      3+4i
## Baltimore      6+0i    4+3i    6+0i     6+1i   6+0i      2+4i        NA

Above, we represent home team wins with the real part of the matrix, the imaginary component being away wins. Now process it:

baseball <- home_away(M)
baseball
## log(Baltimore^30 * (Baltimore + Boston + home)^-13 * (Baltimore +
## Cleveland + home)^-13 * (Baltimore + Detroit + home)^-13 * (Baltimore +
## Milwaukee + home)^-13 * (Baltimore + New York + home)^-13 * (Baltimore
## + Toronto + home)^-13 * (Baltimore + home)^30 * Boston^13 * (Boston +
## Cleveland + home)^-13 * (Boston + Detroit + home)^-13 * (Boston +
## Milwaukee + home)^-13 * (Boston + New York + home)^-13 * (Boston +
## Toronto + home)^-13 * (Boston + home)^25 * Cleveland^21 * (Cleveland +
## Detroit + home)^-13 * (Cleveland + Milwaukee + home)^-13 * (Cleveland +
## New York + home)^-13 * (Cleveland + Toronto + home)^-13 * (Cleveland +
## home)^26 * Detroit^12 * (Detroit + Milwaukee + home)^-13 * (Detroit +
## New York + home)^-13 * (Detroit + Toronto + home)^-13 * (Detroit +
## home)^19 * Milwaukee^11 * (Milwaukee + New York + home)^-13 *
## (Milwaukee + Toronto + home)^-13 * (Milwaukee + home)^17 * New York^15
## * (New York + Toronto + home)^-13 * (New York + home)^20 * Toronto^17 *
## (Toronto + home)^17)

Maximum likelihood:

baseball_maxp <- maxp(baseball)
baseball_maxp
## Milwaukee   Detroit   Toronto  New York    Boston Cleveland Baltimore      home 
##  0.061273  0.072493  0.091773  0.092729  0.103165  0.174919  0.368182  0.035464

Visually:

dotchart(baseball_maxp,pch=16)

pie(baseball_maxp)

Above, we see considerable variation in the team strengths [Baltimore being much the strongest], and a small estimated home advantage. We may easily assess the null that the home strength is zero:

specificp.gt.test(baseball,"home",0)
## 
##  Constrained support maximization
## 
## data:  baseball
## null hypothesis: sum p_i=1, home <= 0 (notional)
## null estimate:
## Milwaukee   Detroit   Toronto  New York    Boston Cleveland Baltimore      home 
##   0.14241   0.14317   0.14275   0.14263   0.14317   0.14285   0.14301   0.00001 
## (argmax, constrained optimization)
## Support for null:  -189.18 + K
## 
## alternative hypothesis:  sum p_i=1 
## alternative estimate:
## Milwaukee   Detroit   Toronto  New York    Boston Cleveland Baltimore      home 
##  0.061273  0.072493  0.091773  0.092729  0.103165  0.174919  0.368182  0.035464 
## (argmax, free optimization)
## Support for alternative:  -169.4 + K
## 
## degrees of freedom: 1
## support difference = 19.778
## p-value: 3.1863e-10 (one-sided)
home_strength <- seq(from=0.005,to=0.1,length=20)
plot(home_strength,profsupp(baseball,"home",home_strength),type="b",pch=16,
     xlab="home strength",ylab="support",main="profile likelihood for home advantage")
abline(h=c(0,-2))

Further, we may test the hypothesis that the baseball teams have the same strength:

samep.test(baseball,teams)
## Warning in optim(theta.old, fun, gradient, control = control, method = method, : one-dimensional optimization by Nelder-Mead is unreliable:
## use "Brent" or optimize() directly
## Warning in optim(theta.old, fun, gradient, control = control, method = method, : one-dimensional optimization by Nelder-Mead is unreliable:
## use "Brent" or optimize() directly
## 
##  Constrained support maximization
## 
## data:  baseball
## null hypothesis: Milwaukee = Detroit = Toronto = New York = Boston = Cleveland = Baltimore
## null estimate:
## Milwaukee   Detroit   Toronto  New York    Boston Cleveland Baltimore      home 
##  0.137091  0.137091  0.137091  0.137091  0.137091  0.137091  0.137091  0.040363 
## (argmax, constrained optimization)
## Support for null:  -186.98 + K
## 
## alternative hypothesis:  sum p_i=1 
## alternative estimate:
## Milwaukee   Detroit   Toronto  New York    Boston Cleveland Baltimore      home 
##  0.061273  0.072493  0.091773  0.092729  0.103165  0.174919  0.368182  0.035464 
## (argmax, free optimization)
## Support for alternative:  -169.4 + K
## 
## degrees of freedom: 6
## support difference = 17.577
## p-value: 4.0244e-06

Above, we see strong evidence that the teams are of differential strengths.

As an interesting aside, suppose we have some additional data from games in which, for some reason, the home advantage was not operating.

M2 <- matrix(c(0,1,2,0),2,2)
dimnames(M2) <- list(winner=teams[1:2],loser=teams[1:2])
M2
##            loser
## winner      Milwaukee Detroit
##   Milwaukee         0       2
##   Detroit           1       0

Thus we have additional data from three games in which Milwaukee won twice and lost once. This additional information may easily be incorporated into our likelihood function, at least on the assumption of independence:

baseball <- baseball + pairwise(M2)
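Under independence, the three extra games simply multiply the likelihood by a plain Bradley-Terry factor \(\left(\frac{p_M}{p_M+p_D}\right)^2\frac{p_D}{p_M+p_D}\) with no home monster; a base-R sketch of the extra support (log-likelihood) they contribute:

```r
## Support contributed by the three neutral-ground games: Milwaukee beats
## Detroit twice and loses once, with no "home" term involved.
extra_support <- function(pM, pD) {
    2 * log(pM / (pM + pD)) + log(pD / (pM + pD))
}
extra_support(0.061273, 0.072493)   # at the pre-update evaluate
```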

We may assess the difference that the new information makes:

ordertransplot(baseball_maxp[1:7],maxp(baseball)[1:7],xlab="old",ylab="new",plotlims=c(0.01,0.16))

Above, we see very little difference in the two evaluates, except for Detroit and Milwaukee, as expected.

3.0.1 Some further thoughts on the home ground monster and Agresti’s analysis.

Considering the simplest case of two teams, a and b, with results:

  • \(a\) at home, \(b\) away: \(a\) wins with probability \(\frac{\lambda a}{\lambda a+b}\) (Agresti) or \(\frac{a+H}{a+b+H}\) (me); \(b\) wins with probability \(\frac{b}{\lambda a+b}\) (Agresti) or \(\frac{b}{a+b+H}\) (me).
  • \(a\) away, \(b\) at home: \(a\) wins with probability \(\frac{a}{a+\lambda b}\) (Agresti) or \(\frac{a}{a+b+H}\) (me); \(b\) wins with probability \(\frac{\lambda b}{a+\lambda b}\) (Agresti) or \(\frac{b+H}{a+b+H}\) (me).

Matching the probabilities for \(a\) at home gives us \(H=a(\lambda-1)\) and matching the probabilities for \(a\) away gives \(H=b(\lambda-1)\). These cannot both be true unless \(a=b\) or \(\lambda=1\). But we can, in principle, test which one is a better model.
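This is easy to check numerically; in the sketch below the values of \(a\), \(b\), and \(\lambda\) are invented:

```r
## With H = a*(lambda-1) the two home-win probabilities agree exactly,
## but the away-win probabilities then disagree (unless a == b).
a <- 0.3; b <- 0.7; lambda <- 2
H <- a * (lambda - 1)
c(agresti_home = lambda * a / (lambda * a + b),
  monster_home = (a + H) / (a + b + H))        # identical
c(agresti_away = a / (a + lambda * b),
  monster_away = a / (a + b + H))              # different
```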

4 hyper3 idiom

We may apply hyper3 idiom to the dataset, using the likelihood function of Davidson and Beaver (1977). First a formal test:

baseball_table
##           Milwaukee Detroit Toronto New York Boston Cleveland Baltimore
## Milwaukee        NA    3+3i    2+5i     3+3i   5+1i      2+5i      2+5i
## Detroit        4+3i      NA    4+3i     5+1i   2+5i      3+3i      1+5i
## Toronto        4+2i    4+2i      NA     2+5i   3+3i      3+4i      1+6i
## New York       4+3i    4+3i    2+4i       NA   4+2i      4+3i      2+4i
## Boston         6+1i    6+0i    4+3i     4+3i     NA      4+2i      1+6i
## Cleveland      4+2i    6+1i    4+2i     4+2i   5+2i        NA      3+4i
## Baltimore      6+0i    4+3i    6+0i     6+1i   6+0i      2+4i        NA
f <- function(l){ maxp(home_away3(baseball_table,l),give=TRUE,n=1)$likes}
o <- optimize(f,c(0.8,1.6),maximum=TRUE)
o
## $maximum
## [1] 1.3529
## 
## $objective
## [1] -169.54
lambda_baseball <- o$maximum
maxL <- f(lambda_baseball)
W <- maxL - f(1)    # f(1) being the likelihood of the null
W
## [1] 2.7053
pchisq(2*W,df=1,lower.tail=FALSE)
## [1] 0.020015

The p-value (about 0.02) is significant at the 5% level. Now plot a profile likelihood (support) curve:

jj1  <- seq(from=log(lambda_baseball),to=log(1.9),len=7)
jj2  <- seq(from=log(lambda_baseball),to=log(0.8),len=7)
l <- sort(unique(exp(c(jj1,jj2))))
lam  <- sort(c(l,lambda_baseball))
L <- sapply(lam,f) - maxL
plot(log(lam),L,type="b")
abline(h=c(0,-2))
abline(v=0)

plot(lam,L,type="b")
abline(h=c(0,-2))

4.0.1 Analysis of Work by Davidson and Beaver

DB <- # Davidson and Beaver
matrix(c(
    NA, 32+18i, 36+14i, 47+03i, 47+03i,
14+36i,     NA, 34+16i, 43+07i, 46+04i,
06+44i, 14+36i,     NA, 34+16i, 40+10i,
02+48i, 07+43i, 12+38i,     NA, 28+22i,
01+49i, 02+48i, 05+45i, 10+40i,     NA
),5,5,byrow=TRUE
)
jj <- c("90g","95g","100g","105g","110g")
dimnames(DB) <- list(`@home` = jj, `@away` = jj)
DB
##       @away
## @home     90g    95g   100g   105g   110g
##   90g      NA 32+18i 36+14i 47+ 3i 47+ 3i
##   95g  14+36i     NA 34+16i 43+ 7i 46+ 4i
##   100g  6+44i 14+36i     NA 34+16i 40+10i
##   105g  2+48i  7+43i 12+38i     NA 28+22i
##   110g  1+49i  2+48i  5+45i 10+40i     NA
f <- function(l){ maxp(home_away3(DB,l),give=TRUE,n=1)$likes}
lam <- seq(from=0.6,to=1,len=19)
date()
## [1] "Tue Apr 21 08:38:23 2026"
loglike <- sapply(lam,f)
date()
## [1] "Tue Apr 21 08:39:44 2026"
plot(lam,loglike-max(loglike),type='b')
abline(h=c(0,-2))
abline(v=1)

5 Creating a home/away results table from scratch

Function home_away_table() takes a data frame of results and produces a table amenable to analysis by home_away() or home_away3().

a <- read.table("celtic_rangers_livingston_falkirk.txt", header=TRUE)
head(a)
##     hometeam   awayteam homescore awayscore
## 1    Rangers Livingston         3         0
## 2 Livingston    Rangers         0         2
## 3    Rangers Livingston         4         0
## 4 Livingston    Rangers         0         3
## 5    Rangers Livingston         1         1
## 6 Livingston    Rangers         1         2
teams <- c("Rangers","Celtic","Falkirk","Livingston")
(aF <- home_away_table(a, give=FALSE,teams = teams))
##             @away
## @home        Rangers Celtic Falkirk Livingston
##   Rangers         NA 81+42i   69+2i      15+0i
##   Celtic      76+46i     NA   61+8i      15+0i
##   Falkirk     17+57i 17+43i      NA       6+8i
##   Livingston   2+12i  2+11i    3+9i         NA
(aT <- home_away_table(a, give=TRUE, teams = teams))
## $home_wins
##             @away
## @home        Rangers Celtic Falkirk Livingston
##   Rangers         NA     81      69         15
##   Celtic          76     NA      61         15
##   Falkirk         17     17      NA          6
##   Livingston       2      2       3         NA
## 
## $away_wins
##             @away
## @home        Rangers Celtic Falkirk Livingston
##   Rangers         NA     42       2          0
##   Celtic          46     NA       8          0
##   Falkirk         57     43      NA          8
##   Livingston      12     11       9         NA
## 
## $draws
##             @away
## @home        Rangers Celtic Falkirk Livingston
##   Rangers         NA     44      14          2
##   Celtic          47     NA      10          3
##   Falkirk         12     19      NA          4
##   Livingston       4      4       4         NA

(We will ignore draws for the moment and consider only won games.)

(H <- home_away(aF))
## log(Celtic^96 * (Celtic + Falkirk + home)^-129 * (Celtic + Livingston +
## home)^-28 * (Celtic + Rangers + home)^-245 * (Celtic + home)^152 *
## Falkirk^19 * (Falkirk + Livingston + home)^-26 * (Falkirk + Rangers +
## home)^-145 * (Falkirk + home)^40 * Livingston^8 * (Livingston + Rangers
## + home)^-29 * (Livingston + home)^7 * Rangers^115 * (Rangers +
## home)^165)
pnames(H) <- c(teams,"home")
H
## log(Celtic^96 * (Celtic + Falkirk + home)^-129 * (Celtic + Livingston +
## home)^-28 * (Celtic + Rangers + home)^-245 * (Celtic + home)^152 *
## Falkirk^19 * (Falkirk + Livingston + home)^-26 * (Falkirk + Rangers +
## home)^-145 * (Falkirk + home)^40 * Livingston^8 * (Livingston + Rangers
## + home)^-29 * (Livingston + home)^7 * Rangers^115 * (Rangers +
## home)^165)
(mH <- maxp(H))
##    Rangers     Celtic    Falkirk Livingston       home 
##   0.448042   0.390123   0.055898   0.039337   0.066600
samep.test(H,c("Rangers","Celtic"))
## 
##  Constrained support maximization
## 
## data:  H
## null hypothesis: Rangers = Celtic
## null estimate:
##    Rangers     Celtic    Falkirk Livingston       home 
##   0.419203   0.419203   0.055914   0.039368   0.066313 
## (argmax, constrained optimization)
## Support for null:  -316.2 + K
## 
## alternative hypothesis:  sum p_i=1 
## alternative estimate:
##    Rangers     Celtic    Falkirk Livingston       home 
##   0.448042   0.390123   0.055898   0.039337   0.066600 
## (argmax, free optimization)
## Support for alternative:  -315.61 + K
## 
## degrees of freedom: 1
## support difference = 0.58905
## p-value: 0.27774
specificp.gt.test(H,"home",0)
## 
##  Constrained support maximization
## 
## data:  H
## null hypothesis: sum p_i=1, home <= 0 (notional)
## null estimate:
##    Rangers     Celtic    Falkirk Livingston       home 
## 2.3777e-01 2.7020e-01 2.0786e-01 2.8415e-01 9.9999e-06 
## (argmax, constrained optimization)
## Support for null:  -406.13 + K
## 
## alternative hypothesis:  sum p_i=1 
## alternative estimate:
##    Rangers     Celtic    Falkirk Livingston       home 
##   0.448042   0.390123   0.055898   0.039337   0.066600 
## (argmax, free optimization)
## Support for alternative:  -315.61 + K
## 
## degrees of freedom: 1
## support difference = 90.519
## p-value: 2.8772e-41 (one-sided)
home_away3(aF,lambda=1.88)
## log( (Celtic=1)^96 * (Celtic=1, Falkirk=1.88)^-60 * (Celtic=1,
## Livingston=1.88)^-13 * (Celtic=1, Rangers=1.88)^-123 *
## (Celtic=1.88)^152 * (Celtic=1.88, Falkirk=1)^-69 * (Celtic=1.88,
## Livingston=1)^-15 * (Celtic=1.88, Rangers=1)^-122 * (Falkirk=1)^19 *
## (Falkirk=1, Livingston=1.88)^-12 * (Falkirk=1, Rangers=1.88)^-71 *
## (Falkirk=1.88)^40 * (Falkirk=1.88, Livingston=1)^-14 * (Falkirk=1.88,
## Rangers=1)^-74 * (Livingston=1)^8 * (Livingston=1, Rangers=1.88)^-15 *
## (Livingston=1.88)^7 * (Livingston=1.88, Rangers=1)^-14 *
## (Rangers=1)^115 * (Rangers=1.88)^165)
maxp(home_away3(aF,lambda=1.88),give=1)
## [[1]]
##     Celtic    Falkirk Livingston    Rangers 
##   0.412137   0.072163   0.041537   0.474162 
## 
## $`Log-likelihood`
## [1] -306.09
maxp(home_away3(aF,lambda=1.11),give=1)
## [[1]]
##     Celtic    Falkirk Livingston    Rangers 
##   0.409574   0.080117   0.046499   0.463809 
## 
## $`Log-likelihood`
## [1] -317.03
f <- function(l){ maxp(home_away3(aF,l),give=TRUE,n=1)$likes}
lam <- seq(from=1.4,to=2.3,len=10)
date()
## [1] "Tue Apr 21 08:40:04 2026"
loglike <- sapply(lam,f)
date()
## [1] "Tue Apr 21 08:40:27 2026"
(o <- optimize(f,c(1.1,4.6),maximum=TRUE))
## $maximum
## [1] 1.7739
## 
## $objective
## [1] -305.93
RCLF3_lambda_max <- o$maximum
maxL <- f(RCLF3_lambda_max)

W <- maxL - f(1)    # f(1) being the likelihood of the null
W
## [1] 16.724
pchisq(2*W,df=1,lower.tail=FALSE)
## [1] 7.317e-09
(H <- home_away3(home_away_table(a,give=FALSE),RCLF3_lambda_max))  # RCLF3_lambda_max ~= 1.774
## log( (Celtic=1)^96 * (Celtic=1, Falkirk=1.7739)^-60 * (Celtic=1,
## Livingston=1.7739)^-13 * (Celtic=1, Rangers=1.7739)^-123 *
## (Celtic=1.7739)^152 * (Celtic=1.7739, Falkirk=1)^-69 * (Celtic=1.7739,
## Livingston=1)^-15 * (Celtic=1.7739, Rangers=1)^-122 * (Falkirk=1)^19 *
## (Falkirk=1, Livingston=1.7739)^-12 * (Falkirk=1, Rangers=1.7739)^-71 *
## (Falkirk=1.7739)^40 * (Falkirk=1.7739, Livingston=1)^-14 *
## (Falkirk=1.7739, Rangers=1)^-74 * (Livingston=1)^8 * (Livingston=1,
## Rangers=1.7739)^-15 * (Livingston=1.7739)^7 * (Livingston=1.7739,
## Rangers=1)^-14 * (Rangers=1)^115 * (Rangers=1.7739)^165)
maxp(H,give=1)
## [[1]]
##     Celtic    Falkirk Livingston    Rangers 
##   0.411604   0.073572   0.042497   0.472326 
## 
## $`Log-likelihood`
## [1] -305.93
RCLF3 <- H
plot(lam,loglike-max(loglike),type='b')
abline(h=c(0,-2))
abline(v=1)
grid()

(mH2 <- round(mH*1000))
##    Rangers     Celtic    Falkirk Livingston       home 
##        448        390         56         39         67
RCLF3_lambda_max <- o$maximum
RCLF3_maxp <- maxp(home_away3(home_away_table(a,give=FALSE),RCLF3_lambda_max),give=FALSE)
round(RCLF3_maxp * 1000)
##     Celtic    Falkirk Livingston    Rangers 
##        412         74         42        472

Rangers vs Falkirk

(Rangers at home)

actual scoreline: 69(R) / 2(F) or about \((0.972; 0.028)\). Expectation would be \((64.0; 7.0)\) (H2) or \((65.3; 5.7)\) (H3)

(Rangers away)

actual scoreline: 57(R) /17(F) - or about \((0.77; 0.23)\)

Rangers vs Livingston

(Rangers at home)

actual scoreline: 15(R) / 0(L) or \((1; 0)\)

(Rangers away)

actual scoreline: 12(R) /2(L) - or about \((0.86; 0.14)\)

Celtic vs Falkirk

(Celtic at home)

actual scoreline: 61-8 or about \((0.88; 0.12)\)

(Celtic away)

actual scoreline: 43-17 or about \((0.72; 0.28)\)

Livingston vs Falkirk

(Livingston at home)

actual scoreline: 3-9 or about \((0.25; 0.75)\)

(Livingston away)

actual scoreline: 8-6 or about \((0.57; 0.43)\)
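The quoted expectations can be reproduced directly from the printed estimates; for example, Rangers at home to Falkirk (71 games), with the strengths copied in from the maxp() outputs above so the snippet is self-contained:

```r
## Expected Rangers-at-home wins against Falkirk under the two models.
pR  <- 0.448042; pF  <- 0.055898; home   <- 0.066600   # hyper2 (monster)
pR3 <- 0.472326; pF3 <- 0.073572; lambda <- 1.7739     # hyper3 (lambda)
n <- 69 + 2                                            # matches played
(expected <- c(h2 = n * (pR + home) / (pR + pF + home),        # ~64.0
               h3 = n * lambda * pR3 / (lambda * pR3 + pF3)))  # ~65.3
```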

6 Expectations

First, hyper2:

mH
##    Rangers     Celtic    Falkirk Livingston       home 
##   0.448042   0.390123   0.055898   0.039337   0.066600
mH["Rangers"]
## Rangers 
## 0.44804
mH["home"]
##   home 
## 0.0666
RCLF3_maxp
##     Celtic    Falkirk Livingston    Rangers 
##   0.411604   0.073572   0.042497   0.472326
lambda <- RCLF3_lambda_max
library("magrittr")  # provides the %<>% pipe and add() used below
h2_exp <- matrix(0,4,4)
rownames(h2_exp) <- teams
colnames(h2_exp) <- teams
h3_exp <- h2_exp
n_matches <- Re(aF) + Im(aF)
for(hometeam in teams){
    for(awayteam in teams){

        h2_exp[hometeam,awayteam] <- (
            n_matches[hometeam,awayteam] * (mH[hometeam] + mH["home"])/
            (mH[hometeam] + mH[awayteam] + mH["home"])
        )
        h2_exp[hometeam,awayteam] %<>%
            add(1i*
                n_matches[hometeam,awayteam] * (mH[awayteam])/
                (mH[hometeam] + mH[awayteam] + mH["home"])
                )
        
        h3_exp[hometeam,awayteam] <- (
            n_matches[hometeam,awayteam] * (lambda* RCLF3_maxp[hometeam])/
            (lambda* RCLF3_maxp[hometeam] + RCLF3_maxp[awayteam])
        )

        h3_exp[hometeam,awayteam] %<>%
            add(1i*
                n_matches[hometeam,awayteam] * (RCLF3_maxp[awayteam])/
                (lambda* RCLF3_maxp[hometeam] + RCLF3_maxp[awayteam])
        )
    }
}
aF
##             @away
## @home        Rangers Celtic Falkirk Livingston
##   Rangers         NA 81+42i   69+2i      15+0i
##   Celtic      76+46i     NA   61+8i      15+0i
##   Falkirk     17+57i 17+43i      NA       6+8i
##   Livingston   2+12i  2+11i    3+9i         NA
round(h2_exp,1)
##               Rangers     Celtic   Falkirk Livingston
## Rangers            NA 70.0+53.0i 64.0+7.0i  13.9+1.1i
## Celtic     61.6+60.4i         NA 61.5+7.5i  13.8+1.2i
## Falkirk    15.9+58.1i 14.3+45.7i        NA  10.6+3.4i
## Livingston  2.7+11.3i  2.8+10.2i  7.9+4.1i         NA
round(h3_exp,1)
##               Rangers     Celtic   Falkirk Livingston
## Rangers            NA 82.5+40.5i 65.3+5.7i  14.3+0.7i
## Celtic     74.1+47.9i         NA 62.7+6.3i  14.2+0.8i
## Falkirk    16.0+58.0i 14.4+45.6i        NA  10.6+3.4i
## Livingston  1.9+12.1i  2.0+11.0i  6.1+5.9i         NA
makev <- function(M){
    out <- c(Re(M),Im(M))
    out[!is.na(out)]
}
    

(obs <- makev(aF))
##  [1] 76 17  2 81 17  2 69 61  3 15 15  6 46 57 12 42 43 11  2  8  9  0  0  8
(exp2 <- makev(h2_exp))
##  [1] 61.5852 15.8882  2.6772 69.9640 14.3378  2.7762 64.0439 61.4760  7.8552
## [10] 13.9349 13.8105 10.5970 60.4148 58.1118 11.3228 53.0360 45.6622 10.2238
## [19]  6.9561  7.5240  4.1448  1.0651  1.1895  3.4030
(exp3 <- makev(h3_exp))
##  [1] 74.07873 16.02045  1.92692 82.48065 14.44451  2.01239 65.26874 62.68371
##  [9]  6.07304 14.27591 14.17496 10.56105 47.92127 57.97955 12.07308 40.51935
## [17] 45.55549 10.98761  5.73126  6.31629  5.92696  0.72409  0.82504  3.43895
plot((obs-exp2)^2/exp2)

plot((obs-exp3)^2/exp3)

chisq2 <- sum((obs-exp2)^2/exp2)
chisq3 <- sum((obs-exp3)^2/exp3)

pchisq(chisq2,df=12-4,lower.tail=FALSE)
## [1] 2.2927e-05
pchisq(chisq3,df=12-4,lower.tail=FALSE)
## [1] 0.032033
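Each p-value above comes from referring Pearson's statistic \(\sum(O-E)^2/E\) to its asymptotic chi-square distribution (here with 12 - 4 degrees of freedom). A toy illustration of the same calculation, with invented counts rather than the football data:

```r
## Toy sketch of a Pearson chi-square goodness-of-fit test.
## All counts here are invented for illustration.
obs_toy <- c(30, 10, 12, 8)   # observed counts (hypothetical)
exp_toy <- c(28, 12, 14, 6)   # expected counts under the model (hypothetical)

stat <- sum((obs_toy - exp_toy)^2 / exp_toy)   # Pearson statistic
p    <- pchisq(stat, df = length(obs_toy) - 1, lower.tail = FALSE)
c(statistic = stat, p.value = p)
```

A large p-value, as in this toy case, indicates no evidence against the model; the small p-value for `chisq2` above is what condemns the plain hyper2 fit.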
par(pty='s')
plot(obs,exp2,log="xy",asp=1,xlim=c(1,100),ylim=c(1,100))
## Warning in xy.coords(x, y, xlabel, ylabel, log): 2 x values <= 0 omitted from
## logarithmic plot
abline(0,1)

plot(obs,exp3,log="xy",asp=1,xlim=c(1,100),ylim=c(1,100))
## Warning in xy.coords(x, y, xlabel, ylabel, log): 2 x values <= 0 omitted from
## logarithmic plot
abline(0,1)

expectation2 <- function(probs, results){  # probs must include a "home" strength
    teams <- names(probs)[names(probs) != "home"]
    h2_exp <- matrix(0, length(teams), length(teams))
    rownames(h2_exp) <- teams
    colnames(h2_exp) <- teams
    n_matches <- Re(results) + Im(results)
    for(hometeam in teams){
        for(awayteam in teams){
            
            h2_exp[hometeam,awayteam] <- (  # real part: expected home wins
                n_matches[hometeam,awayteam] * (probs[hometeam] + probs["home"])/
                (probs[hometeam] + probs[awayteam] + probs["home"])
            )
            
            h2_exp[hometeam,awayteam] %<>%  # imaginary part: expected away wins
                add(1i*
                    n_matches[hometeam,awayteam] * probs[awayteam]/
                    (probs[hometeam] + probs[awayteam] + probs["home"])
                    )
        }
    }
    return(h2_exp)
}
round(expectation2(mH,aF),1)
##               Rangers     Celtic   Falkirk Livingston
## Rangers            NA 70.0+53.0i 64.0+7.0i  13.9+1.1i
## Celtic     61.6+60.4i         NA 61.5+7.5i  13.8+1.2i
## Falkirk    15.9+58.1i 14.3+45.7i        NA  10.6+3.4i
## Livingston  2.7+11.3i  2.8+10.2i  7.9+4.1i         NA
expectation3 <- function(probs, results, lambda){
    teams <- names(probs)
    h3_exp <- matrix(0, length(teams), length(teams))
    rownames(h3_exp) <- teams
    colnames(h3_exp) <- teams
    n_matches <- Re(results) + Im(results)
    for(hometeam in teams){
        for(awayteam in teams){
            
            h3_exp[hometeam,awayteam] <- (  # real part: expected home wins
                n_matches[hometeam,awayteam] * (lambda*probs[hometeam])/
                (lambda*probs[hometeam] + probs[awayteam])
            )
            
            h3_exp[hometeam,awayteam] %<>%  # imaginary part: expected away wins
                add(1i*
                    n_matches[hometeam,awayteam] * probs[awayteam]/
                    (lambda*probs[hometeam] + probs[awayteam])
                    )
        }
    }
    return(h3_exp)
}

round(expectation3(RCLF3_maxp[c("Rangers","Celtic","Falkirk","Livingston")],aF,RCLF3_lambda_max),1)
##               Rangers     Celtic   Falkirk Livingston
## Rangers            NA 82.5+40.5i 65.3+5.7i  14.3+0.7i
## Celtic     74.1+47.9i         NA 62.7+6.3i  14.2+0.8i
## Falkirk    16.0+58.0i 14.4+45.6i        NA  10.6+3.4i
## Livingston  1.9+12.1i  2.0+11.0i  6.1+5.9i         NA

7 Bring back draws

aT
## $home_wins
##             @away
## @home        Rangers Celtic Falkirk Livingston
##   Rangers         NA     81      69         15
##   Celtic          76     NA      61         15
##   Falkirk         17     17      NA          6
##   Livingston       2      2       3         NA
## 
## $away_wins
##             @away
## @home        Rangers Celtic Falkirk Livingston
##   Rangers         NA     42       2          0
##   Celtic          46     NA       8          0
##   Falkirk         57     43      NA          8
##   Livingston      12     11       9         NA
## 
## $draws
##             @away
## @home        Rangers Celtic Falkirk Livingston
##   Rangers         NA     44      14          2
##   Celtic          47     NA      10          3
##   Falkirk         12     19      NA          4
##   Livingston       4      4       4         NA
home_draw_away3(aT,lambda=1.888,D=0.3)
## log( (Celtic=0.3, Falkirk=0.3)^29 * (Celtic=0.3, Livingston=0.3)^7 *
## (Celtic=0.3, Rangers=0.3)^91 * (Celtic=1)^96 * (Celtic=1.3,
## Falkirk=2.188)^-79 * (Celtic=1.3, Livingston=2.188)^-17 * (Celtic=1.3,
## Rangers=2.188)^-167 * (Celtic=1.888)^152 * (Celtic=2.188,
## Falkirk=1.3)^-79 * (Celtic=2.188, Livingston=1.3)^-18 * (Celtic=2.188,
## Rangers=1.3)^-169 * (Falkirk=0.3, Livingston=0.3)^8 * (Falkirk=0.3,
## Rangers=0.3)^26 * (Falkirk=1)^19 * (Falkirk=1.3, Livingston=2.188)^-16
## * (Falkirk=1.3, Rangers=2.188)^-85 * (Falkirk=1.888)^40 *
## (Falkirk=2.188, Livingston=1.3)^-18 * (Falkirk=2.188, Rangers=1.3)^-86
## * (Livingston=0.3, Rangers=0.3)^6 * (Livingston=1)^8 * (Livingston=1.3,
## Rangers=2.188)^-17 * (Livingston=1.888)^7 * (Livingston=2.188,
## Rangers=1.3)^-18 * (Rangers=1)^115 * (Rangers=1.888)^165)
f <- function(v){  # profile support as a function of (lambda, D)
    H <- home_draw_away3(aT, lambda = v[1], D = v[2])
    maxp(H, give = 1)$`Log-likelihood`
}
n <- 2  # very coarse grid for speed; n=10 gives a usable contour but takes about an hour
lambda_try <- seq(from=1,to=3,len=n)
D_try <- seq(from=0.1,to=0.6,len=n)
V <- as.matrix(expand.grid(lambda_try,D_try))
LL <- apply(V,1,f)
LL <- matrix(LL,n,n) - max(LL)
LL <- pmax(LL,-10)
contour(lambda_try, D_try, LL,xlab="lambda",ylab="D")

Now find the evaluate:

maxv <- optim(par=c(3,0.7), fn=f, control=list(fnscale = -1))
maxv
## $par
## [1] 1.72475 0.37177
## 
## $value
## [1] -708.1
## 
## $counts
## function gradient 
##       53       NA 
## 
## $convergence
## [1] 0
## 
## $message
## NULL
MLparams <- maxv$par
mH <- maxp(home_draw_away3(aT, lambda = MLparams[1], D=MLparams[2]),give=1)
mH
## [[1]]
##     Celtic    Falkirk Livingston    Rangers 
##   0.411435   0.075041   0.043166   0.470357 
## 
## $`Log-likelihood`
## [1] -708.1

8 Appendix

Here is a transcript of a Mathematica session that establishes the estimates above. First, the hyper2 likelihood:

rhankin@rhrpi4:~ $ wolfram
Mathematica 12.2.0 Kernel for Linux ARM (32-bit)
Copyright 1988-2021 Wolfram Research, Inc.


In[1]:= L = Log[M+p1]*A +Log[p2]*B- Log[M+p1+p2]*(A+B+C+D) + Log[p2+M]*C + Log[p1]*D 

Out[1]= D Log[p1] + A Log[M + p1] + B Log[p2] + C Log[M + p2] - (A + B + C + D) Log[M + p1 + p2] 

In[2]:= LL = L /. {p2 -> 1-p1-M} 

Out[2]= C Log[1 - p1] + B Log[1 - M - p1] + D Log[p1] + A Log[M + p1] 

In[3]:= dp = D[LL,p1]//FullSimplify 

            C      D         B          A
Out[3]=  ------- + -- + ----------- + ------
         -1 + p1   p1   -1 + M + p1   M + p1

In[4]:= dM = D[LL,M]//FullSimplify 

              B          A
Out[4]= ----------- + ------
         -1 + M + p1   M + p1

In[5]:= Solve[{dp==0,dM==0},{p1,M}] 

                   D            A C - B D
Out[5]=  {{p1 -> -----, M -> ---------------}}
                 C + D       (A + B) (C + D)


In[6]:= p2 = 1-p1-M /. {%} //FullSimplify 

             B
Out[6]=  {{-----}}
           A + B

Now the hyper3 likelihood:

rhankin@rhrpi4:~ $ wolfram
Mathematica 12.2.0 Kernel for Linux ARM (32-bit)
Copyright 1988-2021 Wolfram Research, Inc.

In[1]:= L = A*Log[lambda*p1] + B*Log[p2] -(A+B)*Log[lambda*p1 + p2] + C*Log[lambda*p2] + D*Log[p1] -(C+D)*Log[p1 + lambda*p2] 

Out[1]= D Log[p1] + A Log[lambda p1] + B Log[p2] + C Log[lambda p2] - (A + B) Log[lambda p1 + p2] - (C + D) Log[p1 + lambda p2]

In[2]:= LL = L /. {p2 -> 1-p1} 

Out[2]= B Log[1 - p1] + C Log[lambda (1 - p1)] + D Log[p1] + A Log[lambda p1] - (C + D) Log[lambda (1 - p1) + p1] - 
 
>    (A + B) Log[1 - p1 + lambda p1]

In[3]:= dp1 = D[LL,p1] //FullSimplify 

         B + C    A + D   (A + B) (-1 + lambda)    (C + D) (-1 + lambda)
Out[3]= ------- + ----- - --------------------- + -----------------------
        -1 + p1    p1     1 + (-1 + lambda) p1    lambda + p1 - lambda p1


In[4]:= dlam = D[LL,lambda]//FullSimplify                                                                                                     

          A        C           (A + B) p1           (C + D) (-1 + p1)
Out[4]= ------ + ------ - -------------------- + -----------------------
        lambda   lambda   1 + (-1 + lambda) p1   lambda + p1 - lambda p1


In[5]:= Solve[{dp1==0,dlam==0},{p1,lambda}]//FullSimplify 

[snip]


In[6]:= %[[1]] 

                        1                     Sqrt[A] Sqrt[C]
Out[6]= {p1 -> -------------------, lambda -> ---------------}
                   Sqrt[B] Sqrt[C]            Sqrt[B] Sqrt[D]
               1 + ---------------
                   Sqrt[A] Sqrt[D]
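Rationalizing Out[6] recovers the estimates quoted in the introduction:

\[
\hat{p}_1=\frac{1}{1+\sqrt{BC/AD}}=\frac{\sqrt{AD}}{\sqrt{AD}+\sqrt{BC}},\qquad
\hat{\lambda}=\sqrt{\frac{AC}{BD}}.
\]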

8.0.1 References

  • R. R. Davidson and R. J. Beaver 1977. “On extending the Bradley-Terry model to incorporate within-pair order effects” Biometrics, 33:693–702

Package dataset

The following lines create RCLF.rda, which resides in the data/ directory of the package.

RCLF3_table <- home_away_table(a, give=TRUE, teams = teams)

save(RCLF3_table,  RCLF3, RCLF3_maxp, RCLF3_lambda_max, file="RCLF.rda")
Agresti, A. 2002. Categorical Data Analysis. John Wiley & Sons.
Hankin, R. K. S. 2017. “Partial Rank Data with the hyper2 Package: Likelihood Functions for Generalized Bradley-Terry Models.” The R Journal 9 (2): 429–39.