Markov chain for Abbey and Kris


I need some help with the implementation of a Markov chain for RPS.
I used the mchmm library. It works fine against Quincy and Mrugesh, but I have problems with Kris and Abbey.

Kris isn’t playing any complex algorithm, and I thought I could win against him by playing some moves randomly, but I guess I have a thinking error here.

Your code so far

import random
import mchmm as mc
import numpy as np

PLAYS = ['R', 'P', 'S']
WIN_DICT = {'R': 'P', 'P': 'S', 'S': 'R'}

MEMORY_LENGTH = 10       # value assumed; the original snippet does not show it
RANDOMNESS_FACTOR = 5    # value assumed; the original snippet does not show it

def player(prev_play, opponent_history=[]):
    if prev_play in PLAYS:
        opponent_history.append(prev_play)  # record the opponent's move
    guess = predict(prev_play, opponent_history)
    if len(opponent_history) >= MEMORY_LENGTH:
        opponent_history.pop(0)  # keep the history bounded
    return guess

def predict(prev_play, opponent_history):
    if len(opponent_history) % RANDOMNESS_FACTOR == 0 or prev_play not in PLAYS:
        guess = random.choice(PLAYS)
    else:
        guess = predict_with_Makrov(prev_play, opponent_history)
    return guess

def predict_with_Makrov(prev_play, opponent_history):
    chain = generateMarkovChain(opponent_history)
    most_likely_move = random.choice(PLAYS)
    if chain is not None:
        _, states = chain.simulate(1, start=prev_play, seed=np.random.randint(0, 10, 10))
        most_likely_move = "".join(states)
    guess = WIN_DICT[most_likely_move]
    return guess

def generateMarkovChain(opponent_history):
    a = mc.MarkovChain().from_data(opponent_history)
    return a

Do you have any suggestions or tips on how I can win against the other two players as well?

Thanks a lot

Challenge: Rock Paper Scissors

Link to the Challenge:

I doubt that it’s going to help out your situation, but you may want to print prev_play and most_likely_move side-by-side in predict_with_Markov. I think the start item in the simulate function is included in the output chain, which would mean the previous play and predicted next play are always the same. Incidentally, that would be the “kris” strategy, and while not complex it clearly works against certain opponent strategies.

While the “quincy” and “mrugesh” play strategies are pattern based, both the “abbey” and “kris” strategies depend directly on the previous player’s play. I feel like that could lead a Markov chain into a self-fulfilling-prophecy situation, no? The opponent is basing their play off the Markov chain’s last play, and the Markov chain predictor is basing its play off the opponent’s last play.

Personally, I would be considering this bit of the readme file:

Hint: To defeat all four opponents, your program may need to have multiple strategies that change depending on the plays of the opponent.

If you know that the opponent is going to base their play off your last play, that gives you quite the advantage.
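For instance, if the opponent always counters your last play (which is what “kris” does), you can counter the counter. A minimal sketch, assuming exactly that opponent strategy (the function name here is illustrative, not from your code):

```python
# The value beats the key.
WIN_DICT = {'R': 'P', 'P': 'S', 'S': 'R'}

def beat_last_play_counterer(my_last_play):
    """If the opponent always plays whatever beats our last move,
    predict that move and play its counter."""
    expected_opponent_play = WIN_DICT[my_last_play]  # what they will throw
    return WIN_DICT[expected_opponent_play]          # what beats it
```

After our 'R' such an opponent throws 'P', so we answer 'S' and win every round.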


A Markov chain algorithm will easily defeat all four players, but not necessarily with the same chain length. I haven’t looked at the library you are using, but I would suggest trying different chain lengths against all the opponents and then maybe introducing some memory effects if that does not work. Abbey may need more modification than the others.
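A hand-rolled order-n predictor makes it easy to experiment with chain length. A minimal sketch (the function name and the n parameter are illustrative, not part of the original code or the mchmm API):

```python
import random
from collections import Counter, defaultdict

PLAYS = ['R', 'P', 'S']
WIN_DICT = {'R': 'P', 'P': 'S', 'S': 'R'}  # the value beats the key

def predict_next(history, n=2):
    """Predict the opponent's next move from an order-n Markov chain
    built over their move history; fall back to random with no data."""
    if len(history) <= n:
        return random.choice(PLAYS)
    # Count how often each move follows each length-n state.
    counts = defaultdict(Counter)
    for i in range(len(history) - n):
        state = ''.join(history[i:i + n])
        counts[state][history[i + n]] += 1
    last_state = ''.join(history[-n:])
    if counts[last_state]:
        return counts[last_state].most_common(1)[0][0]
    return random.choice(PLAYS)

# Counter the predicted move:
# guess = WIN_DICT[predict_next(opponent_history, n=2)]
```

Trying n = 1 through 4 against each opponent, and capping or resetting the history between matches, is the kind of chain-length and memory experiment suggested above.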

It’s simple enough to implement this algorithm yourself, so you might try that. If you need an example, just look at the code for Abbey since she is a Markov chain player (length 2). There is a defunct RPS bot tournament on the web; some of their code is still available for more hints.
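As an example of “predicting the predictor”: assuming abbey counts two-play sequences of *your* moves and counters whatever she predicts you will play next (as in the challenge’s example code), you can run that same prediction against your own history and counter her counter. The function name and the play_order argument are illustrative:

```python
from collections import Counter

WIN_DICT = {'R': 'P', 'P': 'S', 'S': 'R'}  # the value beats the key

def beat_length2_markov_opponent(my_history, play_order):
    """play_order counts two-play sequences of *our* moves, the way a
    length-2 Markov opponent would. Predict that opponent's prediction,
    then play the counter to their counter."""
    if not my_history:
        return 'R'  # arbitrary opening move
    last = my_history[-1]
    # The opponent's guess: the most frequent follower of our last play.
    candidates = {last + move: play_order[last + move] for move in 'RPS'}
    their_prediction = max(candidates, key=candidates.get)[-1]
    their_play = WIN_DICT[their_prediction]  # they counter their prediction
    return WIN_DICT[their_play]              # we counter their counter
```

For example, if we just played 'P' and have most often followed 'P' with 'R', the opponent throws 'P' to beat the expected 'R', so we throw 'S'.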

