## Trying to find math inside everything else

### Some Genshin Impact Math

I recently wrote a Reddit post about some math I did for Genshin Impact, and I figured I’d post it here, more for the math part than the game part. See below (with additional edits for clarification for non-players).

With the release of Kuki Shinobu and her expedition talent (which Yelan and Shenhe also have, but I didn’t pull them, so I wasn’t thinking about it), I started to wonder about the comparative benefits of these talents vs the quicker expedition talents of Bennett/Fischl/Chongyun/Keqing/Kujo Sara.

Shinobu has the following talent: “Gains 25% more rewards when dispatched on an Inazuma Expedition for 20 hours.”

Sara has the following talent: “When dispatched on an expedition in Inazuma, time consumed is reduced by 25%.”

The first thought is to compare them directly. Shinobu gives 25% extra rewards every 20 hours, and Sara gives regular rewards 25% faster, so every 15 hours. While both say 25%, you are actually collecting rewards with Sara 33% more frequently, and so it is a better talent. (Over the course of 5 days, Sara would get you 40000 Mora, while Shinobu would get you 37500.)

Of course, that requires doing your expeditions immediately upon completion, which will require a shifting schedule and waking up in the middle of the night and such, and is thus fairly unrealistic. So let’s look at a more realistic model.

It would be reasonable to check Shinobu’s expedition once a day, as 20 hours is close to 24. With Sara’s 15, however, you could do a 2-1 cycle: on the first day, check right when you wake up and right before bed (as most people are awake about 16 hours), and then the next day check it in the middle of the day. (For example, 7 AM, 10 PM, and then between 1 and 4 PM the next day, so it’s ready by 7 AM the day after. This gives you some wiggle room.)

With this method, Sara is doing 50% more expeditions compared to Shinobu’s 25% bonus, an even bigger difference than before! Over a 6-day period, Shinobu would bring in 37500 Mora, while Sara would get 45000.

However, it’s pretty easy to mess up that 2-1 cycle. Sometimes I would have a class when my expeditions were done, and so couldn’t check, and then wouldn’t remember until after work, which would make my morning expedition late and push my night one past my bedtime. So now my question is: how often can I mess up the cycle and still have it be better than Shinobu?

Consider that same 6-day period. If I mess up on one day, it actually doesn’t change anything. (My 2-1-2-1-2-1 cycle becomes 2-1-1-2-1-2, and the next cycle of 6 days is 1-2-1-2-1-2, so still 9 expeditions per cycle.) However, the second mistake means there are only 8 expeditions per 6 days, roughly equal to Shinobu’s effective rate. Similarly, the 3rd mistake is fine, but the 4th one will drop you below Shinobu’s bonus rate.

So one way to look at it is: if you can keep a rate of 2 double-days every 6 days, Sara and Shinobu are roughly tied. If you can do them more frequently, Sara is better. If you can’t, Shinobu is better.

Another way is to think more long term. Over a 30-day period, you would need 11 or more double-days for Sara to beat out Shinobu. So you could mess up on 9 days and still come out ahead (30% error rate). Over a 300-day period, you need 101 or more double-days, so you can mess up 99 times (33% error rate). As you might be able to tell, this limit approaches 1/3, so you can mess up on average (fewer than) 1/3 days and still come out tied or ahead.
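The idealized head-to-head comparison at the top is easy to check in code. Here’s a minimal sketch, assuming a base reward of 5000 Mora per expedition (an assumption consistent with the totals quoted above):

```python
def mora(hours, cycle_hours, bonus=1.0, base=5000):
    """Total Mora collected over `hours` if the expedition is
    restarted the instant it finishes (the idealized model)."""
    completed = hours // cycle_hours
    return int(completed * base * bonus)

# Over 5 days (120 hours) of immediate redispatching:
shinobu = mora(120, 20, bonus=1.25)  # 20 h cycle, 25% reward bonus
sara = mora(120, 15)                 # 15 h cycle, normal rewards
print(shinobu, sara)  # 37500 40000
```

Swapping in a once-a-day schedule for Shinobu and a 2-1 cycle for Sara just means changing how many completions you count per period; the 6250-vs-5000 per-run values stay the same.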

### Potluck Math

I was talking to one of my co-workers about a “Friendsgiving” she is holding, and how the food bill is getting up there as more people are invited. But some of those people are also bringing food – and everyone is worried about having enough.

I realized this is a very common problem with potluck meals. Everyone wants to make sure they have enough food, so the more guests, the more they make. But think about this –

At a 4-person meal, each person makes a dish that feeds 4. (4 servings). So each person then eats 4 servings of food. (Which seems like a normal amount.)

Now it’s a 20-person meal, and each person makes a dish that feeds 20 – that’s 400 servings in total. So now each person eats 20 servings? That seems unlikely – it’s much more likely that people eat 3-6 servings, for 60-120 servings eaten, leaving around 280-340 servings of food left over.

The problem here is that each attendee is treating the problem linearly, when it would better be modeled quadratically. Of course, this is complicated by the one hit dish that everyone eats a full serving of, and that other dish that no one eats, and everyone wanting to try a little of everything, so figuring out how much to cook can get complicated pretty quickly.
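The mismatch is easy to see in a quick sketch, assuming each guest sizes their dish for the whole party while actual appetites stay fixed at about 4 servings a head:

```python
def servings_made(guests):
    # linear thinking by each guest: bring a dish that feeds everyone,
    # so total supply grows quadratically with the guest count
    return guests * guests

def servings_eaten(guests, per_person=4):
    # actual demand only grows linearly
    return guests * per_person

for n in (4, 20):
    made, eaten = servings_made(n), servings_eaten(n)
    print(f"{n} guests: {made} made, {eaten} eaten, {made - eaten} left over")
```

At 4 guests the two curves happen to cross (16 made, 16 eaten), which is why the small dinner party feels fine; by 20 guests the quadratic supply has left the linear demand far behind.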

### How to Pack Your Boardgames

Last year, before Twitter Math Camp, I was packing and trying to figure out which games to bring with me for the game night we were having before the conference started. I basically had three attributes I was considering: how big the game was, how good it was, and how many players could play it. I wanted to minimize the first one while maximizing the latter two.

So I tried to come up with a bunch of formulas for figuring it out, but nothing was quite working out. (I used BoardGameGeek ratings for “how good it was.”) At first I tried doing ${\frac{r \cdot p}{v}}$, but it was putting some games that just weren’t very good as top choices. The problem was that the volume was having too big of an effect – volumes could run to thousands of cubic centimeters, while ratings max out around 8 and player counts around 12. (I had to use amazon.ca to look up the dimensions because I wanted to use centimeters.)

So then I tried cube rooting the volume, or using an exponential function like ${\frac{p \cdot e^{r}}{v}}$, or finding the geometric mean of the three numbers, but still nothing came out right.

I was basically using three games as test cases: Dominion, which is one of the best games I own but is really big; Pixel Tactics, which is one of the smallest but is only 2 players; and The Resistance, which is small-ish, really good, and can go to 10 players. I figured that any good method should tell me to leave the first two games at home, but to bring The Resistance. If it didn’t, it wasn’t right.

Eventually, after doing some research, I determined that a common technique used in psychology when comparing variables with very different ranges of values is called standardizing the variables. Basically, for each attribute, I would find the mean and standard deviation. Then, for each game, I would subtract the mean from its value and divide by the standard deviation to get a standardized value (a z-score). Then I just needed to add up the three standardized values – counting volume negatively, since smaller is better – and the games with the highest scores would win. And, as predicted, The Resistance came out on top.
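The standardizing procedure fits in a few lines. The numbers below are made up purely for illustration – they are not the real BGG ratings or box dimensions:

```python
from statistics import mean, stdev

def z_scores(values):
    """Standardize: subtract the mean, divide by the standard deviation."""
    m, s = mean(values), stdev(values)
    return [(v - m) / s for v in values]

# Illustrative numbers only - not the actual data I used
names = ["Dominion", "Pixel Tactics", "The Resistance"]
volumes = [3200, 250, 600]  # cm^3 (smaller is better)
ratings = [7.6, 7.0, 7.5]   # BGG-style rating (bigger is better)
players = [4, 2, 10]        # max player count (bigger is better)

zv, zr, zp = z_scores(volumes), z_scores(ratings), z_scores(players)
# sum the z-scores, counting volume against the game
scores = [r + p - v for v, r, p in zip(zv, zr, zp)]
best = max(zip(scores, names))[1]
print(best)  # The Resistance
```

The point of the z-scores is that each attribute now contributes on the same scale, so a few thousand cubic centimeters can no longer drown out a rating difference of half a point.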

### The Math of Bedroom Compatibility

On OKCupid, one of the match questions is the following:

“Once you are intimate, how often would you and your significant other have sex?

– Every day
– Every other day
– Once or twice a week
– A few times a month or less”

On OKCupid, you choose your own answer and then pick what answer you’d like potential matches to give. It seems straightforward – if the other person picks the same answer as you, it’ll be fine. But will it?

Let’s make the following assumptions.

• A person is either in the mood to have sex on a given day, or they are not.
• Two people only have sex if both are in the mood.
• If someone is in the mood and has sex, they are happy. If they are not in the mood and don’t have sex, they are happy.
• If someone is in the mood and does not have sex, then they are unhappy.

If both people choose “Every Day,” then it will be fine; both people will be happy every day.

If both people choose “Every other day,” let’s assume they are in the mood 4/7 days of the week. So on a given day, there is a 4/7 chance of being in the mood.

It follows, then, that on any given day the chance of both people being in the mood is 16/49, or ~32.65%. And so the probability of having sex on exactly 4 days of a given week is ${\binom{7}{4}(0.3265)^4(0.6735)^3 \approx 12.15\%}$.

So that’s an ~88% chance of not hitting exactly four times in a given week. Well, that didn’t work out.
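The binomial calculation above can be double-checked in a couple of lines, under the same model (both partners independently in the mood with probability 4/7 each day):

```python
from math import comb

p_mood = 4 / 7
p_both = p_mood ** 2  # both in the mood on a given day: 16/49, ~32.65%

# binomial probability that sex happens on exactly 4 days out of 7
p_exactly_4 = comb(7, 4) * p_both**4 * (1 - p_both)**3
print(round(p_exactly_4, 4))  # 0.1215
```

Changing the exponents (and the `comb` arguments to match) gives the probability for any other number of days, so the whole weekly distribution is just a loop away.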

(Of course, the assumptions aren’t perfect – mostly because being in the mood might carry over if the itch wasn’t scratched.)

### The Evil Queen Steals Hearts

Part 3 of my Disney Analysis.

Despite my love for Disney films, I acknowledge that not all of them are, well, good. But after working on the last two posts, I wondered what effect those characteristics have on how well a movie is rated. Do critics share Disney’s sense of justice and just want to see those villains die? Do audiences actually prefer movies with male protagonists, as much of Hollywood seems to believe? To find out, I collected the tomatometer scores of all of the movies from my list on RottenTomatoes.com – both the critic score and the audience score as, though there is a correlation between the two, it’s a moderately weak one.

I mostly included this because I wanted a graph that wasn’t a box plot.

So first, let’s look at the fates of the villains vs how they scored. I created two plots: one for the critic scores and one for the audience scores.

From these I can conclude…that there’s not much connection between the fate of the villain and how audiences react. We can say that there is a slight audience preference for movies that actually have a concrete antagonist, and we may also be able to say that critics and audiences have a slight preference for movies where the antagonist is merely thwarted, but it’s not a strong connection.

What about gender, though? How much does that have an effect? Let’s look at villain gender first.

The conclusions I can make? People love those lady villains! Despite the fact that 70% of villains are male (or maybe because of that fact), audiences and critics agree that the movies with female villains are better movies across the board. (The audience preference for female villains is not as strong as the critical one, but it’s still there.)

And as for the protagonists?

Conclusions: There’s a clear critical preference for movies that have male AND female protagonists, to give access points to all viewers, whereas audience members are merely less likely to think badly of them. There’s a slight preference for female protagonists among both audience members and critics as well, though it’s very slight.

So what could Disney learn from all this? Well, that clearly we want to see a movie with a pair of heroes, male & female, that face off against a female villain and defeat her without killing her. So make that happen, Disney.

(Oh, what? The next movie has a male protagonist and a male villain? Go figure.)

###### The Data Set

### Disney Data

Part 2 of my Disney Analysis.

While doing my research yesterday, I had this exchange:

So I thought about what effect gender might have on things. Let’s take a look.

First, let’s look at the gender breakdown of the movies in general, both of the antagonists and the protagonists. (Some movies don’t have only one protagonist, so some movies are labeled as having both male and female protagonists.)

As we can see, despite Disney’s princess movies, Disney animated films are overwhelmingly male (much like most of Hollywood). Interestingly, the rates of movies with male villains and male protagonists are the same, about 70%.

I also looked at how the genders match up – do female villains only face off against female protagonists, for example?

Here we see that there are no strong associations with male villains – either gender of protagonist can face a male villain. However, there is a strong dissociation of male protagonists from female villains – in fact, there are only two movies that have male protagonists and a female villain – The Emperor’s New Groove and Meet the Robinsons, both in the current century. (And the “female” in Meet the Robinsons is a robot.) Female villains will also go up against an ensemble of protagonists that includes males, but in general they must go against a female main character. We also see that those man vs self and man vs society movies are literally “man” – only one female protagonist out of the 8.

Now, what about my idea that gender affects their fate? First, let’s check the gender of the villain.

Turns out I was wrong – there’s no association between gender and death (or banishment). 57% of all villains die, while 56% of male villains die and 60% of female villains do. There’s a slight association with male villains being imprisoned while female villains are merely thwarted, but the sample size for those is much smaller. But while the distribution of genders for the villains is lopsided, how they treat those villains is pretty equitable.

What about the heroes? Do the male heroes cause all the death?

No, not really. The death stats are pretty close to the overall stats. I do see an association with male protagonists banishing their foes while female ones imprison them, though.

However, as Elena said above, most Disney deaths are not directly caused by the heroes – they are often accidental or caused by the villain themselves. By my reckoning, there are only 5 villains that are directly killed – Maleficent, Ursula, Scar, Shan Yu, and Captain Rourke.

Now, you may think, “Well, James, 3 of those movies have female protagonists and 2 male, so there’s no association, right?” Well, yes…but then, think about who actually deals the killing blow: Prince Philip kills Maleficent, Prince Eric kills Ursula, the Hyenas kill Scar, Mushu kills Shan Yu, and Milo kills Rourke. Yes, even Mulan does not actually land the final killing blow, though she arranges all the circumstances of that death and should be credited with it.

That’s right, these two are literally lady killers.

### Disney Justice

Part 1 in a 3-part series of Disney analysis.

I was out on Rob’s balcony this morning and my stream of consciousness was something like this: I shouldn’t stand so close to the edge, I might fall, no I won’t, that’s silly, where did this irrational fear of falling come from, maybe it’s all those Disney villains I grew up on, all the villains always fall to their doom, hmm, you know, in Frozen the villain doesn’t die at the end, that seems pretty unusual to me, but is it?

So I decided to do some research and determine just how often Disney villains die. Below are my results, and some other conclusions.

(Notes about the data set: this only includes the animated features created by Walt Disney Animation Studios, not a subsidiary. The list also does not include any film that is not one continuous story – this leaves us with 43 films total. I also had to make some decisions between focusing on villains and antagonists. Some characters are villainous, like Mad Madam Mim from The Sword in the Stone, but she’s hardly a major antagonist in the film. Other characters, like Aunt Sarah in Lady and the Tramp, are antagonistic but hardly evil. I’ve decided to focus just on antagonists.)

I categorized the fates in four ways – death, imprisonment (not always in an actual prison), banishment (or being driven off in some way), and thwarted (where the hero wins but nothing really bad happens to the villain, such as in Cinderella).

8 of the films have no real antagonist – rather than man vs man, their conflicts would be classified as man vs society or man vs self (and man vs nature in the case of Bambi). But in a majority of the remaining films that do have antagonists, the antagonist dies by the end. (Often by falling – 7 villains fall to their deaths.)

Another question then arose – has Disney always been this swift with the death penalty, or has that changed over time? So I made some box plots of the years for the different fates.

Though the very first Disney movie, Snow White, has the villain die, it’s a clear outlier – as is Sleeping Beauty. Most of the villain deaths occur during what is known as the Disney Renaissance, aka my childhood. Villains being defeated without really changing their status quo harks back to an earlier time, whereas banishment and imprisonment are more universal. Interestingly, the films without villains all come from either the 40s or the 2000s. Neither is thought of as a big time for Disney films.

Below is my data set (spoilers). Perhaps more analysis will come in the future.

### We Didn’t Playtest This At All

Yesterday was my best friend’s birthday and his wife got him the game We Didn’t Playtest This At All, which is a very silly game that was tons of fun. (We probably played it about 15 times.) The point of the game is to win or, barring that, to make everyone else lose. And that’s all the rules there are, other than Draw 1, Play 1. Everything else is in the cards.

One set of cards in the game has players all throw out 1 to 5 fingers on the count of three:

Since you don’t know what card they are playing, even and odd really don’t matter. But winning on a prime…that’s interesting.

As I was leaving, I started to wonder if there was a best number you could throw out to maximize your chances of winning (or, alternately, of stopping the person who played the card from winning). Talking about it with another math teacher who was there, I hypothesized that, because of the lower density of prime numbers as numbers get larger, you’d want to throw smaller numbers to increase your chances of getting a prime.

But, of course, I couldn’t just leave that conjecture. I had to test it! For the purposes of this, I assumed all other players besides yourself throw out a random number of fingers, essentially becoming 5-sided dice.

It’s pretty simple to compute for two players:

• If I throw out a 1, it’ll be prime if my opponent throws 1, 2, or 4.
• If I throw 2, she needs to throw 1, 3, or 5.
• If I throw 3, she needs to throw 2 or 4.
• If I throw 4, she needs to throw a 1 or 3.
• If I throw a 5, she needs to throw a 2.

This supports my hypothesis: throwing a 1 or 2 increases the odds of a prime, and a 5 radically decreases them. (Of course, then we can get all game theoretical — if I know you’re gonna throw 5, I should throw 2. But then, if you know that, you should throw 4, etc.)

What about for more than 2 players? The game box says we can have up to 10. I worked it out somewhat in my notebook on my train ride home, but then I had the power of Excel. (It actually took me longer than I would like to admit to re-figure out how to find the probabilities of, say, getting a total of 12 when 3 people throw out. I was counting up all the possibilities for a while until I realized the recursive method for calculating those probabilities. And if Wolfram-Alpha hadn’t been so hard to use in this regard, I might not have figured it out myself.)

On the left are the probabilities that your opponents’ total will be a certain number. On the right is the number of ways you can get a prime if you throw out that number.

For three players, 1 is still the champ in terms of getting you a prime, but surprisingly, 5 is in second place! What had been the worst number to throw out to get primes for 2 players is now the second best with 3 players. And for 4 players, 1 and 5 are actually the worst (though only slightly), with 2, 3, and 4 coming out on top. But at this point, it’s pretty balanced. 5 players is almost equally likely no matter what you throw. It’s almost as if they playtested this?

But now, the pattern emerges.

When I extended to 6 or 7 players, though, it became clear that 1 really was the true winner and 5 the worst. Once we were out of the weeds of the prime-heavy teens, the hypothesis seems more true. (It also holds for 8 players.) Of course, I haven’t proven that it will always be true for 6+ players…but I leave that as an exercise to the reader.
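The whole computation fits in a short script – a sketch of the model where every opponent is a fair 5-sided die, using the recursive (convolution) method for the sum distribution:

```python
def is_prime(n):
    if n < 2:
        return False
    return all(n % d for d in range(2, int(n**0.5) + 1))

def opponents_total_dist(k):
    """Distribution of the total when k opponents each throw
    1-5 fingers uniformly at random (the 5-sided-die model)."""
    dist = {0: 1.0}
    for _ in range(k):  # convolve in one more player's throw each pass
        new = {}
        for total, p in dist.items():
            for fingers in range(1, 6):
                new[total + fingers] = new.get(total + fingers, 0.0) + p / 5
        dist = new
    return dist

def prime_chance(my_throw, opponents):
    """Probability the grand total is prime if I throw `my_throw`."""
    dist = opponents_total_dist(opponents)
    return sum(p for t, p in dist.items() if is_prime(my_throw + t))

for n_opp in range(1, 8):
    row = [round(prime_chance(m, n_opp), 3) for m in range(1, 6)]
    print(f"{n_opp + 1} players: {row}")
```

This reproduces the two-player counts above (3, 3, 2, 2, and 1 winning throws out of 5) and the three-player surprise that 5 is second best, and extending the loop covers as many players as you like.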

### How Many Subway Transfers Do I Have to Make?

I was talking with Sam Shah and had the following exchange:

Of course, after I said that…I had to find out if it was actually true. So I pulled up a map of the subway system and started analyzing.

I realized the best way to analyze the system would be to create a matrix of connections: if I can transfer directly between two lines (or the walking transfer from 59th St/Lex to 63rd/Lex, since you don’t need to pay again), then put a 1 and make the cell green. If not, put a 0 and leave the cell white. That’ll show a chart of all the places you can get to on a single transfer.

Most of the lines have direct transfers, with a few being tricky. Breakout stars are the A, which connects with all but the 6, and the F, N, and R, which only miss some or all of the shuttles. Particularly difficult train lines are the G, the J, and the 6.

So this answers the question of where you can get with only 1 transfer. But what about two transfers? For that, we can multiply this matrix by itself. This is the result:

What do these numbers mean? Well, to explain, let’s look at the G –> 6, which I have highlighted in blue. The number there is 8. This means that there are 8 ways to get from the G to the 6 with two transfers:

G –> 7 –> 6
G –> D –> 6
G –> E –> 6
G –> F –> 6
G –> L –> 6
G –> M –> 6
G –> N –> 6
G –> R –> 6
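The matrix-squaring trick is easy to demonstrate on a toy network. The four "lines" below are made up purely for illustration – this is not the real MTA transfer table:

```python
# A toy 4-line network (hypothetical, for illustration only)
lines = ["W", "X", "Y", "Z"]
# transfers[i][j] = 1 if you can transfer directly between line i and line j
transfers = [
    [0, 1, 1, 0],  # W meets X and Y
    [1, 0, 1, 0],  # X meets W and Y
    [1, 1, 0, 1],  # Y meets W, X, and Z
    [0, 0, 1, 0],  # Z meets only Y
]

def matmul(a, b):
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# Squaring the adjacency matrix counts the two-transfer routes
# between each pair of lines
two_step = matmul(transfers, transfers)
print(two_step[0][3])  # 1: the only two-transfer route is W -> Y -> Z
```

Entry (i, j) of the squared matrix is exactly the number of intermediate lines k with both a direct i-to-k and a k-to-j transfer, which is what the 8 listed G-to-6 routes are counting in the real chart.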

So this chart shows that you can get from any line to any other with at most two transfers*, with one exception: the Rockaway Shuttle to the 6. However! Those stops aren’t solely serviced by the S. (The only stop in the system solely serviced by an S train is Park Pl, on the Franklin Ave Shuttle.)

Because of that, I can amend my statement to the following, which I have proven true:

During rush hour, you can get from any stop on the subway to any other with a maximum of two transfers.

But then, that gets me wondering further…this chart was just made if the connections exist, but they weren’t time-sensitive. For example, the M does not run at my stop at nights or on weekends. How would that change this chart? Especially when you consider that the E, which does not normally go to my stop, DOES at night. I leave that problem open.

———————–

* Of course, fewer transfers doesn’t always mean better. If I wanted to get from Astoria to Greenpoint, sure, I could take the N to the G, but that requires going all the way through Manhattan, way down into Brooklyn, and then back up. Instead, a quick hop from the N to the 7 to the G is much more sensible, even if it is an extra transfer.

Excel File - Subway Analysis