Squall Moan: Small Clone clone


Ahhh, where it all started.

I was jamming with a friend in his basement and he had a bunch of pedals, which I was noodling around with. None really stuck out to me until this little guy. If you want a sample of what it sounds like, there are plenty of test drives on YouTube. You may recognize its sound from Nirvana songs (only 90s kids will myeh myeh myeeehhh). read more

The Knapsack Problem: Discrete Optimization, week 2

I’ve been doing this Coursera Discrete Optimization course with my friends. It’s a lot of fun and I’m learning a bunch. The instructor is a total goofball too, which is a plus. I’ve taken a handful of online courses before, but to be honest, the assignments have usually been pretty easy. Not so with this Discrete Optimization (DO (my initials!)) course! Each week, you have to solve 6 problems, and each is graded out of 10 depending on how well you do. I believe the breakdown is: 0/10 if your answer doesn’t even match the required output format, 3/10 if you do basically anything (even if your answer is quite wrong), 7/10 if your answer is above some threshold but still not optimal, and 10/10 if your answer is optimal (I guess they know the optimal solution for all of them?). Usually, the problems increase in difficulty throughout the set; often, the last one is difficult enough that (I believe the instructor said this in the course forums) it would be a challenge even for people who do DO for a living. I think that’s pretty cool! They usually give you a ton of practice problems of various difficulties, and (though I’m not 100% sure) I think the 6 you’re graded on are usually among those.

So what is DO? I certainly didn’t know when I started this course, though I guess I should’ve been able to guess. Optimization is what it sounds like: finding the best solution you can to a given problem. The “discrete” part means that the quantities involved are integers or other discrete (it’s right there in the title!!!) components. It turns out I had actually heard of many of the problems DO applies to before, but didn’t know they were DO; I had encountered most of them in the context of P vs NP complexity.

Anyway, enough of that. Week 1 didn’t have an assignment, so the first one was in week 2, and it was the famous “knapsack problem”. I’m guessing if you’re reading this (sike, no one reads this!), you’re familiar with it, but in case you aren’t, it’s as follows. You have a knapsack with a certain (weight) capacity, K. There are a bunch of items in front of you, and each one has a certain weight (w) and value (v). You wanna fill your backpack with the most valuable (i.e., largest combined value) set of items you can, and you can only take the items whole (i.e., you can’t break one in half to fill the bag completely).

Because I’ll be referring to it a few times, the format of a given problem (just a text file) they give you looks like this:

n K
v_0 w_0
v_1 w_1
...
v_n-1 w_n-1

n is the number of items there are, and K is the capacity of your bag in this problem. v and w are the value and weight of each item. (I’ll refer to the items by their line number, starting with 0, in the following.)

Here’s where the “discrete-iness” comes into play. If you were allowed to take non-whole pieces of the items, the solution would be trivial: you would calculate the density (d = v/w) of each item, and start taking the most (densely) valuable items in decreasing order. When you get to the point where your bag can’t take the next whole item, you take as much of it as you can fit, and this is the theoretically most valuable set.
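Just to make that concrete, here’s a quick sketch of that fractional version in Python (the function name and the (value, weight) pair format are mine, not from the course’s starter code):

def fractional_value(items, capacity):
    # items is a list of (value, weight) pairs; returns the best value
    # achievable if you're allowed to take fractions of items
    total, room = 0.0, capacity
    for v, w in sorted(items, key=lambda vw: -vw[0] / vw[1]):
        if w <= room:
            # the whole item fits, take it all
            total += v
            room -= w
        else:
            # take whatever fraction fits, and the bag is full
            total += v * (room / w)
            break
    return total

Keep this in your back pocket: since no whole-items solution can ever beat the fractional one, it also gives you an upper bound, which will matter later.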

Now, for the actual problem (where you can’t take fractional pieces of items), it turns out that this is still often a decent solution, for intuitive reasons: denser items give you more bang for your buck. This is called the “greedy solution”, because it just greedily takes the best item it can at any moment. However, the problem gets interesting when that strategy actually gives you sub-optimal results. In fact, helpfully, the simplest problem they give you already has a non-greedy optimal solution (OS):

4 11
8 4
10 5
15 8
4 3

The items already happen to be ordered by decreasing density. If you did the greedy solution and took item 0 (8, 4) and then item 1 (10, 5), you couldn’t take any more items and your total value would be 18. However, this isn’t the OS! You can see that you actually wanna take the two least dense items, 2 and 3, which perfectly fill the bag and add up to 19, the OS.
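In code, the greedy pass is the same loop as the fractional sketch above, minus the fraction part; on this instance it indeed tops out at 18:

def greedy_value(items, capacity):
    # take whole items in decreasing-density order, skipping any that don't fit
    total, room = 0, capacity
    for v, w in sorted(items, key=lambda vw: -vw[0] / vw[1]):
        if w <= room:
            total += v
            room -= w
    return total

print(greedy_value([(8, 4), (10, 5), (15, 8), (4, 3)], 11))  # 18, not the optimal 19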

So, that’s the problem! Now, how do you actually solve it? (And, what does it mean to solve it?)

The most naive way you can imagine is to try every possible set, and take the largest valid (i.e., all the items fit in the bag) solution. You have a list of n items, so for each one, you can either take it (1) or leave it (0). So there are really only (hah!) 2^n possible sets of items. Naturally, this lends itself to a tree data structure, but it should also be clear that 2^n quickly becomes infeasible for surprisingly low values of n.
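A sketch of that exhaustive approach, using a bitmask over the n take/leave choices instead of an explicit tree (fine for tiny n, hopeless much past n of 25 or so):

def brute_force(items, capacity):
    # try all 2**n take(1)/leave(0) combinations and keep the best valid one
    n = len(items)
    best = 0
    for mask in range(2 ** n):
        value = weight = 0
        for i in range(n):
            if (mask >> i) & 1:
                value += items[i][0]
                weight += items[i][1]
        if weight <= capacity:
            best = max(best, value)
    return best

print(brute_force([(8, 4), (10, 5), (15, 8), (4, 3)], 11))  # 19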

Therefore, the real game is in traversing that tree in some clever way such that you don’t have to go through the whole thing. I should make a distinction here: there are two types of solution you can solve for. You can go for the OS, where you’re 100% sure the solution you end up with is optimal. This actually doesn’t necessitate going through the whole tree. Sometimes you can know that you don’t need to go farther down a branch, which lets you skip all the solutions contained in that part of the tree, but still know that the solution you find will be the OS. On the other hand, sometimes even this is infeasible, so you have to go for an approximate solution (AS), where you’re just doing the best you can. Not requiring the solution to be the OS can let you find great solutions way faster, and sometimes you even find the OS anyway. The downside is that you won’t know 100% that you got it.
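To sketch the exact flavor: one standard pruning trick (my sketch, not necessarily the course’s exact approach) is to use the fractional value from earlier as an optimistic bound, and abandon any branch whose bound can’t beat the best complete solution found so far:

def branch_and_bound(items, capacity):
    # explore take/leave decisions depth-first, pruning any subtree whose
    # optimistic (fractional) bound can't beat the best value found so far
    items = sorted(items, key=lambda vw: -vw[0] / vw[1])

    def bound(i, room):
        # best conceivable value from items[i:], allowing fractions
        total = 0.0
        for v, w in items[i:]:
            if w <= room:
                total += v
                room -= w
            else:
                return total + v * (room / w)
        return total

    best = 0

    def explore(i, value, room):
        nonlocal best
        best = max(best, value)  # the items taken so far are themselves a valid solution
        if i == len(items) or value + bound(i, room) <= best:
            return  # out of items, or this subtree provably can't improve on best
        v, w = items[i]
        if w <= room:
            explore(i + 1, value + v, room - w)  # "take" branch
        explore(i + 1, value, room)              # "leave" branch

    explore(0, 0, capacity)
    return best

print(branch_and_bound([(8, 4), (10, 5), (15, 8), (4, 3)], 11))  # 19, guaranteed optimal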

So let’s actually start. The way my friends and I originally did it (and the way I stuck with), we don’t explicitly build a tree data structure; however, by dint of doing recursion with two possible recursive calls, one forms implicitly.

The first thing to do is read in the data and store it in a list of namedtuple Item objects (I believe the Item definition was given to us). Then, because we’re going to go through the tree in order of decreasing density, we sort the list by that:

# sort items by decreasing density (value/weight)
def sortListDensity(inList):
    return sorted(inList, key=lambda item: -item.density)

# parse the "n K" header line, then one "v w" line per item
lines = input_data.split('\n')
firstLine = lines[0].split()
item_count = int(firstLine[0])
capacity = int(firstLine[1])

items = []
for i in range(1, item_count + 1):
    parts = lines[i].split()
    items.append(Item(i - 1, int(parts[0]), int(parts[1]), float(parts[0]) / float(parts[1])))

sortedList = sortListDensity(items)

read more

Fruits of South East Asia!

I forget how I found it, but for some reason I stumbled upon this page of Thai (or more generally, Southeast Asian) fruits. A bunch of them are really obvious ones (mango, banana, coconut…), but a handful of them are ones I’ve never even heard of. Naturally, I have to try them all. read more

Reading a book in one hour!

When you’re unemployed, you get to do all sorts of things that, if you had a job, you’d correctly judge as stupid, and then not do. Here’s one of them!

I was curious as to how much information I can pick up in an hour. I mean, I’ve gone to lots of talks, but a lot of the time they’re on pretty specific, advanced topics (they’re usually talks about someone’s research), so I don’t think they’re necessarily the best metric for that. Part of why I’m curious is that there’s gotta be some sort of “information retention vs time spent learning” curve, and I don’t really have a great grasp of its shape. I mean, I’m pretty sure it’s monotonically increasing with time, but I really don’t know where the best ratio lies. read more

Grouping IMDb top movies by runtime

Howdy!

This is a fun lil one. For an upcoming article, I need to know a list of (hopefully good) movies I haven’t yet seen, with similar runtimes. Now, I could have just scrolled down the list of IMDb.com’s top 250 movies, ctrl + clicking on the ones I haven’t seen, and then compared them by eye, because, to be honest, I think I’ve seen many (/most?) of them (we’ll see shortly).

But of course, that would be an efficient use of my time, and I’m learning pandas these days anyway, so why not use it!

To do what I want, I basically need to take that top 250 list (let’s say I don’t really care about the ratings within that list, I just want to select movies from it), get a column with the runtimes, manually make a column with 0/1 entries for whether I haven’t/have seen each one, and then select the ones I haven’t. Then, I could group them, or at least visualize them.
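As a tiny sketch of where that’s headed (toy data and made-up column names, just to show the shape of it):

import pandas as pd

# toy version of the end goal: keep the movies marked as unseen (Seen == 0),
# then bucket them into half-hour runtime bins
df = pd.DataFrame({"Title": ["A", "B", "C", "D"],
                   "Runtime": [96, 142, 101, 175],
                   "Seen": [1, 0, 0, 1]})
unseen = df[df["Seen"] == 0]
print(unseen.groupby(pd.cut(unseen["Runtime"], bins=range(60, 241, 30))).size())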

The first (tiny) hurdle comes from the data source. IMDb.com very helpfully provides several zipped files of ALL their movies (beware, the “basics” one is 420MB unzipped!), buuut… they have separate files for the table with the runtimes (title.basics.tsv.gz) and the ratings (title.ratings.tsv.gz). That wouldn’t be too bad, you might think: you could just sort the ratings file by the rating column, take those entries, and select from the basics file, to get the runtimes.

Buuuut… a quick glance (or if you’ve ever just perused the dark back alleys of IMDb itself) reveals that there are many entries with very high ratings, higher than the top 250 scores (which range from 8.0 to 9.2). This is probably because there are lots of smaller productions where you get a selection bias, such that pretty much the only people voting are the ones who really like the movie, so they’re all voting 10. Of course, IMDb actually links to an explanation of this effect:

As indicated at the Top Rated Movies page, only votes from regular IMDb voters are considered when creating the top 250 out of the full voting database. This explains any difference between the vote averages reported in the top 250 lists and those on the individual title pages. This also explains why movies or shows you might think from their averages ought to appear on the list yet do not actually appear there.

To maintain the effectiveness of the top 250 lists, we deliberately do not disclose the criteria used for a person to be counted as a regular voter.

and says on the top 250 page itself:

The list is ranked by a formula which includes the number of ratings each movie received from users, and value of ratings received from regular users
To be included on the list, a movie must receive ratings from at least 25000 users

I assumed something like this before starting, but wasn’t sure how they did it. I always figured the simplest way would be to have some minimum number of votes as a threshold to even qualify (which they do). The “small, religiously devoted fanbase theory” of those artificially high ratings would probably break down if you set it correctly — I mean, once you set the minimum-votes threshold high enough, if the rating is still high, it’s not really “artificial” anymore, is it? There are potential problems with that, like only selecting for really large productions (depending on the threshold). It appears they actually do this, but they also add a secret “special sauce” where they weight certain votes more; they don’t say how.

Anyway, that’s a bit of a long-winded way to say that it’d be hard to do what I said above to get the runtimes of the top 250. At this point, I saw a few options. I could probably try their method manually, using the ratings file (which has the average rating and number of votes for each title): take the subset of movies with a rating at or above the minimum of the top 250 list, and then threshold those by the minimum number of votes. Maybe I’ll try this anyway, but I assumed (because they say they do something else in addition) that I might end up with a different list if I did that.
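For reference, that manual attempt would only be a few lines of pandas. A sketch (the column names tconst, averageRating, numVotes, titleType, and runtimeMinutes are the ones IMDb documents for these files; I wouldn’t expect this to reproduce their exact list, since the secret vote weighting is missing):

import pandas as pd

# rough cut at the top-250 criteria: rating >= 8.0 and >= 25000 votes,
# then join against the basics file to pick up runtimes
ratings = pd.read_csv("title.ratings.tsv.gz", sep="\t")
basics = pd.read_csv("title.basics.tsv.gz", sep="\t", low_memory=False)

candidates = ratings[(ratings["averageRating"] >= 8.0) &
                     (ratings["numVotes"] >= 25000)]
merged = candidates.merge(basics, on="tconst")
movies = merged[merged["titleType"] == "movie"]  # drop shows, episodes, shorts

# note: missing runtimes appear as the literal string "\N" in these files
print(movies[["primaryTitle", "runtimeMinutes", "averageRating", "numVotes"]]
      .sort_values("averageRating", ascending=False).head())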

Another thing I briefly considered (that, at this point, may have been much easier) would be some sort of web scraping. It’d be reaaaaaal easy (in theory) to have a script go to the link for each entry on the top 250 page (which would lead directly to the actual movie, which as we’ll see shortly, is actually a bit of a pain), and then each page has a well defined “runtime” field right below the title. I briefly debated this (and maybe I’ll try it later), but I don’t actually know how to do web scraping in python yet, so it would probably be a really hacky job on my part.
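For posterity, my vague mental picture of that scraping job is below. Emphasis on hacky: the requests and BeautifulSoup calls are real, but the selectors are pure guesses about IMDb’s markup, and I haven’t run any of it.

import requests
from bs4 import BeautifulSoup

# hypothetical: walk the top-250 chart, follow each title link, grab the runtime
headers = {"User-Agent": "Mozilla/5.0"}  # IMDb supposedly rejects the default UA
chart = requests.get("https://www.imdb.com/chart/top/", headers=headers)
soup = BeautifulSoup(chart.text, "html.parser")

for link in soup.select("td.titleColumn a")[:5]:  # guessed selector; first 5 only
    page = requests.get("https://www.imdb.com" + link["href"], headers=headers)
    movie = BeautifulSoup(page.text, "html.parser")
    # crude: grab the first bit of page text that mentions minutes
    runtime = movie.find(string=lambda s: s and "min" in s)
    print(link.get_text(strip=True), "->", runtime)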

So, speaking of hack-y, here’s what I ended up doing. Everything is presented in bits because I did it in a Jupyter notebook.

To get around the “which movies are actually the top 250” problem, I literally copied the top 250 list from the page, pasted it into a text file, imported that, and dropped two columns that ended up being garbage. Then I had to do a tiny bit of parsing: using “\t” as a delimiter worked to separate the title and year, but not the rank and title. So I had to split that column on the “.” after the rank number, setting n=1 because some titles have a period in their name as well (Monsters, Inc., for example), and you wouldn’t want to chop those up into separate fields. Ahhh, details!

movieRatings_df = pd.read_csv("copypasteRatings.csv", sep="\t")
display(movieRatings_df.head())

# drop the two junk columns that came along with the copy-paste
movieRatings_df = movieRatings_df.drop(["Your Rating", "Unnamed: 3"], axis=1)
display(movieRatings_df.head())

# split "Rank & Title" on the first "." only, so titles with periods survive
dotsplit = movieRatings_df["Rank & Title"].str.split(".", n=1)
titleYears = pd.Series([entry[1] for entry in dotsplit])
rank = pd.Series([entry[0] for entry in dotsplit]).rename("Rank")
years = pd.Series([int(title[-5:-1]) for title in titleYears]).rename("Year")
titles = pd.Series([title[1:-7] for title in titleYears]).rename("Title")

ratings_df = pd.concat([rank, titles, years], axis=1)
ratings_df.head()

read more

Stickin it to the Myan-mar

Ahhhhhhhhh, Myanmar. “You most likely know it as Myanmar, but it’ll always be Burma to me.”

While I originally planned to go to Thailand, Laos, and Cambodia, and maybe Vietnam, I really didn’t expect to go to Myanmar at all. To be honest, pretty much all I knew about it was that line from Seinfeld, and that there’s currently a genocide/ethnic cleansing/refugee crisis being committed by the Myanmar government against the Rohingya in the west of the country (more on that later). However, I kept meeting people who told me that it was the highlight of their whole trip in SEA. When I had a few weeks to kill before meeting my friends in Vietnam, since I had kind of tired of Cambodia, it seemed like the perfect opportunity. read more

Yes we Cam..bodia

The 4000 Islands are in the very south of Laos, where it borders Cambodia, so Cambodia was naturally my next stop. We boarded a bus to Siem Reap and the SEA nonsense quickly began. We were herded into two minivans for the border, which was only about 15 minutes away. These minivans actually weren’t too bad, but of course they weren’t the ones we’d actually be riding in for the long haul (noticing a pattern here: nice transport for the beginning, then switch to awful transport…maybe because if people saw what they paid for before they got on, they’d demand money back or something?). The border was a typical hilarious chaotic shitshow of nervously handing over our passports, short barked orders, and not knowing what’s going on. The entrance visa for Cambodia was $37 (for the US). I’d heard that you should only hand over exact change, because if you handed over $40 for example, they’d probably invent some “fee” so you didn’t get money back (welcome to Scambodia, as I’ve heard it called). Strangely, after getting our visas, they kinda just pointed us in a direction to walk, and we walked for a couple minutes through these empty parking lots towards a bunch of stores (the “pickup spot” I guess) that probably should have been a lot closer to the crossing point… That was a little strange, and probably an accurate introduction to Cambodia. read more

Vientiane to the 4000 Islands, the La(o)st of Laos

Hey there again! I guess last time I left off, I was about to leave Vang Vieng to head farther south in Laos, by way of Vientiane, first. The main goal was to do two motorbike loops in central and south Laos, but I’ll get to that later. If you want to go south, especially by common bus routes, you’ll almost certainly end up going through the capital, Vientiane. I had heard pretty dismal stuff about it, but figured I’d give it about a day’s worth of attention, which I think was a good choice. read more

Hoo boy, I’ve fallen behind: Luang Prabang to Vientiane

Welp, despite my promise to do this more frequently, here we are. Two months in and number three. Womp womp.

Let’s see, I guess I left off in Luang Prabang…

The next day was a big one. My two friends and I agreed to get up early to see the sun rise from Mount Phousi, which is a huge hill in the middle of Luang Prabang with a temple on top. I would’ve liked to see sunset there too, but one of my friends had gone the day before and said that it was a shitshow mob of tourists, and that you see more of a sea of cellphones than the sun itself. However, get up at 5AM and…you’ll definitely have less company up there. So we did that, and were climbing the hill around 5:50AM or so. We actually passed some people and monks coming back from the alms-giving ceremony thing they do on the streets in many Laos cities. Getting up that early to do pretty much anything always feels pretty cool. You can be doing something fairly mundane but it’ll feel like a mission, because you’re awake when it’s dark, but on the “other side” of the day. But it felt especially cool climbing up these steep stairs through the trees before sunrise. Anyway, we finally got there, panting like the out-of-shape travelers we were, and it was already getting light enough to make out the city. There were only maybe…2 or 3? other people already there, which was cool. We all sat mostly in silence, happily watching it get lighter, and occasionally whispering to each other. However, soon enough, a fairly large group of tourists came up, talking pretty loudly and even shouting to each other. A girl, one of the ones who had been up there before us, noticeably winced and then moved away to a different area with a worse view, to get away from them. I…kind of don’t get it. I get if they want to talk to their friends, but they can still do that quietly, right? Anyway, enough kvetching. read more