RPi camera, part 3: a few incremental fixes

Round 3! Okay, this is where I try and polish it up in a couple ways.

Here are the things I said last time I needed to make better:

  • Send pics more conventionally
  • Fix detection sensitivity (still often picking up strong shade/sunlight quirks)
  • Total design flaw: the log file gets sent with each picture, but it’s updated when each picture is written, so it’s sometimes more up to date than the pics in the folder. That is, if the camera function creates 30 pictures and they’re immediately added to the log file, the log file gets sent along with the first of those 30 and already lists all 30, even though only one has actually been sent
  • Better image classifier architecture
  • Better labeled image dataset (32×32 is tiny)
  • Sliding windows over detected images

Sliding windows

What is “sliding windows”? It’s a pretty simple idea. Here’s the motivating problem, first. When my convnet looks at an image I give it and tries to guess whether it has a car in it, it was trained on images where the car mostly filled the frame, so it will tend to only correctly classify images where the car also mostly fills the frame. If you have an image (like the one below) where there is a car but it’s mostly in one corner, it may not get it (because it’s looking for features bigger than that).

There are a couple other effects too. One is that if we’re only classifying 32×32 square images, then what I’ve been doing so far (resizing the image so the smaller side is 32, then squeezing the bigger side down to 32 as well) distorts the image, which makes it harder to classify. Lastly, if the actual image we’re giving it is something like 256×512, then even if it would otherwise have classified it correctly, by the time it’s smooshed down to 32×32 it might not have enough resolution left in the car region of the image to do it.

So what can fix a lot of these problems? You define a “window”, a subset of the image, and “slide” it over the image in steps. Then you pass this window subset to the classifier, so it’s actually only classifying the subset. You might choose a stride for the sliding window such that you get M windows in the vertical dimension and N in the horizontal. So you’re really doing M×N total classifications for the whole image, and then if any one of them says “car!”, you say that the image contains a car.
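In code, the idea is only a few lines. Here’s a minimal sketch (detect_car, classify_window, and the sizes are placeholders for illustration, not my actual setup):

def detect_car(img, classify_window, win=32, stride=16, thresh=0.5):
    # img: numpy-style array of shape (H, W) or (H, W, C).
    # classify_window: returns P(car) for a win x win crop.
    H, W = img.shape[:2]
    best = 0.0
    for top in range(0, H - win + 1, stride):
        for left in range(0, W - win + 1, stride):
            crop = img[top:top + win, left:left + win]
            best = max(best, classify_window(crop))
    # One confident window is enough to call it a car.
    return best >= thresh, best

With win=32 and stride=16, a 64×128 image gives you 3×7 windows, i.e., the M×N classifications from above.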

Here’s a little illustration of mine, where the red grid over the green outlined window shows the windows being used (it’s a little hard to tell them apart, but they’re squares. There are three in the vertical direction):

There are of course a million little quirks and variants and choices to make with this. For example, I think it’s common to choose two sizes for the window, which should let you look at two different “scales”. Also, you have to choose some balance between having more sub windows and the computation time it will take to actually process them. I’m also pretty sure some convnets can have this built in by choosing different filter sizes (like, one that would group a block of pixels as a single pixel to make a larger “window”).

Anyway, how does it work? Here are the results using my CIFAR-10 trained convnet from last time, on the same little group of detected images. I show the certainty distribution, which is the certainty that it thinks it detects a car.

No sliding windows:

Detected:

Not detected:

Sliding windows:

Detected:

Not detected:

Definitely better! But still getting a ton of false positives, which is annoying. Honestly it may be because they’re 32×32.

Fixing image sending

So I had a bit of a mystery on my hands. I was finding that after a while, my program was just…stopping. Not crashing, not giving any error, just stopping after about an hour. What I mean by stopping is that, while normally it outputs a line for each image it detects (on the RPi side), it would stop outputting anything, but not stop running. It took me embarrassingly long to figure out, but here’s what I did. I first made a Debug class that basically logs everything the program does, right at the moment of doing it. This is actually a pretty handy thing to have around anyway, and it basically doesn’t slow things down. You’ll notice that I’m periodically logging the CPU/Mem/temp, since I read somewhere that those can cause problems, but all the values I’ve seen on mine are fine.
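A minimal sketch of the idea looks something like this (simplified compared to what I described above, and the thermal path is an RPi-specific assumption):

import os
from datetime import datetime

class Debug:
    # Append one timestamped line per event, written out immediately,
    # so the log survives even if the program dies mid-run.
    def __init__(self, log_fname='debug.log'):
        self.log_fname = log_fname

    def log(self, msg):
        with open(self.log_fname, 'a') as f:
            f.write('{}\t{}\n'.format(datetime.now(), msg))

    def logSystemStats(self):
        # getloadavg() is stdlib on Linux; the thermal path below is
        # an assumption (adjust for your board).
        load1, _, _ = os.getloadavg()
        try:
            with open('/sys/class/thermal/thermal_zone0/temp') as f:
                temp_c = int(f.read().strip()) / 1000.0
        except IOError:
            temp_c = float('nan')
        self.log('load={:.2f} temp={:.1f}C'.format(load1, temp_c))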

Anyway, here was the first clue, where you can see where it stops after about an hour: it’s saving images steadily for a while, and then it stops saving them, but continues to send them. Welp, you probably guessed before I did, but while I was aware of how little space my RPi had on it (~700MB to play with), I thought that because I was removing the files right after sending them, I’d be okay. Howeverrrr:

So I was running out of space!

One thing I did was immediately get a 32GB micro SD card and clone my RPi onto it, just to have a bit more wiggle room. To be honest, that might solve the problem by itself, since I doubt I’d ever keep the program running long enough to generate that much data, but it wouldn’t address the real problem here, which is that my files are sending way too heckin’ slow!

My files are usually ~100kB, which should be easy to send and keep up with, even if something like 10 per second are being produced (that’s only 1 MB/s). For example, I know off the top of my head that when I send files via scp between my desktop and RPi, the transfer rate it shows is usually something like 1.5 MB/s. So what’s going on?

It turns out that that “S” that stands for “secure” in SCP (or SSH, which it’s based on) is pretty important! As they discuss in this thread where it seems like the person was having exactly my problem, there’s actually some pretty nasty overhead involved in encrypting the file you’re going to send. Of course, I don’t care about that! I’m just sending stuff I don’t care about over my LAN.

So one option in that thread was using a weaker cipher, while another was to use the rcp command, which is basically a totally unencrypted scp. I’m going to take a little diversion for a minute here, because I wanted to see just how the options compared.

What I did was create a few dummy files, smallfile.txt (100 kB), mediumfile.txt (1 MB), and bigfile.txt (5 MB). First I just sent smallfile.txt 10 times to get a rough sense of the speed and overhead:

import subprocess
from time import time

# files (the dummy file paths) and remoteHostPath are defined earlier.
times = []
for i in range(10):
    file = files[0]  # smallfile.txt
    print('processing file', file)
    start = time()
    subprocess.check_call(['scp', file, remoteHostPath])
    total = time() - start
    print('time elapsed:', total)
    times.append(total)
print('done')
print(times)
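For the weaker-cipher option from that thread, it’s just one extra flag on the same timing loop (aes128-ctr is one of the cheaper ciphers; which ones are available depends on your OpenSSH build, so treat this as an assumption):

# Identical loop, but with a cheaper cipher selected:
subprocess.check_call(['scp', '-c', 'aes128-ctr', file, remoteHostPath])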

Fun with Genetic Algorithms and the N Queens Problem

Genetic Algorithms are cool!

I was recently skimming through Russell and Norvig’s AI: A Modern Approach and came to the section on Genetic Algorithms (GA). Briefly, they’re a type of algorithm inspired by genetics and evolution, in which you have a problem you’d like to solve and some initial attempts at solutions to the problem, and you combine those solutions (and randomly alter them slightly) to hopefully produce better solutions. It’s cool for several reasons, but one really cool one is that they’re often used to “evolve” to an optimal solution in things like design of objects (see the antenna in the Wikipedia article). So, that’s kind of doing evolution on objects rather than living things. Just take a look at the applications they’re used for.

A lot of it makes more sense when you look at it in the context of evolution, because it’s a pretty decent analogy. A GA “solution attempt” I mentioned above is called an “individual”, like the individual of a species. It could be in many different formats (a string, a 2D array, etc), depending on the problem. You call the group of individuals you currently have the “population”. For species in nature, the “problem” they’re trying to solve is finding the best way to survive and pass on their genes, given the environment. For real evolution and genetics, the success of an individual is somewhat binary: does it pass on its genes, or not? (I guess you could actually also consider that there are grades of passing on your genes; i.e., it might be better to have 10 offspring than 1.) For GA, the success is measured by a “fitness function”, which is a heuristic that you have to choose, depending on the problem.

For each generation, you have a population of different individuals. To continue the analogy, real species mate, and create offspring with combined/mixed genes. In GA we do something called “crossover”, in which the attributes of two individuals are mixed to produce another individual. Similarly, we introduce “mutations” to this new individual, where we slightly change some attribute of them. This is actually very important (see evidence below!), because it allows new qualities to be introduced to the population, which you wouldn’t necessarily get if you just mixed together the current population repeatedly (exactly analogous with real evolution).
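To make the mutation step concrete, here’s a minimal sketch for an individual encoded as a list of ints (the encoding and the 0.1 rate are just illustrative assumptions):

from random import random, randrange

def mutate(individual, n_values, p_mutate=0.1):
    # With probability p_mutate, overwrite one randomly chosen
    # attribute with a random new value. This is what injects
    # qualities the current population doesn't have yet.
    individual = list(individual)
    if random() < p_mutate:
        individual[randrange(len(individual))] = randrange(n_values)
    return individual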

So, that’s the rough idea: you have a population, you have them “mate” in some aspect to produce new individuals, you mutate those new ones a little, and now apply your fitness function (i.e., how good is this individual?), and keep some subset of this new population. You could keep the whole population if you wanted to — but the number would quickly grow so large that you’d basically just be doing the same thing as a brute force search over all possible individuals.

I was aware of GA already, but had never actually implemented one. The example they use in AIMA was the 8 Queens problem (8QP), which is a fun little puzzle. Embarrassingly, for a chess player, I had never actually solved it! So I thought I’d dip my toe into GA and do it, and also maybe characterize it a little.

So, let’s set it up! An “individual” here is obviously going to be a board state (i.e., where all the queens are). Most naively, you might think that means a board state is an 8×8 2D array, with 0’s for “no queen on this spot” and 1’s for “queen here”. But if you look at the 8QP for a second, you’ll quickly see that the queens each have to be on a different row and a different column. Therefore, you can really specify a board by an 8-long array of integers, where each index/entry represents a row, and the value of that entry is the column number that queen is in. This automatically constrains them to be in different rows, and it makes the coding a lot simpler.

What’s the fitness function (FF)? You want some way of deciding how good a board is. A pretty obvious one for this problem is the number of pairs of queens that are attacking each other, for a given board state. If you solve the problem, FF = 0. So for this problem, you want a lower FF.
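With the array representation above, counting attacking pairs is short. A minimal sketch of the idea:

def fitness(board_state):
    # board_state[row] = column of the queen in that row, so two queens
    # can only attack along a column or a diagonal (never a row).
    n = len(board_state)
    attacking_pairs = 0
    for i in range(n):
        for j in range(i + 1, n):
            same_col = board_state[i] == board_state[j]
            same_diag = abs(board_state[i] - board_state[j]) == j - i
            if same_col or same_diag:
                attacking_pairs += 1
    return attacking_pairs

A solved board returns 0; e.g., fitness([0, 4, 7, 5, 2, 6, 1, 3]) is 0 for one known 8QP solution.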

Here, crossover is combining two boards. To do this, we just choose a row number, and then split the two parents at that index, and create two new boards by swapping the sides. It’s probably more easily explained in the code:

from copy import deepcopy
from random import randint

def mate(self, board1, board2):
    # Splice two parent boards at a random crossover row, producing
    # two children with swapped left-hand segments.
    board1 = deepcopy(board1)
    board2 = deepcopy(board2)
    crossover = randint(1, board1.board_size - 1)
    temp = board1.board_state[:crossover]
    board1.board_state[:crossover] = board2.board_state[:crossover]
    board2.board_state[:crossover] = temp
    return(board1, board2)

EDX Artificial Intelligence, week 4

Week 4 is where it gets really good. Week 3 was cool because it got into heuristic search, which is the start of what feels like a glimmer of “intelligence”, but week 4 is on adversarial search and games. Hot damn that’s cool. Additionally (skip to the bottom if you’re only interested in that), the project for the week was to make an AI that plays the game 2048!

Theory

Adversarial search problems are basically the same as a game (a certain type, anyway). The idea is that there’s an opponent, whom we can’t control, and they’re planning against us! So the main difference between a game and a typical search is that the optimal solution for a game isn’t a sequence of actions, but a “policy” for what to do in response to the opponent’s actions. Fortunately, they can still be very well modeled by search trees if you know how.

You can divide games up into the cross of two categories, each with two possibilities: how much information you can see (fully vs partially observable) and determinism (fully deterministic vs some randomness), which will change exactly how you play the game. We’re mostly going to look at fully observable, deterministic games. Further, we’re also going to look at zero-sum games, where by definition a good thing for your opponent is a bad thing for you, and the amount of “goodness” (i.e., points) in the game is conserved.

A big part of what we do here is called “embedded thinking”. This means that, to decide what choice to make, you’ll need to consider what it will allow your opponent to do, but of course what he does (to win) will need to consider what you’ll do. So you end up going back and forth (thinking alternately as yourself and your opponent) to decide your best move. To illustrate the following, we’re gonna consider two players, Max (who tries to maximize the resulting score) and Min (who…tries to minimize it).

The first search tactic we look at is the minimax algorithm. This is a recursive algorithm (but in a slightly indirect way, like I’ll explain below) that starts at the top of the game tree and tries to figure out the choice that should be made to maximize Max’s possible score, given that at each choice Min can make, Min will try to minimize the score (it’s really solving for Min as well, because you just take the opposite choice to get the correct plays for Min). It works like this. There are two pretty similar functions, Minimize() and Maximize(). Maximize is called on the top, first node (because we’re trying to maximize the tree). Within Maximize, if it’s a terminal node (game over), it returns the value of that node. If it’s not, it calls Minimize on each of the child nodes, and then, if any of those returns from Minimize are bigger than the best value achieved so far (which starts off as negative infinity in each function call so at least one child gets chosen), that node is selected and that value becomes the new best value. Finally, it returns whatever is the best node and value after calling Minimize on each of the children. What does Minimize do? Well, nearly the exact same, but with the sign switched at each place. It calls Maximize on each of its children, and tries to find the minimum value of them (starting with a minimum value of positive infinity).

So, the algorithm is recursive, but in a “flip flop” way, where Maximize() calls Minimize(), which then calls Maximize(), etc, until a terminal node is found (as opposed to what I’ve usually seen recursive algorithms do, just call themselves).
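In skeleton form it looks like this (is_terminal(), children(), and value() are placeholder hooks for whatever game you’re searching, not a real API):

def maximize(state):
    # Returns (best_child, best_value) from Max's point of view.
    if state.is_terminal():
        return (None, state.value())
    best_child, best_value = None, float('-inf')
    for child in state.children():
        _, value = minimize(child)
        if value > best_value:
            best_child, best_value = child, value
    return (best_child, best_value)

def minimize(state):
    # Identical, but with the sign flipped everywhere.
    if state.is_terminal():
        return (None, state.value())
    best_child, best_value = None, float('inf')
    for child in state.children():
        _, value = maximize(child)
        if value < best_value:
            best_child, best_value = child, value
    return (best_child, best_value)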

This is actually pretty familiar to anyone who’s played chess (or lots of strategy games, really). When considering a move to make, you then put yourself in your opponent’s shoes and think, “what move would I make if I were them?” Of course, you don’t always get to complete it to “terminal nodes” (i.e., checkmate), so you just kind of say “okay, 5 moves down the line, if we both do our best that I can see, this looks like it maximizes the game for me” (and of course sometimes you didn’t look far enough, and there are surprises!). But what you’re really doing is some version of the minimax algorithm!

So what can you do when you can’t really get to a terminal node? They offer 3 strategies:

  1. Use an evaluation function for non-terminal nodes. That is, only go D layers deep, but have some function that tells you how well you’re doing at that node when you stop.
  2. Use Iterative Deepening Search (IDS). You basically limit the depth, keep re-calculating it for deeper depths, and then take the best result you got when you had to stop.
  3. Prune, which means cut out big chunks of the tree when you either know or suspect that it won’t give you better results.

They focus on pruning first, and specifically, a famous algorithm called alpha-beta pruning. The basic idea of pruning is that if you’re smart, and keep track of the right stuff while traversing the tree, sometimes you can skip whole sections of the tree because you realize they just can’t do better than what you’ve already found. So that’s pruning generally, but alpha-beta pruning is more specifically about passing the best values for Max (alpha) and Min (beta) down the tree, and then propagating new discoveries back up it.

The specifics can actually get a little confusing, so I’ll try and explain it here for my own knowledge. The first thing to note is that alpha and beta are values at each (expanded) node, not global variables. Similar to minimax, there will be a “flip flop” recursion between Maximize and Minimize. The difference here is that each time one of them is called, they’re called with an alpha and a beta value. The first time, Maximize is called on the root node with alpha = -infinity and beta = +infinity. At the beginning of Maximize (everything will be the same for Minimize, but backwards), the (best node, best value) combo for that node’s expansion is set to (Null, -infinity). Then, it goes through a for loop over the children. For each child, Minimize() is called on it, and we get back the resulting value. If this value is bigger than the best value, it becomes the new best value (so the first time through, anything beats -infinity). Then, this best value (whether or not it got updated) is compared to beta (the one passed to Maximize). If it’s at least beta, that means Minimize above us will certainly choose that beta, which is worse (for Max) than what Max can already achieve on another branch. Therefore, we can ignore the rest of that branch, and prune. Otherwise, if the best value is bigger than alpha, alpha gets updated before we move on to the next child.
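Here’s the Maximize half in the same skeleton style as before (Minimize mirrors it with +infinity, <, and the roles of alpha and beta swapped):

def maximize(state, alpha, beta):
    if state.is_terminal():
        return (None, state.value())
    best_child, best_value = None, float('-inf')
    for child in state.children():
        _, value = minimize(child, alpha, beta)
        if value > best_value:
            best_child, best_value = child, value
        if best_value >= beta:
            # Min above us already has a better (smaller) option than
            # anything this branch can give it: skip the other children.
            break
        alpha = max(alpha, best_value)
    return (best_child, best_value)

# First call: maximize(root, float('-inf'), float('inf'))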

There’s also a brief section on move ordering, i.e., the order in which you evaluate a node’s children. Since alpha-beta pruning can let you skip the rest of a node’s children once evaluating one of them gives you alpha ≥ beta, you can imagine you’d want to evaluate the children most likely to cause that cutoff first.

Project

Alright, now for the actual project! The cool part. The goal here is to make an AI that plays 2048. There are a few constraints. You only get 0.2 seconds per move, so you have to cut off your search of the game tree after that time and just return the best move you’ve found. Additionally, in the typical minimax algorithm, you assume your opponent (if you’re Max) wants to minimize, that is, give you the worst possible move. However, in 2048, the 2 or 4 tile that’s placed each turn is random. So there’s a good chance you could get better scores if you programmed your AI with this in mind (i.e., if there’s a move where the computer, playing randomly as it actually does, has a 1% chance of producing the placement that’s worst for you, but the other 99% are great for you, that could actually be a pretty great move to make). However, the instructions say to program it as though the game is actively playing against you, giving you the worst placement at each point.

I mentioned above what you have to do when you can’t get to a terminal node. So far they showed trees that seemed to assume you could see the outcomes of Max and Min choosing their best moves and finishing the game to completion. I’m not sure you even could theoretically (given infinite calculation time) get all the ending game states for 2048, because it seems possible that there are some rare branches of the tree where the game could continue indefinitely. (Actually, is this true? If you could place the tiles yourself and choose the moves, could the game go on forever? I’m not sure, because it seems like as the game goes on, to get higher numbers, you need the lower ones that build up to them present as well, which would clog the board up. Edit: actually, see below for an example of this.)

Anyway, the point is that when you can’t reach terminal states in the game tree (that is, the game doesn’t end, giving you a definite value for that state), you basically have to make up your own value for a state. And that’s a heuristic, in a sense! It turns out the choice of heuristic is very important, and it’s the hard part of this assignment. At each node, you’ll calculate the heuristic for that game state, which will let you make choices.

They suggest starting by just implementing a-b pruning with the heuristic of “number of free cells”, which should let you get some combo of 256’s and 512’s if you run it repeatedly (the way they score it is to run your program 10 times and take the 5 best scores). This actually isn’t a bad heuristic, but could use a little tweaking. In general, you probably do want more open cells, and that also encourages combining them, which will tend to give you higher number tiles. So, I implemented this, and it indeed gave me a bunch of 256’s and 512’s. I also implemented IDS to start, so for a given move, I have a “depth limit” to search in the tree, which I increase, until I run out of time. That still had me at the same scores.
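The IDS-plus-clock part is conceptually just a loop over depths (search_to_depth here is a placeholder for a depth-limited version of the alpha-beta search; in practice the search itself also needs to check the clock so it can bail out mid-level):

from time import time

MOVE_TIME = 0.2  # the per-move budget from the assignment

def choose_move(grid, search_to_depth):
    start = time()
    best_move, depth = None, 1
    while time() - start < MOVE_TIME:
        move = search_to_depth(grid, depth)
        if move is not None:
            best_move = move  # keep the deepest completed result
        depth += 1
    return best_move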

Okay, to the code. I’m mostly going to focus on what I did for the heuristic here, because that’s the interesting part, and honestly, I did the main parts in Minimize() and Maximize() pretty ugly.

The simplest, the starting heuristic, was just:

def heuristic(self, grid):
    # For the heuristic, higher is better for the playerAI.
    N_free_cells = len(grid.getAvailableCells())
    return(N_free_cells)

But now let’s try to get better. I don’t know a ton about 2048 but the one thing I did know was that you want your highest value in the corner. To do that, I added another piece to the heuristic function, that just adds the value of the tile in the corner to the heuristic, still keeping the N_free_cells thing:

def heuristic(self, grid):
    # For the heuristic, higher is better for the playerAI.
    N_free_cells = len(grid.getAvailableCells())
    corner_weight_val = grid.getCellValue((0,0))
    return(self.weight1*N_free_cells + self.weight2*corner_weight_val)

The Knapsack Problem: Discrete Optimization, week 2

I’ve been doing this Coursera Discrete Optimization course with my friends. It’s a lot of fun and I’m learning a bunch. The instructor is a total goofball too, which is a plus. I’ve taken a handful of online courses before, but to be honest, the assignments have usually been pretty easy. Not so with this Discrete Optimization (DO (my initials!)) course! Each week, you have to solve 6 problems, and each is graded out of 10 depending on how well you do. I believe the breakdown is: 0/10 if your answer doesn’t even match the output format required, 3/10 if you do basically anything (even if your answer is quite wrong), 7/10 if you have an answer above some threshold but still not perfect, and 10/10 if your answer is optimal (I guess they know the optimal solution for all of them?). Usually, the problems increase in hardness throughout the set; often, the last one is difficult enough that (I believe we saw this said in the course forums by the instructor) it would be a challenge for people who specialize in DO for a living. I think that’s pretty cool! They usually give you a ton of practice problems of various difficulties, and (though I’m not 100% sure) I think the 6 you’re graded on are usually among those.

So what is DO? I certainly didn’t know when I started this course, though I guess I should’ve been able to guess. Optimization is what it sounds like, finding the best solution you can for given problems. The “discrete” part is that the quantities involved are integers or discrete (that’s the name in the title!!!) components. It turns out I had actually heard of many of the problems that DO applies to before, but didn’t know they were DO. I had heard most of them in the context of P vs NP complexity.

Anyway, enough of that. Week 1 didn’t have an assignment, so the first one was in week 2, and it was the famous “knapsack problem”. I’m guessing if you’re reading this (sike, no one reads this!), you’re familiar with it, but in case you aren’t, it’s as follows. You have a knapsack with a certain (weight) capacity, K. There are a bunch of items in front of you, and each one has a certain weight (w) and value (v). You wanna fill your backpack with the most valuable (i.e., largest combined value) set of items you can, and you can only take the items whole (i.e., you can’t break one in half to fill the bag completely).

Because I’ll be referring to it a few times, the format of a given problem (just a text file) they give you looks like this:

n K
v_0 w_0
v_1 w_1
...
v_n-1 w_n-1

n is the number of items there are, and K is the capacity of your bag in this problem. v and w are the value and weight of each item. (I’ll refer to the items by their line number, starting with 0, in the following.)

Here’s where the “discrete-iness” comes into play. If you were allowed to take non-whole pieces of the items, the solution would be trivial: you would calculate the density (d = v/w) of each item, and start taking the most (densely) valuable items in decreasing order. When you get to the point where your bag can’t take the next whole item, you take as much of it as you can fit, and this is the theoretically most valuable set.
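That fractional version is worth writing down, because it doubles as a quick upper bound on the true answer. A minimal sketch, with items as plain (value, weight) pairs rather than the namedtuples I use later:

def fractional_best(items, K):
    # Take items in decreasing density; top off the bag with a
    # fraction of the first item that doesn't fit whole.
    total, room = 0.0, K
    for v, w in sorted(items, key=lambda vw: -vw[0] / vw[1]):
        if w <= room:
            total += v
            room -= w
        else:
            total += v * room / w
            break
    return total

On the 4-item example below, fractional_best([(8,4), (10,5), (15,8), (4,3)], 11) gives 21.75, comfortably above the true optimum of 19.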

Now, for the actual problem (where you can’t take fractional pieces of items) it turns out that this is still often a decent solution, for intuitive reasons; denser items are giving you more bang for your buck. This is called the “greedy solution”, because it just greedily takes the best item it can at any moment. However, the problem gets interesting when that strategy will actually give you sub-optimal results. In fact, helpfully, the simplest problem they give you already has a non-greedy optimal solution (OS):

4 11
8 4
10 5
15 8
4 3

The items already happen to be ordered by decreasing density. If you did the greedy solution and took item 0 (8, 4) and then item 1 (10, 5), you couldn’t take any more items and your total value would be 18. However, this isn’t the OS! You can see that you actually wanna take the two least dense items, 2 and 3, which perfectly fill the bag and add up to 19, the OS.

So, that’s the problem! Now, how do you actually solve it? (And, what does it mean to solve it?)

The most naive way you can imagine is to try every possible set, and take the largest valid (i.e., all the items fit in the bag) solution. You have a list of n items, so for each one, you can either take it (1) or leave it (0). So there are really only (hah!) 2^n possible sets of items. Naturally, this lends itself to a tree data structure, but it should also be clear that 2^n quickly becomes infeasible for surprisingly low values of n.
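Just to see the scale of the problem, the naive enumeration is only a few lines (assuming items with .value and .weight attributes, like the Item namedtuples further down); it’s fine for tiny n, but hopeless somewhere around n ≈ 30:

from itertools import product

def brute_force(items, K):
    # Try all 2^n take/leave combinations, keeping the best valid one.
    best = 0
    for take in product((0, 1), repeat=len(items)):
        weight = sum(t * item.weight for t, item in zip(take, items))
        if weight <= K:
            best = max(best, sum(t * item.value for t, item in zip(take, items)))
    return best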

Therefore, the real game is in traversing that tree in some clever way such that you don’t have to go through the whole tree. I should make a distinction here. There are two types of solution you can solve for. You can go for the OS, where you’re 100% sure the solution you end up with is optimal. This actually doesn’t necessitate going through the whole tree. Sometimes you can know that you don’t need to go farther down a tree, which allows you to not have to look at the solutions contained within that part of the tree, but still know that a solution you find will be the OS. On the other hand, sometimes this is still infeasible, so you have to go for an approximate solution (AS), where you’re just doing the best you can. However, not requiring the solution you get to be the OS can let you find great solutions way faster, and still sometimes even let you find the OS. The downside is that you won’t know 100% that you got the OS.

So let’s actually start. The way my friends and I originally did it (and I stuck with), we actually don’t explicitly have a tree data structure; however, by dint of doing recursion with two possible recursive calls, this implicitly forms a tree.

The first thing done is reading in the data and storing it in a list of namedtuple Item objects, which I believe was given to us. Then, because we’re going to go through the tree in order of decreasing density, we sort the list by that:

from collections import namedtuple

# Fields inferred from how Item is used below; the course's given
# definition may differ slightly.
Item = namedtuple('Item', ['index', 'value', 'weight', 'density'])

def sortListDensity(inList):
    return(sorted(inList, key=lambda item: -item.density))

lines = input_data.split('\n')
firstLine = lines[0].split()
item_count = int(firstLine[0])
capacity = int(firstLine[1])

items = []
for i in range(1, item_count+1):
    line = lines[i]
    parts = line.split()
    items.append(Item(i-1, int(parts[0]), int(parts[1]), float(parts[0])/float(parts[1])))

sortedList = sortListDensity(items)