RPi camera, part 3: a few incremental fixes

Round 3! Okay, this is where I try to polish it up in a couple of ways.

Here are the things I said last time I needed to make better:

  • Send pics more conventionally
  • Fix detection sensitivity (still often picking up strong shade/sunlight quirks)
  • Total design flaw: the log file is sent with each picture, but it's updated every time a picture is written, so it's sometimes more up to date than the pictures in the folder. That is, if the camera function creates 30 pictures and immediately adds them all to the log file, the log file sent with the first of those 30 already lists all 30, even though only one has actually been sent
  • Better image classifier architecture
  • Better labeled image dataset (32×32 is tiny)
  • Sliding windows over detected images

Sliding windows

What is “sliding windows”? It’s a pretty simple idea. Here’s the motivating problem, first. When my convnet looks at an image I give it and tries to guess whether it has a car in it, it tends to only correctly classify images where the car mostly fills the frame, because it was trained on images like that. If you have an image (like the one below) where there is a car but it’s mostly in one corner, it may miss it (because it’s looking for features bigger than that). There are a couple of other effects too. One is that if we’re only classifying 32×32 square images, then what I’ve been doing so far (resizing the image so the smaller side is 32 and then squeezing the bigger side down to 32 as well) will distort the image, which makes it harder to classify. Lastly, you can imagine that if the actual image size we’re giving it is something like 256×512, then even if it would otherwise have classified it correctly, by the time it’s smooshed down to 32×32 there might not be enough resolution left in the car region of the image to do it.

So what can fix a lot of these problems? You define a “window”, a subset of the image, and “slide” it over the image in steps. Then, you pass this window subset to the classifier, so it’s actually only classifying the subset. You might choose a stride for the sliding window such that you get M windows in the vertical dimension and N windows in the horizontal. So you’re really doing M×N total classifications for the whole image, and then if any one of them says “car!”, you say that the image contains a car.
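Here’s roughly what that looks like in code. This is just a sketch of the idea: `classify` stands in for the convnet’s predict call, and the 32/16/0.5 window size, stride, and threshold are arbitrary placeholder choices of mine.

```python
import numpy as np

def sliding_windows(img, win, stride):
    """Yield (row, col, patch) square subsets of an image array."""
    h, w = img.shape[:2]
    for r in range(0, h - win + 1, stride):
        for c in range(0, w - win + 1, stride):
            yield r, c, img[r:r + win, c:c + win]

def contains_car(img, classify, win=32, stride=16, threshold=0.5):
    """Call it a car if any single window's certainty clears the threshold."""
    return any(classify(patch) > threshold
               for _, _, patch in sliding_windows(img, win, stride))
```

So a 64×64 image with a 32-pixel window and a stride of 16 gives you 3 windows in each direction, or 9 classifications total.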

Here’s a little illustration of mine, where the red grid over the green outlined window shows the windows being used (it’s a little hard to tell them apart, but they’re squares; there are three in the vertical direction):

There are of course a million little quirks and variants and choices to make with this. For example, I think it’s common to choose two sizes for the window, which should let you look at two different “scales”. Also, you have to choose some balance between having more sub windows and the computation time it will take to actually process them. I’m also pretty sure some convnets can have this built in by choosing different filter sizes (like, one that would group a block of pixels as a single pixel to make a larger “window”).
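For example, generating the window coordinates at two scales might look like this (the sizes and overlap here are just placeholder choices, not the ones from my script):

```python
def window_boxes(h, w, sizes=(32, 64), overlap=0.5):
    """List (top, left, size) boxes for each window size,
    stepping by (1 - overlap) * size so that neighboring windows overlap."""
    boxes = []
    for size in sizes:
        stride = max(1, int(size * (1 - overlap)))
        for top in range(0, h - size + 1, stride):
            for left in range(0, w - size + 1, stride):
                boxes.append((top, left, size))
    return boxes
```

On a 64×64 image this gives nine 32-pixel boxes plus a single 64-pixel box covering the whole frame, which is the “two scales” idea in miniature.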

Anyway, how does it work? Here are the results using my CIFAR-10 trained convnet from last time, on the same little group of detected images. I show the certainty distribution, which is the certainty that it thinks it detects a car.

No sliding windows:

Detected:

Not detected:

Sliding windows:

Detected:

Not detected:

Definitely better! But still getting a ton of false positives, which is annoying. Honestly it may be because they’re 32×32.

Fixing image sending

So I had a bit of a mystery on my hands. I was finding that after a while, my program was just…stopping. Not crashing, not giving any error, just stopping after about an hour. What I mean by stopping is that, while normally it outputs a line for each image it detects (on the RPi side), it would stop outputting anything, but not stop running. It took me embarrassingly long to figure out, but here’s what I did. I first made a Debug class that basically logs everything the program does, right at the moment of doing it. This is actually a pretty handy thing to have around anyway, and basically doesn’t slow it down. You’ll notice that I’m periodically logging the CPU/Mem/temp, since I read somewhere that that can cause a problem, but all the values I’ve seen on mine are fine. Anyway, here was the first clue, you can see where it stops, after about an hour:

So you can see that it’s saving them steadily for a while, and then stops saving them, but continues to send them. Welp, you probably guessed before I did, but while I was aware of how little space my RPi had on it (~700MB to play with), I thought that because I was removing the files right after sending them, I’d be okay. Howeverrrr:

So I was running out of space!

One thing I did was immediately get a 32GB micro SD card and clone my RPi onto it, just to have a bit more wiggle room. To be honest, that might solve the problem, since I doubt I’d ever keep the program running long enough to generate that much data, but that wouldn’t be addressing the real problem here, which is that my files are sending way too heckin’ slow!

My files are usually ~100kB, which should be easy to send and keep up with, even if something like 10 a second are being produced. For example, I know off the top of my head that when I send files via scp between my desktop and RPi, the transfer rate it shows is usually something like 1.5 MB/s. So what’s going on?

It turns out that that “S” that stands for “secure” in SCP (or SSH, which it’s based on) is pretty important! As they discuss in this thread where it seems like the person was having exactly my problem, there’s actually some pretty nasty overhead involved in encrypting the file you’re going to send. Of course, I don’t care about that! I’m just sending stuff I don’t care about over my LAN.

So one option in that thread was using a weaker cipher, while another was to use the rcp command, which is kind of like a totally unencrypted scp. I’m going to do a little diversion for a minute here because I wanted to know just how these compared.

What I did was create a few dummy files, smallfile.txt (100 kB), mediumfile.txt (1 MB), and bigfile.txt (5 MB). First I just sent smallfile.txt 10 times to get a rough sense of the speed and overhead:

from time import time
import subprocess

# files and remoteHostPath are defined earlier in the script
times = []
for i in range(10):
    file = files[0]
    print('processing file', file)
    start = time()
    subprocess.check_call(['scp', file, remoteHostPath])
    total = time() - start
    print('time elapsed:', total)
    times.append(total)
print('done')
print(times)
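For reference, picking a cipher is just a matter of passing `-c` to scp. This is a sketch: which ciphers are actually available depends on your OpenSSH build (the old thread’s favorite, arcfour, has been dropped from modern OpenSSH, so something like aes128-ctr is a safer thing to try).

```python
from time import time
import subprocess

def scp_cmd(path, dest, cipher=None):
    """Build an scp command, optionally forcing a cipher with -c."""
    return ['scp'] + (['-c', cipher] if cipher else []) + [path, dest]

def timed_scp(path, dest, cipher=None):
    """Run one scp transfer and return the elapsed time in seconds."""
    start = time()
    subprocess.check_call(scp_cmd(path, dest, cipher))
    return time() - start
```

Then you can time the same file with and without the cipher override and compare.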

Fun with Genetic Algorithms and the N Queens Problem

Genetic Algorithms are cool!

I was recently skimming through Russell and Norvig’s AI: A Modern Approach and came to the section on Genetic Algorithms (GA). Briefly, they’re a type of algorithm inspired by genetics and evolution, in which you have a problem you’d like to solve and some initial attempts at solutions to the problem, and you combine those solutions (and randomly alter them slightly) to hopefully produce better solutions. It’s cool for several reasons, but one really cool one is that they’re often used to “evolve” to an optimal solution in things like design of objects (see the antenna in the Wikipedia article). So, that’s kind of doing evolution on objects rather than living things. Just take a look at the applications they’re used for.

A lot of it makes more sense when you look at it in the context of evolution, because it’s a pretty decent analogy. A GA “solution attempt” I mentioned above is called an “individual”, like the individual of a species. It could be in many different formats (a string, a 2D array, etc), depending on the problem. You call the group of individuals you currently have the “population”. For species in nature, the “problem” they’re trying to solve is finding the best way to survive and pass on their genes, given the environment. For real evolution and genetics, the success of an individual is somewhat binary: does it pass on its genes, or not? (I guess you could actually also consider that there are grades of passing on your genes; i.e., it might be better to have 10 offspring than 1.) For GA, the success is measured by a “fitness function”, which is a heuristic that you have to choose, depending on the problem.

For each generation, you have a population of different individuals. To continue the analogy, real species mate, and create offspring with combined/mixed genes. In GA we do something called “crossover”, in which the attributes of two individuals are mixed to produce another individual. Similarly, we introduce “mutations” to this new individual, where we slightly change some attribute of them. This is actually very important (see evidence below!), because it allows new qualities to be introduced to the population, which you wouldn’t necessarily get if you just mixed together the current population repeatedly (exactly analogous with real evolution).

So, that’s the rough idea: you have a population, you have them “mate” in some aspect to produce new individuals, you mutate those new ones a little, and now apply your fitness function (i.e., how good is this individual?), and keep some subset of this new population. You could keep the whole population if you wanted to — but the number would quickly grow so large that you’d basically just be doing the same thing as a brute force search over all possible individuals.
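That loop is simple enough to sketch generically. To be clear, this is not the 8QP code below: `fitness`, `crossover`, and `mutate` are stand-ins you’d define per problem, and keeping only the fittest individuals from parents plus offspring is just one of many possible selection schemes.

```python
import random

def evolve(population, fitness, crossover, mutate,
           generations=100, mutation_rate=0.1):
    """Generic GA loop; lower fitness is better (as in the 8QP below)."""
    for _ in range(generations):
        offspring = []
        for _ in range(len(population)):
            a, b = random.sample(population, 2)   # pick two parents
            child = crossover(a, b)               # mix their attributes
            if random.random() < mutation_rate:
                child = mutate(child)             # occasionally perturb
            offspring.append(child)
        # keep only the fittest, from parents and offspring combined
        population = sorted(population + offspring, key=fitness)[:len(population)]
    return population
```

Since the sort keeps the best parents around, the best fitness in the population can only improve (or stay put) from one generation to the next.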

I was aware of GA already, but had never actually implemented one. The example they use in AIMA was the 8 Queens problem (8QP), which is a fun little puzzle. Embarrassingly, for a chess player, I had never actually solved it! So I thought I’d dip my toe into GA and do it, and also maybe characterize it a little.

So, let’s set it up! An “individual” here is obviously going to be a board state (i.e., where all the queens are). Most naively, you might think that means a board state is an 8×8 2D array, with 0’s for “no queen on this spot” vs 1 for “queen here”. But, if you look at the 8QP for a second, you’ll quickly see that the queens each have to be on a different row, and a different column. Therefore, you can really specify a board by an 8-long array of integers, where each index/entry represents a row, and the value of that entry is the column number that queen is in. So it’s automatically constraining them to be in different rows, and it makes the coding a lot simpler.

What’s the fitness function (FF)? You want some way of deciding how good a board is. A pretty obvious one for this problem is the number of pairs of queens that are attacking each other, for a given board state. If you solve the problem, FF = 0. So for this problem, you want a lower FF.
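With the array-of-columns representation above, counting attacking pairs is a short function (a sketch of mine, not the exact code from my script; since rows are distinct by construction, only columns and diagonals need checking):

```python
def fitness(board):
    """Number of attacking queen pairs; board[i] = column of the queen in row i."""
    n = len(board)
    attacks = 0
    for i in range(n):
        for j in range(i + 1, n):
            same_col = board[i] == board[j]
            # same diagonal when the column gap equals the row gap
            same_diag = abs(board[i] - board[j]) == j - i
            if same_col or same_diag:
                attacks += 1
    return attacks
```

A solved board returns 0, and the worst case (all queens in one column, say) returns 28, one for each of the C(8,2) pairs.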

Here, crossover is combining two boards. To do this, we just choose a row number, and then split the two parents at that index, and create two new boards by swapping the sides. It’s probably more easily explained in the code:

from copy import deepcopy
from random import randint

def mate(self, board1, board2):
    board1 = deepcopy(board1)
    board2 = deepcopy(board2)
    crossover = randint(1, board1.board_size - 1)
    temp = board1.board_state[:crossover]
    board1.board_state[:crossover] = board2.board_state[:crossover]
    board2.board_state[:crossover] = temp
    return (board1, board2)

Getting back on the horse…er, Python

As of this writing, I just defended and I’m considering various options for what I’ll do next. That’s a whole other story, but the important part for this post is that, probably for whatever I do, I’ll be coding.

I’ve coded a decent amount in my life. I started with dinky web stuff wayyy back, then picked up a now-tattered and probably outdated “C++ for Dummies” book in high school. I did small programs with that, as well as some silly things for crappy AVR projects I did. In college, I used mostly Java because that’s what the computer science classes I took asked for. Towards the end of college, though, I was working on my own research, and used C++ instead (why? I honestly don’t remember. Wait, I just did! My advisor had heard of some multiprocessor module for C++ that he wanted me to try, so that’s why I didn’t stick with Java).

I didn’t code a ton for my first couple years of grad school. When I began again, I don’t remember exactly how, but I got into using Mathematica (I think because I had seen a few people do what seemed like ~~magick~~ at the time during my first year of grad school; stuff I stupidly spent a while doing with pencil and paper).

Oh, Mathematica (M). What a blessing and a curse. Let me just briefly tout its blessings: it’s very fully featured. A common response I’ve gotten is “but you can do <<XYZ thing>> in any language!”, and that’s usually true — but it’s not always really easy, like it is with M. The documentation (with a few rare exceptions) is pretty excellent. What I (and I suspect most users) want most of all in a manual/doc page is examples. It drives me nuts when I go to the man page for a bash command, and it gives the command syntax in some general form; yeah, I can figure it out if I spend a few minutes, but why make me waste time parsing some command syntax? M gets this, and if you look at the doc for a function, there’s a really solid chance that the exact application you want is already one of the examples and you can just copy and paste.

The other thing is that, because it’s all part of a central program (aside from using user-generated packages, which I’ve almost never done), it follows the same syntax, is generally coherent, and works together. I’ve just been amazed time and time again when I’ve wanted to do something fairly complex, googled “Mathematica <<complex thing>>”, and found that there’s already a pretty fully featured set of functions for it: graph theory stuff, FEA stuff, 3D graphics, etc.

Here’s the thing: a lot of this is essentially just lauding the benefits of any huge, expensive piece of software. Almost all of the things I just said would apply equally well to something like Adobe Photoshop: thorough, well documented, easy to use, etc.

And this brings me to the curse of M: it is a great piece of software in many respects, but it’s proprietary and huge. The proprietary part is a big deal, because a company or school has to pay for a large number of potential users, and site licenses can be really expensive (I tried to find a number but all they have online is “request a quote”; anyone have a rough estimate?). So this eliminates a lot of potential users, like startups that want to be “lean” or whatever. Additionally, I’m guessing that for a company, having a ton of their framework depending solely on another company is something they’d like to avoid, if possible.

Briefly looking into a few career options (data science is the best example of this) and talking to people, I quickly realized how un-used Mathematica is outside of academia. I’m sure there are some users, but it’s definitely not something employers are looking for, from what I gather.

Data science seems like a very possible route to take, so I was looking into the most commonly used languages in it, and the consensus seems to be: Python and R. I went with Python for a few reasons: 1) a couple videos said that if you’re starting new to both (which I essentially am), go with Python, 2) to contradict that first point, I’m actually not starting totally fresh with Python; my experience with it is definitely minimal but I’ve used it a tiny bit, and 3) it seems like, and correct me if I’m wrong here, Python is used for lots of applications outside of data science/stats, such as application building, machine control, etc, whereas R isn’t (true? eh?).

So I’m getting back on the Python. I’m a fairly firm believer that the best method to learn a coding language (or maybe anything, really) is to just start using it. Pick a task, and try doing it. When you run into something you don’t know how to do, Google it.

(Obviously, this doesn’t work at extremes; at some level of ignorance of a subject you simply don’t know what questions you should be asking. But by now, I’ve used bits of enough languages to know concepts that tend to span all languages, to search for.)

The thing I’m starting with is good ol’ Project Euler. If you’re not familiar with it, it’s a series of coding challenges that start very simple and get harder. As a rough gauge of difficulty, they list the number of people who have successfully solved each problem: the first few are in the several-hundred-thousand range, while the later ones are in the ~100 range (you could argue that that’s more about most people not being that into spending a decent amount of effort on a task with essentially no outward reward, but they actually are a lot harder). The first bunch of them are really simple, being things like string manipulation, slightly tricky sums, and little counting tasks, where you really just need to think about how you’d do it in the most naive way, and then code it (perfect for getting back into a language!)… but they quickly get devilish and far from trivial. One type they’re a fan of, when you get to the slightly trickier ones, are problems where the naive, brute force approach is obvious, but would take an impossibly long time to calculate. However, there’s a trick involved that allows it to be calculated “on any modern computer in under a minute”, I believe is their promise.
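The very first PE problem (sum all the multiples of 3 or 5 below 1000) already shows the two flavors in miniature: the brute force is plenty fast here, but the closed form is what scales (this is a quick sketch of mine, not PE’s official solution):

```python
def naive(n):
    """Brute force: check every number below n."""
    return sum(k for k in range(n) if k % 3 == 0 or k % 5 == 0)

def closed_form(n):
    """Inclusion-exclusion with the arithmetic-series formula: O(1)."""
    def series_sum(m, d):
        p = (m - 1) // d          # how many multiples of d are below m
        return d * p * (p + 1) // 2
    return series_sum(n, 3) + series_sum(n, 5) - series_sum(n, 15)
```

Both give 233168 for n = 1000, but only the second stays instant if n has, say, fifteen digits.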

So I’ve done the first 25 or so problems using python. I’m definitely going about it in a better way than I did before, trying to use neater notation (like list comprehension rather than a for loop, when I can). I think I’ve definitely benefited from my time with Mathematica, which has a strong emphasis on that type of stuff (for example, using inline functions and the Map shorthand /@).
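For example, here’s the same little computation three ways; the comprehension and `map` versions are the Python analogues of Mathematica’s `/@`:

```python
# plain for loop
squares_loop = []
for x in range(5):
    squares_loop.append(x * x)

# list comprehension
squares_comp = [x * x for x in range(5)]

# Map-style, like f /@ Range[0, 4] in Mathematica
squares_map = list(map(lambda x: x * x, range(5)))
```

All three produce [0, 1, 4, 9, 16]; the second and third just say it in one line.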

Overall, it’s going pretty well and I’m liking it. I remember not liking whitespace-based syntax (or whatever it’s called), but I’m finding that with even a basic text editor like Notepad++ or Atom, it’s actually pretty neat.

But of course I have a couple complaints, so let me kvetch a bit.

First, there seems to be a dearth of simple solutions for simple problems that I’d expect to be kind of common. For example, in a few PE problems, I had a list of lists of the same length (so, a matrix), that I wanted to transpose. Now, in M, you’d literally just do Transpose@mat. However, I was having trouble finding how to do it neatly in Python. Basically, the exact problem being asked about here. Now, what I’m looking for is something nice and simple like one of the answers given:

import numpy as np

a = np.array([(1, 2, 3), (4, 5, 6)])
b = a.transpose()

But unfortunately, for the same reason the OP in that post didn’t choose that answer, this hands you back a NumPy array rather than a plain list of lists. Now, I could convert my list into an np array, but… now we’re talking about another operation, and I’d have to convert it back again if I wanted a list of lists at the end. I guess I could have built it as an np array from the get-go, but you might not always have the option.

The solution that works for this is:

>>> lis = [[1,2,3],
...        [4,5,6],
...        [7,8,9]]
>>> [list(x) for x in zip(*lis)]
[[1, 4, 7], [2, 5, 8], [3, 6, 9]]