Motion detection with the Raspberry Pi, part 2

Hi hi!

In this post, I’m really just going to concentrate on building the whole pipeline. It’s going to be rife with inefficiencies, inaccuracies, and stuff I 100% plan on fixing, but I think it’s good to get a working product, even if it’s very flawed. Someone I once worked for told me that projects in the US gov’t often work that way: there’s a heavy emphasis on getting a product out the door, even if it’s hacky and awful (though hopefully not). I think that makes sense a lot of the time. It’s probably more motivating to see a project that does something end to end, even if it’s crappy, than a project that’s partly carefully done but still very incomplete. A crappy car is cooler than a really nice wheel. Also, once the whole thing works, iterative, smaller fixes are relatively easy.

ANYWAY. Last time, I left off saying that the things that still needed to be done were:

  • Fix the sending so it happens in parallel with the detection
  • Make a monitoring program on the other side that adds the files, etc. to a CSV file to be analyzed with pandas
  • Use keras with the CIFAR datasets to figure out whether a detected object is a car, person, etc.
  • Attach a lens to get a better view of cars
  • Make a rain shield out of PVC pipe so I can leave it out for days or weeks

In retrospect, a lot of these were obviously pretty incremental, silly things (like the lens and rain shield; I guess it was also a “someday in the future” list). In this post, I’m actually gonna cover three main things:

  • Sending detected images in parallel with the sensing
  • Making a “monitoring” program on my desktop
  • Using keras to recognize cars vs not cars in the images that are sent over

Here’s an extremely bootleg flowchart of how stuff is connected:

Parallel image detection and sending

At the end of last time, I mentioned that the images being detected and sent over weren’t great, because the program would detect stuff immediately but then take a while to send it, which in the meantime prevented new images from being detected. This is called “blocking”, since the sending “blocks” the program from continuing until it’s done. There are a few solutions to this, but the one that intuitively appealed to me was using multiple processes: one responsible for capturing and saving the images, and the other for sending them to my desktop. You could also just spawn a new process each time you want to send, I think, but I went with the two-process setup.
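Just for reference, a bare-bones sketch of that spawn-a-process-per-send alternative could look something like this (not my actual code; the filename and the desktop:~/captures/ destination are made up, and it assumes passwordless scp is already set up):

from multiprocessing import Process
import subprocess

def sendFile(fName):
    # made-up destination; assumes passwordless scp to a host called 'desktop'
    subprocess.check_call(['scp', '-q', fName, 'desktop:~/captures/'])
    subprocess.check_call(['rm', fName])

# instead of blocking on scp, hand the file to a child process and keep detecting
Process(target=sendFile, args=('frame_0001.jpg',), daemon=True).start()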

I was a little worried that this wouldn’t speed things up much, because the camera/detection part of the program would still be saving the images itself, which I assumed would be a slow operation. But I timed it, and a whole iteration of detecting/image manipulation/saving/etc. takes about 30 ms! So it’s a huge speedup.
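If you’re curious, timing an iteration is just a couple lines of bookkeeping (a sketch; the comment stands in for whatever your capture loop actually does):

import time

t0 = time.perf_counter()
# ... one pass of capture / frame differencing / box drawing / cv2.imwrite ...
print('iteration took {:.1f} ms'.format((time.perf_counter() - t0) * 1000))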

So, I won’t paste the whole code because it’s large, but here are the new/instrumental parts:

# (imports needed for these excerpts)
import subprocess
import time
from glob import glob
from multiprocessing import Pool
import cv2

def processFile(fName, remoteHost, remotePath):
    # scp the file over to the desktop, then delete the local copy
    remoteHostPath = '{}:{}'.format(remoteHost, remotePath)
    subprocess.check_call(['scp', '-q', fName, remoteHostPath])
    subprocess.check_call(['rm', fName])

def fileMonitor(logFileName, localPath, remoteHost, remotePath):
    # runs in its own process: watch the folder and ship off any new .jpg files
    print('entering filemonitor')
    processedFiles = []
    while True:
        #files = os.listdir(dir)
        files = glob(localPath + '/' + '*.jpg')
        if len(files) > 0:
            #print('sending these files:',files)
            [processFile(file, remoteHost, remotePath) for file in files if file not in processedFiles]
            [processedFiles.append(file) for file in files if file not in processedFiles]
        remoteHostPath = '{}:{}'.format(remoteHost, remotePath)
        time.sleep(0.5)
        subprocess.check_call(['scp', '-q', localPath + '/' + logFileName, remoteHostPath])

def cameraStream(logFileName, localPath, startDateTimeString):
    # runs in the other process: detect motion, save the boxed frame, log the box coords
    #Camera stuff
    #............................
    tempFName = dateString + '_' + str(boxCounter)
    tempPicName = tempFName + ext
    cv2.imwrite(localPath + '/' + tempPicName, frameDraw)
    fLog = open(localPath + '/' + logFileName, 'a')
    fLog.write("{}\t{}\t{}\t{}\t{}\n".format(tempFName, x, y, x + w, y + h))
    fLog.close()

#Main section
pool = Pool(processes=2)
p1 = pool.apply_async(fileMonitor, args=(logFileName, localPath, remoteHost, remotePath))
p2 = pool.apply_async(cameraStream, args=(logFileName, localPath, startDateTimeString))
print(p1.get(timeout=3600))
print(p2.get(timeout=3600))

Motion detection with the Raspberry Pi, part 1

Okay Declan, let’s try making this post a short and sweet update, not a rambling Homeric epic about simple stuff.

I got a Raspberry Pi (RPi) and an RPi camera because I wanted to learn about them and mess around with them. If I could do image recognition with them, that’d be a good platform to do ML, NN, and, if I got enough data, maybe even DS type stuff. Luckily, there are a ton of resources and code out there already. I drew heavily from www.pyimagesearch.com, which is a REALLY useful site, with things explained really well for beginners. Two articles that I basically copied code from and then butchered were this and this.

He’s not quite doing “image recognition” in this code; it’s more like “difference recognition”. Very simply, he has a stream of frames coming in from the camera. He starts off by taking what will be considered a “background frame”. Then, for all subsequent frames, he subtracts the background from the current frame and looks at the absolute difference of the pixels (all done in grayscale, to make it simpler). If two frames were identical, you’d expect very little difference. If an object appeared in the new frame, the difference would show that object. Then, he uses some OpenCV tools to figure out where the object is and draw a box around it.
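To make that concrete, here’s a bare-bones sketch of the core idea in my own words (not his exact code; it assumes OpenCV 4’s findContours signature and a camera OpenCV can read):

import cv2

cap = cv2.VideoCapture(0)
_, background = cap.read()
background = cv2.cvtColor(background, cv2.COLOR_BGR2GRAY)  # the "background frame"

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    frameDelta = cv2.absdiff(gray, background)              # pixelwise |current - background|
    thresh = cv2.threshold(frameDelta, 25, 255, cv2.THRESH_BINARY)[1]
    thresh = cv2.dilate(thresh, None, iterations=2)         # fill in small holes
    contours, _ = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        if cv2.contourArea(c) > 500:                        # ignore tiny specks
            x, y, w, h = cv2.boundingRect(c)
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)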

I was able to put his code together and run it pretty quickly (though I removed some stuff like the Dropbox uploading, instead doing the kind of naive thing of sending the files via scp to my other machine), producing this gif of local traffic outside my window:

Of course, the devil is in the details. If you watch it a few times, you’ll notice some weird behavior. Most obviously, boxes are detected around the objects, but then the boxes appear to remain where the object was for several frames. Here you can see it frame by frame:

Why does this happen? Well, it’s actually a smart feature, but done in a somewhat clumsy way. In his code, he has the following (I combined the few relevant snippets) inside the main frame-capturing loop:

if avg is None:
    # very first frame: use it to initialize the background model
    print("[INFO] starting background model...")
    avg = gray.copy().astype("float")
    rawCapture.truncate(0)
    continue

# blend the current frame into a running weighted average of the background,
# then diff the current frame against that average
cv2.accumulateWeighted(gray, avg, alpha)
frameDelta = cv2.absdiff(gray, cv2.convertScaleAbs(avg))

So instead of diffing against a fixed background, he diffs against a running weighted average of recent frames. That’s the smart part: the background can adapt to slow changes like lighting. The clumsy part is that a moving object also gets blended into avg, so for several frames after it moves on, the difference against that contaminated average still lights up where the object used to be; hence the lingering boxes.
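You can watch that blending happen numerically with a toy example (my own sketch, just to illustrate what accumulateWeighted does):

import numpy as np
import cv2

avg = np.zeros((1, 1), dtype=np.float32)        # background model starts dark
frame = np.full((1, 1), 255, dtype=np.float32)  # a bright "object" parks in view
for i in range(5):
    cv2.accumulateWeighted(frame, avg, 0.5)     # avg = 0.5*avg + 0.5*frame
    print(i, float(avg))                        # 127.5, 191.25, 223.1, ... -> 255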