
Week 3 Journal

  • Writer: Meena
  • Apr 2, 2020
  • 2 min read

Over the last week, Jennifer and I spent time coding the foundation for our project. Since I had the OpenMV, I copied the code from the IDE and pasted it into Google Colab so that Jennifer and I could work on it simultaneously, and I shared my screen so that Jennifer could see the output of our program in the IDE. The first thing we did was set up the OpenMV: downloading the IDE and making sure one of the example scripts worked. Then we found a face detection example in OpenMV's IDE and used it as the base of our code. The example uses a 25-stage Haar cascade to find a face in the frame; once a face is identified, a bounding box is drawn around it.
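
For reference, the example we started from looks roughly like the sketch below. This is a minimal version based on OpenMV's bundled face detection example; the exact sensor settings in our copy may differ.

```python
# Minimal sketch of OpenMV's face detection example (MicroPython).
import sensor, time, image

sensor.reset()
sensor.set_contrast(3)
sensor.set_gainceiling(16)
sensor.set_framesize(sensor.HQVGA)
sensor.set_pixformat(sensor.GRAYSCALE)

# Load the built-in frontal face Haar cascade with 25 stages.
face_cascade = image.HaarCascade("frontalface", stages=25)

clock = time.clock()
while True:
    clock.tick()
    img = sensor.snapshot()
    # Find faces and draw a bounding box around each one.
    faces = img.find_features(face_cascade, threshold=0.75, scale_factor=1.25)
    for r in faces:
        img.draw_rectangle(r)
```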


Figure 1: The figure above shows two faces (mine and my sister's) being detected, with a bounding box drawn on each. The three graphs are the RGB histograms for the entire image.


After identifying the face, the next step was to isolate the background. To do this, we took the pixels outside of the bounding box and manually set them to zero. If no face was detected, the output was simply a black screen.
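
A rough sketch of that masking step is below; it goes inside the main loop, `faces` is the list returned by find_features, and the per-pixel loop is just the simplest way to express the idea, not necessarily the fastest.

```python
# Hedged sketch: keep only the first detected face, black out everything else.
if not faces:
    img.clear()                      # no face -> output an all-black frame
else:
    x, y, w, h = faces[0]            # bounding box of the first detected face
    for px in range(img.width()):
        for py in range(img.height()):
            # Zero out any pixel that falls outside the bounding box.
            # (0 for a grayscale frame; an RGB frame would take (0, 0, 0).)
            if not (x <= px < x + w and y <= py < y + h):
                img.set_pixel(px, py, 0)
```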


Figure 2: The video above shows that a black screen is output when no face is detected, and that everything outside the bounding box is black when a face is detected.


From there, we found a function that takes in the area covered by the bounding box and outputs the histogram statistics (mean, median, mode, standard deviation, lower quartile, upper quartile, minimum, and maximum) of that region of interest. These statistics are reported in the LAB color space. Although our original intention was to compute the histogram in RGB, we wanted to see whether the LAB color space would produce accurate results. In the end, we were able to get an array of the average L, A, and B values for each frame.
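
This is most likely OpenMV's img.get_statistics() with an roi argument. A minimal sketch of how the per-frame averages could be collected, assuming the sensor is switched to RGB565 so the L, A, and B channels are meaningful (the list names below are just placeholders):

```python
# Initialized once, before the main loop.
l_means, a_means, b_means = [], [], []

# Inside the main loop, whenever a face is detected:
if faces:
    x, y, w, h = faces[0]
    stats = img.get_statistics(roi=(x, y, w, h))   # histogram stats for the face ROI
    l_means.append(stats.l_mean())
    a_means.append(stats.a_mean())
    b_means.append(stats.b_mean())
```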


Our next step is to figure out how we will plot each array over time. For this, we will have to record another array of timestamps, because an average LAB value is only recorded when a face is detected; a face is not detected in every frame, so the time between these readings is not uniform. Another roadblock we are currently facing is how to plot these two arrays. Our research so far shows that the OpenMV IDE does not have any plotting capabilities, so unless further research proves fruitful, we might have to take our data off the OpenMV and plot it in MATLAB. A sketch of that idea is below.
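
One option we are considering (nothing is settled yet) is to timestamp each reading with MicroPython's time.ticks_ms(), print everything as CSV over the serial terminal, and copy that into MATLAB for plotting. The variable names follow the sketch above and are only placeholders.

```python
# Hedged sketch of one possible export path for plotting in MATLAB.
import time

timestamps = []   # initialized once, alongside l_means / a_means / b_means

# Inside the main loop, whenever a face is detected:
#   timestamps.append(time.ticks_ms())
#   ...append the L, A, B means as before...

# After recording, dump the arrays as CSV via the serial terminal.
print("time_ms,l_mean,a_mean,b_mean")
for t, l, a, b in zip(timestamps, l_means, a_means, b_means):
    print("%d,%.2f,%.2f,%.2f" % (t, l, a, b))
```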

 
 
 


2 Comments


Mike Briggs · Apr 17, 2020

You got a 12/10 on "Quality".


Mike Briggs · Apr 11, 2020

OK, good update. Glad you got the OpenMV to work.
