
Week 4 Journal

  • Writer: Meena
  • Apr 9, 2020
  • 2 min read

The first thing on our agenda this week was to find a way to record, for each frame in which a face is detected, the timestamp and the average L, A, and B values of the region of interest (the bounding box containing the face). Jennifer and I found a function to measure elapsed time and logged each frame's timestamp in the same manner as the average LAB values. Next, we realized that the LAB color space may not show the variation in skin tone well, since its three axes are lightness (black to white), green to red (the A axis), and blue to yellow (the B axis). Since the green channel is supposed to depict the heart rate with greater accuracy, we figured it would be best to convert the LAB values to RGB.
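
A minimal sketch of that per-frame loop, using the documented OpenMV MicroPython API (find_features, get_statistics, and lab_to_rgb are real OpenMV calls, but the overall structure and the parameter values here are illustrative, not our exact script):

```python
import sensor, image, time

sensor.reset()
sensor.set_pixformat(sensor.RGB565)
sensor.set_framesize(sensor.QVGA)
sensor.skip_frames(time=2000)

# Built-in Haar cascade for frontal face detection.
face_cascade = image.HaarCascade("frontalface", stages=25)

start = time.ticks_ms()

while True:
    img = sensor.snapshot()
    faces = img.find_features(face_cascade, threshold=0.75, scale_factor=1.25)
    if faces:
        roi = faces[0]  # (x, y, w, h) bounding box around the first face
        img.draw_rectangle(roi)
        # Elapsed time since the script started, in seconds.
        t = time.ticks_diff(time.ticks_ms(), start) / 1000
        # Average L, A, and B over the face box only.
        stats = img.get_statistics(roi=roi)
        # Convert the mean LAB values to an (R, G, B) tuple.
        rgb = image.lab_to_rgb((stats.l_mean(), stats.a_mean(), stats.b_mean()))
        print(t, rgb)
```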


Figure 1: This video shows the OpenMV detecting my face and printing the timestamp and the average L, A, and B value arrays for each frame in the terminal monitor at the bottom of the video.


With the time and average RGB color of each frame recorded, Jennifer plotted these in Matlab. Next, I noticed that the camera's output had a grayish tint; that is, the skin color it detected was more gray than brown. Since I was unsure whether this would cause difficulties, I played around with a few different filters to see whether they would improve the average RGB values and give better plots of the RGB channels over time (each filter is a one-line change per frame, sketched after the figure captions below).
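
The plotting itself was done in Matlab; a Python/matplotlib stand-in (used here only so all the sketches stay in one language, with placeholder numbers in place of the logged data) would be:

```python
import matplotlib.pyplot as plt

# Placeholder values standing in for the (timestamp, mean RGB) pairs
# logged from the OpenMV terminal output.
timestamps = [0.0, 0.1, 0.2, 0.3, 0.4]
rgb_means = [(142, 98, 87), (145, 99, 88), (141, 97, 86),
             (144, 98, 87), (143, 99, 88)]

r, g, b = zip(*rgb_means)
plt.plot(timestamps, r, "r", label="R")
plt.plot(timestamps, g, "g", label="G")
plt.plot(timestamps, b, "b", label="B")
plt.xlabel("Time (s)")
plt.ylabel("Mean channel value in face ROI")
plt.legend()
plt.show()
```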


Figure 2: A small clip of the OpenMV's raw output, with no filters.


Figure 3: A small video of the output with the Laplacian filter added, which increases contrast and will hopefully prevent the colors from taking on a gray tint.


Figure 4: A small clip of the illuminvar filter, which removes the illumination from the clips so that only the color gradients remain. I hoped this approach would focus more on the change in color than on features like the eyes, which can be a source of 'distraction'.
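
Each variant above is one call on the frame; laplacian and illuminvar are documented OpenMV image methods, though the kernel size and the switch structure below are illustrative:

```python
import sensor

sensor.reset()
sensor.set_pixformat(sensor.RGB565)
sensor.set_framesize(sensor.QVGA)
sensor.skip_frames(time=2000)

FILTER = "laplacian"  # "none" (Fig. 2), "laplacian" (Fig. 3), "illuminvar" (Fig. 4)

while True:
    img = sensor.snapshot()
    if FILTER == "laplacian":
        # Laplacian sharpening: mixes the edge response back into
        # the image (sharpen=True) to boost contrast.
        img.laplacian(1, sharpen=True)
    elif FILTER == "illuminvar":
        # Illumination-invariant transform: strips lighting so that
        # only the color gradients remain.
        img.illuminvar()
    # ... face detection and statistics continue as in the first sketch ...
```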


While implementing these various filters, I noticed that the program was taking a while to run. Previously, I had set every pixel outside the bounding box drawn around a face to black, and the whole screen to black if no face was detected. This was to ensure that the background would not be a source of noise when finding the average RGB values of the image, since the goal was to find the average values of the pixels on the face. However, when extracting the statistics, the region of interest is already restricted to the bounding box, which always contains the face. The code to black out the screen was therefore unnecessary, and removing it saved a lot of computation.
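
Concretely, the per-pixel blackout loop could be deleted because get_statistics() already takes the face box as its roi argument. A before-and-after sketch, meant to slot inside the detection loop from the first sketch (in_box is a hypothetical helper, and img and roi come from that loop):

```python
def in_box(x, y, roi):
    # Hypothetical helper: is the pixel (x, y) inside the (x, y, w, h) box?
    rx, ry, rw, rh = roi
    return rx <= x < rx + rw and ry <= y < ry + rh

# Before: blank every background pixel so it cannot skew the averages.
# This touches every pixel of every frame, which is slow.
# for y in range(img.height()):
#     for x in range(img.width()):
#         if not in_box(x, y, roi):
#             img.set_pixel(x, y, (0, 0, 0))

# After: let get_statistics() ignore the background entirely.
stats = img.get_statistics(roi=roi)
```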


By the end of this week, Jennifer and I had achieved all of our week 3 milestones. We successfully implemented the face detection feature and isolated the region of interest when taking down the statistics of the image. In addition, we were able to take the average RGB values for each frame and plot them over time.


2 comments


Mike Briggs
Apr 17, 2020

You got a 12/10 on "Quality".


Mike Briggs
Apr 12, 2020

OK, good progress.
