Any anons have experience working with opencv on python...

Any anons have experience working with opencv on python? I'm a biologist and was told by my supervisor to do a lab rotation with a focus on video analysis. I was given the goal of developing a program using opencv and python that takes a video of beating heart cells treated with various stimulants and turns tracked pixels into a frequency. Problem is I have zero coding knowledge. So far I have downloaded python and pycharm and have been messing around with Youtube tutorials, but it doesn't seem to be helping me because they are all based on face recognition and self-driving cars. Can someone point me in the right direction? Perhaps a good blog or a list of topics to learn in order to complete my project. I have two months to develop it.

Attached: 1-main.png (356x349, 17K)

Other urls found in this thread:

docs.opencv.org/3.4.3/dc/dc3/tutorial_py_matcher.html
opencv-python-tutroals.readthedocs.io/en/latest/py_tutorials/py_tutorials.html

I guess you need to detect the peaks of the beats. As a complete layman in biology I assume that the cell expands quickly, then contracts, then returns to normal, and the motion repeats. You have to detect the peaks of expansion and contraction and measure the time between them. The way I would do it is to plot the area of the cell over time. First I would apply some edge detection, then figure out a way to isolate the contour of the cell, and finally compute the area of the contour. Maybe use some circle detection if the cell is circular, or implement a custom Hough transform for it. You can also try segmenting the image, measuring the area of each label, and figuring out a way to determine which label(s) is the cell.

You can also try searching for algorithms computing the area of a cell. I remember a few years ago some med eggheads discovered a "new way to compute the area of a cell by using mathematical integration" which was just the trapezoidal rule applied to a contour.
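For what it's worth, that "integration" is just the shoelace formula. A minimal sketch in plain NumPy (the function name is mine; `cv2.contourArea` gives the same result for a simple polygon):

```python
import numpy as np

def contour_area(points):
    """Area of a closed contour via the trapezoidal rule (shoelace formula).

    `points` is an (N, 2) array of (x, y) vertices in order around the
    contour. The sign cancellation between "up" and "down" edges is
    exactly the trapezoidal-rule trick those papers rediscovered.
    """
    x = points[:, 0]
    y = points[:, 1]
    # Signed sum of trapezoids between each edge and the x-axis.
    return 0.5 * abs(np.sum(x * np.roll(y, -1) - np.roll(x, -1) * y))
```

Feed it the points of a contour you got from edge detection or `cv2.findContours` and you have your per-frame size measure.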

If you pay me some money I can do this or help you do this. I'm a software developer in the US.

shoo shoo poo
no one needs your shit code

Thanks for the idea. I was reading over the opencv website and found a tutorial on optical flow, although it was pretty basic. Essentially it is able to track the paths of pixels. Would it be possible to turn the vectors of the pixel paths into raw data, and then plot it somehow?

I had a huge pain in the ass compiling opencv for my machine. Good luck.

I am white I work for a largish company (1billion revenue) and I have been a top employee for the past 3 years.

Wouldn't it be better to find the rising and falling edge of the pulse? I.e. compare the size of the cell's area to the one before; if the last 3 or so frames are sequentially bigger, it's a rising edge, and so on.
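That "last 3 frames sequentially bigger" idea is a few lines of Python. A sketch, assuming you already have a per-frame list of areas (the function name and `run` default are mine):

```python
def rising_edges(areas, run=3):
    """Indices where the last `run` steps were strictly increasing,
    i.e. candidate rising edges of the beat.

    `areas` is a per-frame list of cell areas (or any measure that is
    correlated with size). Returns the frame indices where a run of
    `run` consecutive increases ends.
    """
    edges = []
    for i in range(run, len(areas)):
        window = areas[i - run:i + 1]
        if all(a < b for a, b in zip(window, window[1:])):
            edges.append(i)
    return edges
```

The time between successive edges (divided by the frame rate) is your beat period.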

Yes. Average the vector lengths per frame and you approximate the speed of the cell movements. High speed would indicate contractions.

Yes, that would be the step after you are able to compute the surface area per frame (or some other measure that is correlated with the size).

opencv is a mess. A mess!

This is an example of the type of videos I'll be working with. Unfortunately they are beating embryonic cells so calculating area of individual cells would be rather difficult.

Attached: cardiacbodyex.webm (332x248, 2.63M)

What would you recommend I start with to learn to do that? Right now I'm just watching basic videos, like what numpy arrays are and stuff. Do you think this project is possible in 2 months?

Can you post a picture of that kind of cell and tell us how it's supposed to move?

look up sentdex on youtube. he's the go-to guy for python videos.

You're probably going to end up doing Fourier transforms or other transformations on the images, possibly on the whole sequence. You'll need to visualize those, as well as the histogram distributions.

OpenCV will familiarize you with the operations you'll use, but pulling stats and analysis will require Numpy (matrices, matrix ops, reductions, etc.) and Matplotlib (plotting, statistics, visualizations, etc.).

In the Python bindings, OpenCV images are just Numpy arrays (the underlying cv::Mat is converted at the boundary), so you can manipulate them with Numpy ops just fine.
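Once you have any 1-D per-frame signal (mean flow magnitude, area, whatever), getting the beat frequency out of it is one FFT. A sketch in plain NumPy (the function name is mine; assumes the beat is roughly periodic):

```python
import numpy as np

def beat_frequency(signal, fps):
    """Dominant frequency in Hz of a per-frame signal via the FFT.

    `signal` is one number per frame, `fps` is the video frame rate.
    Subtracting the mean first removes the DC bin so argmax lands on
    the actual beat frequency.
    """
    signal = np.asarray(signal, dtype=float)
    signal = signal - signal.mean()
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    return freqs[np.argmax(spectrum)]
```

Frequency resolution is `fps / len(signal)`, so record at least a few seconds of beating per measurement.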

I posted a video, this is the typical stuff we work with.

I'd go with a circular/elliptic Hough Transform and store the radius over time. that's well documented already, you shouldn't have trouble implementing that.

I would use Feature Matching on subsequent frames.
docs.opencv.org/3.4.3/dc/dc3/tutorial_py_matcher.html

As a metric you can calculate the distance between matching points and sum up the results. Static points won't contribute and moving points will add up. Then you measure the time between every second minimum, or do an integration and measure the time between each maximum.

Weird, why would your supervisor make you work with something you have absolutely no experience in? Did you fake your resume?

A top employee in a multibillion company doesn't beg for money for a simple opencv project in a korean agriculture forum

When I used opencv I learned a few things from this site: opencv-python-tutroals.readthedocs.io/en/latest/py_tutorials/py_tutorials.html
Not the best tutorials, but they are simple and get to the point quickly. If you need more info about what a specific function does, search for it in the official opencv docs and read about it carefully.

From the video you posted the problem doesn't seem as difficult as I imagined. I would try to apply some thresholding then try some morphological filters: close to fill in the gaps in the contour and then open to get rid of external blobs, they are very simple to use. After you get a nice contour you can apply segmentation or hough transform, check the output and look for a way to select the right region. Optical flow might also work but I never used it.

The general approach of doing computer vision is:
1. Think of a solution
2. Implement it step by step and check the result of each step.
3. Analyze the result at each step. Is it what you expected it to be? Is it good enough to proceed to the next step? E.g. do the morphological filters you just applied remove the blobs you don't care about and fill the holes in the structure without damaging it too much?
4. Fiddle with the parameters until you get something usable or you give up. Maybe add a new step to the algorithm.
5. If you still can't get what you want go back to step 1.

Computer vision is always fiddly, especially when you don't use neural networks. The project is definitely doable in 2 months.

What modality and dimension are you working with? Are you working with CT images?
If so, how's the quality of the video/images?

If your resolution is high enough, you can try simpler methods:
Region growing in each frame and then measure the differences in each time frame via dice or jaccard. For medical images, I'd use ITK or VTK instead of OpenCV.

Attached: 1553664284101.gif (500x434, 598K)
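The Dice and Jaccard overlap measures mentioned above are a few lines of NumPy each — a sketch (function names are mine; both take binary masks of the same shape):

```python
import numpy as np

def dice(a, b):
    """Dice coefficient between two binary masks (1.0 = identical)."""
    a, b = a.astype(bool), b.astype(bool)
    inter = np.logical_and(a, b).sum()
    total = a.sum() + b.sum()
    return 2.0 * inter / total if total else 1.0

def jaccard(a, b):
    """Jaccard index (intersection over union) between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    union = np.logical_or(a, b).sum()
    return np.logical_and(a, b).sum() / union if union else 1.0
```

Comparing each frame's region against the previous frame's gives a per-frame change signal you can feed into the same peak/frequency analysis as the other approaches.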

What stops you from just finding a frame from the video where it's fully expanded and a frame where it's fully contracted, then just making the computer guess which state it is in, and doing some basic math to find the average time between the two states?

This seems like 30 mins of work...
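The two-reference-frame idea above, sketched out. Assumes you can grab one fully expanded and one fully contracted frame by hand; the function names and the nearest-reference classification rule are mine:

```python
import numpy as np

def classify_states(frames, expanded_ref, contracted_ref):
    """Label each frame 0 (expanded) or 1 (contracted) by which of the
    two reference frames it is closer to, using mean absolute pixel
    difference as the distance."""
    states = []
    for f in frames:
        d_exp = np.abs(f.astype(float) - expanded_ref).mean()
        d_con = np.abs(f.astype(float) - contracted_ref).mean()
        states.append(0 if d_exp <= d_con else 1)
    return states

def mean_period(states, fps):
    """Average time in seconds between successive expanded->contracted
    transitions; None if fewer than two beats were seen."""
    onsets = [i for i in range(1, len(states))
              if states[i] == 1 and states[i - 1] == 0]
    if len(onsets) < 2:
        return None
    return float(np.mean(np.diff(onsets))) / fps
```

Frequency is then just `1 / mean_period(...)`. Whether it is really 30 minutes of work depends on how cleanly the two states separate in real footage.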

Thank you for the input, it really helped a lot and ill be sure to post the finished product if I'm successful.
I typically do wet lab work, but this is only a rotation and he recommended I learn to program so I can create tools like this in the future. My thesis position is already set up so I have the next 4 years to learn, but only 2 months on my current project.

It will eventually need to be calculated in real time on a live feed.

So it's going to need to detect a cell in a live video and then find the frequency at which it beats? You're still gonna need to take averages.

apparently simply subtracting each frame from the last gives you an idea of whether it's moving or not.

this is the absolute difference of 2 sets of 2 frames around the times I thought I saw it move.
judging by the numbers you should be able to detect movement of the cells' edges by applying a binary mask to that. That shouldn't be a problem to do it in real time either, unlike more complicated transforms.

I'll try to make a gif of that stuff, see if it matches.

Attached: AnonCells.jpg (515x666, 134K)

Attached: chrome_424719_27032019.png (722x439, 250K)

Attached: beat.webm (280x432, 492K)

>Cont.

Ok, when you put it back together it looks like that.
Seems to work, just need to make some sort of mask or filter to tell movement apart from the noise, should be easy looking at the typical values.

anyway I'm going to sleep, good luck with this

Attached: output_jMPL77.gif (332x248, 1.17M)

you got beat to the punch

Now I feel bad, because clearly OP's boss gave him this trivial problem so he could learn, not so we could solve it for him.

Hey, how did you make the webm? I know how to get the little graph at the bottom, but how did you get the other webm embedded and synced?

idk ask the guy who made it

we didn't solve shit though, my method still needs a way to count frequencies (i dunno, check histograms, too tired to give a shit anymore) and the other guy hasn't said how he did it.

Plus OP still needs to write the camera interface

Attached: b49.gif (444x250, 4M)

Writing the camera interface is probably way less relevant to his education than the actual OpenCV stuff.

Attached: Screenshot from 2019-03-27 21-12-04.png (808x756, 139K)

>neural networks
Is it just me, or are neural networks just a euphemism for "bruteforce"?

Yes, a deep and sophisticated kind of brute force.