Smarter Reference Frame Discussion
Today's reference frame is a decay-type reference frame, designed to ensure detection of slowly moving objects. This has a negative impact on features like locate and tracking: the comet tail of motion makes the reported center of motion lag behind the real motion.
See the discussion on IRC. Joerg and I were in a private channel and forgot to change over to #motion, so here is the log:
[00:33] <joergw> the location of the center should be enough for now (for me). The direction of the movement could also be interesting, that's true.
[00:33] <Lavr> But for that to work, and for tracking to work I have been thinking about having a reference frame concept that uses a ref frame a little older.
[00:34] <Lavr> Problem today is that the object moving also makes the camera rediscover the background which is seen as part of the motion
[00:34] <joergw> Well, as you know, the algos are more my world. :-) Maybe I'll work on something that can dampen snow flakes a bit. Or large rain drops as well.
[00:35] <Lavr> The ref frame thing is important for tracking. It does not work well today.
[00:35] <joergw> I think that you cannot constantly track, while detecting motion. You have to follow an object, when it leaves a certain area and then try to resync to it (rediscover it).
[00:36] <joergw> so I thought of the 9 areas. When it moves from left to right and reaches the middle right area, reposition the cam, so that it reappears in the middle left area.
[00:36] <Lavr> True. But if you have a person passing in front of a wall then the wall that was just covered by a person is seen as changed pixels as the person moves along.
[00:37] <Lavr> So the center of motion becomes the person's new position plus the area he was in just before.
[00:37] <Lavr> And because of the way the ref frame is built it is actually worse.
[00:37] <Lavr> It is like the person drags a comet tail of motion behind him.
[00:38] <Lavr> And that makes the position reported by Motion wrong.
[00:38] <joergw> the only problem is that we are 'a bit behind'. when we know the direction, we could correct this.
[00:40] <Lavr> There are several things to consider. Only look at the NEW motion. Have a 2 second old reference frame (refreshed when camera moves).
[00:41] <Lavr> Or freeze the reference frame to the frame just before Motion is detected.
[00:41] <Lavr> For a max period of maybe 10 seconds.
[00:42] <joergw> But we have already turned off decaying when motion is above threshold. What difference would that make then? Ah... I see...
[00:43] <joergw> The object doesn't cover the ref frame, when it moves. Well, why not.
[00:43] <Lavr> All ideas that we could test.
[00:44] <Lavr> Working with the algos is the most fun part of Motion.
[00:44] <joergw> We could try to entirely stop updating the ref frame until after post_capture.
[00:45] <joergw> that would be easy to achieve without changing a lot.
[00:46] <Lavr> Yes. But if motion continues for a long time we may end up never stopping. Example. You turn off light. Before it was light. Now it is dark. Motion forever.
[00:47] <Lavr> So we can only do it for a short duration.
[00:47] <Lavr> But for tracking it may be OK because you move the camera, which has to trigger a new reference frame anyway.
[00:48] <joergw> yah, then delaying it will be the only feasible solution - besides a timer, but that can trigger some other funny effects during reset of the frame.
[00:48] <Lavr> Yes. It is not that easy. Which makes it interesting to do.
[00:49] <joergw> As a proof of concept, we can start with the 'no update at all' solution. Just to see what happens. If it turns out to be a good idea, we can develop something, that will really work.
[00:51] <joergw> We should keep in mind that not all environments provide enough ram to always keep 10 frames in a ringbuffer to delay the ref frame. There must be some smart kind of decision, that doesn't need too much tuning from the user.
[00:51] <Lavr> Logic says that 'no update' will not work (the light off example). But using no delay until motion is detected, then the frozen frame for 2 seconds, and then a 2 second old frame until motion is no longer detected: that may work.
[00:52] <joergw> And it's a pity that we didn't talk about that on the channel, but in a private conversation.
[00:52] <Lavr> Yeah.
[00:52] <joergw> nono, I don't want to implement 'no update' - it's just for a lab test. To see how it behaves.
[00:54] <Lavr> To avoid the buffering the old reference frame could be an average frame. You average for 1-2 seconds and use that average frame for 1-2 seconds. While you start averaging in a new frame.
[00:54] <Lavr> That would only cost two buffers then
[00:54] <Lavr> I will post this log in a TWiki topic
[00:55] <joergw> ok, that's a good idea. Dag should be able to read it, because he is also working on a new ref frame implementation.
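The two-buffer averaging idea from the log above could be sketched roughly like this. This is only a sketch under assumed names and sizes (`struct avg_ref`, `W`, `H` and the integer accumulation are illustrative, not Motion's actual code):

```c
#include <assert.h>
#include <string.h>

#define W 320
#define H 240
#define PIXELS (W * H)

/* Two-buffer averaging reference frame:
 * 'active' is the finished average currently used as the reference,
 * 'building' accumulates the sum of incoming frames. Every 'period'
 * frames the accumulated average replaces the reference, so the
 * reference always lags the live image by up to two periods. */
struct avg_ref {
    unsigned long building[PIXELS]; /* running sum for the next period */
    unsigned char active[PIXELS];   /* finished average used as reference */
    int count;                      /* frames accumulated so far */
    int period;                     /* frames per averaging period */
};

static void avg_ref_init(struct avg_ref *r, int period)
{
    memset(r, 0, sizeof(*r));
    r->period = period;
}

/* Feed one new frame; returns a pointer to the current reference frame. */
static const unsigned char *avg_ref_update(struct avg_ref *r,
                                           const unsigned char *frame)
{
    int i;

    for (i = 0; i < PIXELS; i++)
        r->building[i] += frame[i];

    if (++r->count == r->period) {
        /* Period complete: the new average becomes the reference. */
        for (i = 0; i < PIXELS; i++) {
            r->active[i] = (unsigned char)(r->building[i] / (unsigned long)r->period);
            r->building[i] = 0;
        }
        r->count = 0;
    }
    return r->active;
}
```

With e.g. 10 fps and `period = 15`, the reference is refreshed every 1.5 seconds, and as noted in the log the scheme only costs two extra buffers.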
- 23 Aug 2007
Here is some experimental code I have done so far. It has a new conf. option, referense_image_age. This option defines how many frames old the ref frame should be.
It works in this way:
It declares a ring buffer of virgin images, referense_image_age frames long.
If no motion is detected, it takes the oldest frame in the ring as the ref. frame.
If we have motion, the code works as it does today, and the virgin buffer is cleared.
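A rough sketch of that ring-buffer scheme (the names and the fixed `RING_LEN` stand in for the real referense_image_age option; this is illustrative, not the actual patch code):

```c
#include <stdlib.h>
#include <string.h>

#define RING_LEN 25 /* stand-in for the referense_image_age option */

/* Ring buffer of recent virgin images. While no motion is detected,
 * the oldest stored frame serves as the reference frame; on motion
 * the ring is cleared and the normal reference logic takes over. */
struct ref_ring {
    unsigned char *frames[RING_LEN];
    int head; /* next slot to write */
    int fill; /* number of valid entries */
};

static int ring_init(struct ref_ring *r, size_t frame_size)
{
    int i;

    r->head = 0;
    r->fill = 0;
    for (i = 0; i < RING_LEN; i++) {
        r->frames[i] = malloc(frame_size);
        if (!r->frames[i])
            return -1;
    }
    return 0;
}

/* Store a copy of the latest virgin image. */
static void ring_push(struct ref_ring *r, const unsigned char *img,
                      size_t frame_size)
{
    memcpy(r->frames[r->head], img, frame_size);
    r->head = (r->head + 1) % RING_LEN;
    if (r->fill < RING_LEN)
        r->fill++;
}

/* No motion: the oldest stored frame becomes the reference. */
static const unsigned char *ring_oldest(const struct ref_ring *r)
{
    if (r->fill < RING_LEN)
        return r->frames[0];   /* ring has not wrapped yet */
    return r->frames[r->head]; /* head points at the oldest when full */
}

/* Motion detected: flush the ring, fall back to the normal ref frame. */
static void ring_clear(struct ref_ring *r)
{
    r->head = 0;
    r->fill = 0;
}
```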
This was made for a test to see if it was possible to detect small slow moving objects.
I can't say whether it improved things or not, but please feel free to test it and share ideas on how to proceed.
I have read the above and it sounds interesting to test.
I just wanted to share some of my test code.
- 24 Aug 2007
The code as it is today should detect slow moving objects perfectly well, because the reference frame keeps part of the motion that was detected before for some time (decay). We stop the decaying when motion is above 2 * threshold in order to have a more precise locate result for fast moving objects. The goal should be to improve the location of very fast moving objects even more. That's at least what I understood from all the discussions in the past. Please correct me if there is a problem with slow moving objects as well.
- 25 Aug 2007
I have (before the patch was available) temporarily disabled the ref frame updates during an event to see what happens. I think we are on the right track when we try to find a way to have some kind of 'clean' background to compare the actual motion against. The question is just: how to achieve this?
I have attached two small movies as a proof of concept.
- 25 Aug 2007
There isn't a real problem with slow and small moving objects. I have studied how my ducks move around on my farm. I wrote that test code just to see if there was a difference, and there is if I have a high framerate. What I mean by slow here is less than one pixel per 2 frames, e.g. taking 60 sec to pass the camera's 320 pixels at 10 frames/sec.
If we look at the decay, which takes half of the previous frame each time, then it takes at most 4 frames to get the old image down into the noise.
Have I done this calculation correctly?
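Assuming the decay step averages the old reference with the new image, ref = (ref + img) / 2, an old pixel difference d is halved every frame, leaving d / 2^n after n frames. A quick check of the 4-frame claim (the noise floor of 16 out of 255 is an assumption for illustration, not a value from Motion):

```c
#include <assert.h>

/* One decay step: the new reference is the mean of the old reference
 * pixel and the current image pixel, so an old difference is halved
 * on every frame. */
static int decay_step(int ref, int img)
{
    return (ref + img) / 2;
}

/* Number of halvings needed until a pixel difference 'diff' drops to
 * the noise floor 'noise' or below. */
static int frames_to_noise(int diff, int noise)
{
    int n = 0;

    while (diff > noise) {
        diff /= 2;
        n++;
    }
    return n;
}
```

With a full-scale difference of 255 and a noise floor around 16, the residual drops to the noise level after 4 halvings (255 → 127 → 63 → 31 → 15), so the 4-frame estimate looks right under these assumptions.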
Then I tested what happens if I delay the ref. frame some more, e.g. 10 seconds. It detects my ducks' movements more often, so I guess it helps a bit. The problem is that if I just take a 10 sec. old frame as ref, I get a "shadow" movement because the ref. frame still contains the original movement. That's why I simply cleared the buffer on movement and took the frame that caused the movement as ref. frame. So it only helps detecting the first movement, and doesn't do anything at all to help tracking.
I think we get the same problem if we take an average over 1 sec and use that as a ref. frame. It is OK when we enter the picture, but what happens if we move inside the picture for more than 1 sec? Then we are in the ref. image as a shadow.
The big problem with motion detection is how to select the ref. image.
If we have a ref. image with a "static movement", i.e. an object that exists in our ref. image but not in the current picture, it results in detected movement the whole time until we take a new ref. image. Maybe we can do it this way:
Take the current image as ref.
Keep it until it times out, or until there is no movement in the generated motion picture (imgs.out) for e.g. 3 frames, i.e. we find that we have a static object and have to replace our ref. image.
Another idea is to take the current image as the next ref. image, but in the positions where we have detected motion we copy from the old ref. image; however, if there isn't any movement in the imgs.out picture, we take it from the current picture (we have a static object).
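The per-pixel idea above could be sketched roughly like this (illustrative only; `motion_mask` stands for the thresholded motion picture in imgs.out, and the static-object test is simplified to a per-pixel age counter):

```c
/* Per-pixel selective reference update:
 * - where no motion is flagged, refresh the reference from the image;
 * - where motion is flagged, keep the old reference pixel, so the
 *   moving object never enters the reference;
 * - if a pixel stays flagged for 'static_limit' consecutive frames,
 *   treat it as a static object and absorb it into the reference. */
static void update_ref(unsigned char *ref, const unsigned char *img,
                       const unsigned char *motion_mask,
                       unsigned short *age, int npix, int static_limit)
{
    int i;

    for (i = 0; i < npix; i++) {
        if (!motion_mask[i]) {
            ref[i] = img[i]; /* background pixel: normal update */
            age[i] = 0;
        } else if (++age[i] >= static_limit) {
            ref[i] = img[i]; /* static object: accept into background */
            age[i] = 0;
        }
        /* else: keep ref[i], the moving object stays excluded */
    }
}
```

The age counter is what decides when a stopped object becomes background; picking that limit well is exactly the tuning question discussed later in this topic.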
- 26 Aug 2007
I have attached avg_ref_frame.avi. This is an example of how it can look if we use a 6 sec average to build up the ref. frame. You can see how the motion rectangle jumps when the ref. frame is exchanged. You can also see what happens when I don't move for a while and then start to move again: the rectangle starts from the position where I began to move.
- 26 Aug 2007
I think we've got to read a bit. There's a lot of theory floating around on the web. I had a quick look today and I believe we can learn a lot from such stuff. The final solution will most probably be a combination of at least two different algos, but that may be worth it.
When I have found something really interesting, I'll share the source here.
- 27 Aug 2007
After reading some theory, it's time to come back with a feasible solution:
The patch 'smarter_ref_frame_v1.diff' patches SVN r223 and implements a new algo to build a reference frame that tries to contain only the background of a scene. When an object moves into the scene, it is excluded from the ref frame. This way we always see the whole object, independent of its speed. It can even stop and still be recognized. Since objects sometimes tend to stay for a long time, it is important to declare such objects part of the background after some time. The new code also remembers a 'static' object for a while, so that we can immediately remove it again when it starts moving. This memory is limited in time as well, to avoid other bad side effects like object overlaying.
All this is based on pixels only, because object recognition is done at the detection algo level (labeling) and is unnecessary in the context of the ref frame. The code is working as expected under lab conditions, but a first test in my production environment revealed some problems that need fine-tuning. Especially the smartmask feature seems to be incompatible with the new code, but I'm working on it.
Nevertheless I encourage you to test this patch - it's fun! In motion.c line 1531, you can easily make the ref frame available in a timelapse video (swap the two lines) and line 1566 copies the ref frame to the built-in webserver for watching the show online.
Your feedback is appreciated!
- 16 Sep 2007
Got a major bug in the v1 patch... wait until a fix is uploaded please.
- 17 Sep 2007
I've updated the patch file with some changes. It should work ok now. The problem with smartmask should be fixed as well. I'll report how it works in production tomorrow night, when I have collected some material during the day.
- 17 Sep 2007
After reviewing the material that I have captured since last night, I can say that the object separation is working VERY well! Locating objects is now possible with very high precision that we have never seen before. But I still see problems with changing light conditions. I thought it would be enough to calculate the ref frame only once every second, but that's too slow with higher framerates - 5 fps in my case. It also turns out that more frames are recorded with the new ref frame, because objects that stop moving for a while are still seen as motion. Movies are much more complete and less jumpy, but when recording pictures only, you may want to increase the threshold to compensate for this.
I will work on the algo to make it run with every frame and post a new patch as soon as it is tested.
- 18 Sep 2007
Another issue is performance... it currently takes lots of CPU. But the algo has to work in first place. Optimization has to wait.
- 19 Sep 2007
Update: The smartmask issue has been resolved. This should also improve smartmask behaviour with the 'old' ref frame. Moving objects can no longer cause 'blind' areas when moving around for some time. The performance problem is also solved for now. I have rewritten the code a bit. The only outstanding problem is noise_tune, but I'm sure I will fix this soon. Then I will release a formal patch so that everybody can test it. We have to find some good values for two different timers, which I would like to have as fixed values in the code instead of adding new config options.
There is no updated patch for now! I will release one as soon as noise_tune is working properly again.
- 28 Sep 2007
The patch topic has been created: SmarterReferenceFramePatch
- 04 Oct 2007