Motion Track


Motion Tracker
What is motion tracking?
Motion tracking - also known as 'match moving' or 'camera tracking' - is the reconstruction of the original recording camera (position, orientation, focal length) from a video, so that 3D objects can be inserted into live footage with their position, orientation, scale and motion matched to the original footage.
For example, if you have original footage in which you want to place a rendered object, the footage must be correctly analyzed and the 3D environment (the recording camera itself as well as distinctive points with their positions in three-dimensional space) must be reconstructed so the perspective and all camera movements are matched precisely.
The Object Tracking function can be seen as a Motion Tracking function. Detailed information can be found under Object Tracker.
This is a complex process and must therefore be completed in several steps:
Distinctive, easy to follow points (referred to as Tracks) will be ascertained in the footage and their movement followed. This can be done automatically, manually or as a combination of both and is called 2D Tracking.
Using these Tracks, a representative, spatial vertex cluster (consisting of Features) will be created, and the camera parameters will be reconstructed.
Several helper tags (Constraint tags) can now be used to calibrate the Project, i.e., the more or less disoriented vertex cluster in 3D space (of course, including the comparatively clearly defined camera) will be oriented according to the world coordinate system.
Tip:
The Camera Calibrator function has nothing to do with Motion Tracking. However, it can be used to ascertain the focal length with which the footage was filmed (which in turn helps when solving the 3D camera).
In Brief: How does Motion Tracking work?
Motion Tracking is based on the analysis and tracking of marked points (Tracks) in the original footage. Positions in 3D space can be calculated based on the different speeds with which these Tracks move depending on their distance from the camera (this effect is known as parallax scrolling).
Horizontal camera movement from left to right.
Note the difference between footage 1 and 2 in the image above. The camera moves horizontally from left to right. The red vase at the rear appears to move a shorter distance (arrow length) than the blue vase. These differences between parallaxes can be used to define a corresponding location in 3D space (from here on referred to as a Feature) relative to the camera.
Logically, Motion Tracking is made easier if the footage contains several parallaxes, i.e., regions with different parallax scroll speeds due to their distance from the camera.
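To make the parallax idea concrete, here is a minimal Python sketch of how depth could be recovered from a purely horizontal camera move like the one in the vase example above. The function name and all numbers are illustrative assumptions, not part of Cinema 4D's actual solver:

```python
def depth_from_parallax(focal_px: float, baseline: float, disparity_px: float) -> float:
    """For a sideways camera move of `baseline` scene units, a point at depth Z
    shifts by disparity = focal_px * baseline / Z pixels, so Z can be recovered
    by inverting that relation."""
    return focal_px * baseline / disparity_px

focal_px = 1500.0  # assumed focal length, expressed in pixels
baseline = 0.5     # assumed camera travel between the two frames, in meters

# The blue (near) vase shifts farther in the image than the red (far) one:
print(depth_from_parallax(focal_px, baseline, 90.0))  # near vase: ~8.3 m away
print(depth_from_parallax(focal_px, baseline, 25.0))  # far vase: 30.0 m away
```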
Imagine you have footage of a flight over a city with a lot of skyscrapers: a perfect scenario for Motion Tracking in Cinema 4D with clearly separated buildings, streets in a grid pattern and clearly defined contours.
Wide open spaces or nodal pans (the camera rotates on the spot) on the other hand are much more difficult to analyze because the former lacks distinctive points of reference and the latter doesn’t offer any parallaxes. You must then select a specific Solve Mode in the Reconstruction tab to define which type of Motion Tracking should take place.
Motion Tracking workflow for camera tracking
Simplified Motion Tracking workflow.
Proceed as follows if you want to reconstruct the camera using a video sequence:
In the main menu: Select Motion Tracker | Full Solve and select the footage you want to solve. The workflow in the image above will now take place up to and including 3D reconstruction.
All went well if Deferred Solve Finished is displayed in the status bar (see also 3D Reconstruction). Now use the Constraint tags to calibrate the 3D reconstruction, then make a test render to see if it's good enough. If not, the 2D Tracks have to be fine-tuned (see What are good and bad Tracks?) and the reconstruction and calibration must be repeated.
If the status bar contains a message other than Deferred Solve Finished, you can move straight to fine-tuning the 2D Tracks (see What are good and bad Tracks?) and then repeat the reconstruction and calibration until the 3D reconstruction is successful.
This is a simplified representation of the workflow. Of course, flawed reconstructions can result if you select the wrong Solve Mode or if you define an incorrect Focal Length / Sensor Size for the Motion Tracker object (Reconstruction tab). However, the most important - but also the most time-consuming - work is fine-tuning the 2D Tracks.
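In practice, fine-tuning often boils down to culling Tracks that the current solution explains poorly and solving again. The sketch below shows that loop in plain Python; `solve()`, the result object and its `reprojection_error()` method are hypothetical stand-ins, not Motion Tracker API:

```python
def refine_and_resolve(tracks, solve, max_error_px=2.0, max_rounds=5):
    """Repeatedly solve, dropping 2D tracks with high reprojection error,
    until the 3D reconstruction succeeds or the rounds run out."""
    for _ in range(max_rounds):
        result = solve(tracks)  # one 3D reconstruction pass
        if result.succeeded:
            return result
        # Keep only tracks that the current (failed) solution explains well.
        tracks = [t for t in tracks if result.reprojection_error(t) <= max_error_px]
    return None  # still failing: the tracks need manual fixing
```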
Tip:
A video sequence can be affected by more or less pronounced lens distortion (lines that are actually straight appear curved; extreme example: the fisheye effect), depending on the recording camera used. If this effect is too pronounced, or Motion Tracking even fails because of it, it may be necessary to create a lens profile using the Lens Distortion tool and load it here.
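For illustration, the following Python snippet shows what such a profile corrects, reduced to the two radial terms of the common Brown-Conrady model; real lens profiles contain more terms, and the coefficients below are made up:

```python
def distort(x: float, y: float, k1: float, k2: float) -> tuple:
    """Map an undistorted, normalized image point to its distorted position.
    Straight lines stay straight only when k1 = k2 = 0."""
    r2 = x * x + y * y
    scale = 1.0 + k1 * r2 + k2 * r2 * r2
    return x * scale, y * scale

# A point near the frame edge under mild barrel distortion (k1 < 0)
# is pulled toward the image center:
print(distort(0.8, 0.6, k1=-0.15, k2=0.02))  # -> (0.696, 0.522)
```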
General
There is only one way to check whether the process was successful: at the very end, when you see whether the 3D objects added to the footage look realistic, i.e., whether they jump or move unnaturally.
If they don't look realistic, you will most likely have to fine-tune the 2D Tracks or create them again. You can modify a few settings, but Motion Tracking depends largely on the quality of the Tracks. The Motion Tracker offers as much support as possible with its Auto Track function, but in the end you will have to judge for yourself which Tracks are good and which are bad - and how many new Tracks you will have to create yourself (see also What are good and bad Tracks?).
Tip:
Note that a unique Motion Tracker layout is also available. It can be selected from the Layout drop-down menu at the top right of the GUI.
Project setup example
After successfully creating a camera reconstruction, objects will have to be positioned correctly and equipped with the correct tags.
An invasion of red spheres.
In this example, an Emitter tosses spheres onto a table; they roll over the table's edge, collide with the speaker and fall to the floor. This scene uses (invisible!) proxy objects that serve as Dynamics collision objects and, in the case of the monitor, conceal the spheres behind it:
Several simple, effectively placed proxy objects are used to generate correct shadows and collisions.
Each plane was oriented using a Planar Constraint tag's Create Plane function (the Polygon Pen is also perfectly suited for this; enable the Snap function and activate 3D Snapping and Axis Snap). Each of these proxy objects has the following tags:
Compositing tag with Compositing Background enabled and Cast Shadow disabled.
Texture tag with the footage set to Frontal Mapping.
A Rigid Body tag, if necessary, to define the object as a collision partner.
As a result of these settings, the proxy objects are not visible for rendering, except for the shadows (of course, separate light sources have to be created and positioned correctly for the shadows).
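The same tag setup can also be scripted. Here is a hedged Cinema 4D Python sketch that gives a proxy plane the tags listed above; the parameter symbols follow the C4D Python SDK as far as I recall them, so verify them against your version before relying on this:

```python
import c4d

def make_proxy(doc: c4d.documents.BaseDocument, footage_material: c4d.BaseMaterial):
    plane = c4d.BaseObject(c4d.Oplane)

    # Compositing tag: receive the footage/shadows but stay invisible.
    comp = plane.MakeTag(c4d.Tcompositing)
    comp[c4d.COMPOSITINGTAG_BACKGROUND] = True
    comp[c4d.COMPOSITINGTAG_CASTSHADOW] = False

    # Texture tag: project the footage material with frontal mapping.
    tex = plane.MakeTag(c4d.Ttexture)
    tex[c4d.TEXTURETAG_MATERIAL] = footage_material
    tex[c4d.TEXTURETAG_PROJECTION] = c4d.TEXTURETAG_PROJECTION_FRONTAL

    # Rigid Body tag (optional): make the plane a collision partner;
    # configure collision details in the tag itself.
    plane.MakeTag(c4d.Tdynamicsbody)

    doc.InsertObject(plane)
    return plane
```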


C4D's motion tracking and tags


C4D's help documentation


Using the three Constraint tags: Position, Planar, Constraint


            Thanks to the calibrated camera, squash-playing monsters can be composited convincingly into real-world photos.
            A common task for almost every 3D artist is compositing rendered objects into real-world scenes - stills or animations. A major problem when doing this is the unknown camera angle with which the real-world material was created - in particular perspective and focal length. These settings must also be reflected in the Cinema 4D camera that is used to render the 3D objects. Only if these settings match can the 3D objects be composited convincingly into the real-world images (of course the lighting must also be set up correctly in Cinema 4D).
Using the Camera Calibrator tag you can load a photo/image (referred to in the following as 'reference image') and interactively reconstruct the camera's focal length, orientation and position.
            Tip:
            In addition to the objects, the calibrated camera and the light setup in the scene above there is also a Plane used as a floor (for shadow and reflection) and a Background object onto which - in both cases - the reference image is projected using camera mapping.
            For the reconstruction, Cinema 4D needs 2 vanishing points and 2 separate, vertically stacked planes. To define vanishing points, parallel lines must be drawn on the image (when displayed in a perspective view, these lines will create a vanishing point):


Lines parallel to the Z axis are marked with blue arrows; lines parallel to the X axis with red arrows. In the following steps, at least 2 lines must be marked per axis.
• In the Camera Calibrator tag's Calibrate tab, click the Add Line button. Click and drag the line's end points and place them on a corresponding X-axis line in the image.


Shift-clicking a vanishing-point line cycles its assignment through the X, Y and Z axes.
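Once the lines are assigned, the calibration can derive the focal length from the resulting vanishing points. The classical relation, sketched in Python below with invented coordinates: with the principal point at the image center, two vanishing points of perpendicular scene directions u and v satisfy (u - c) · (v - c) = -f², where f is the focal length in pixels:

```python
import math

def focal_from_vanishing_points(u, v, center):
    """Focal length in pixels from two vanishing points of perpendicular
    scene directions, assuming the principal point sits at `center`."""
    dx1, dy1 = u[0] - center[0], u[1] - center[1]
    dx2, dy2 = v[0] - center[0], v[1] - center[1]
    dot = dx1 * dx2 + dy1 * dy2
    if dot >= 0:
        raise ValueError("unusable vanishing points (directions not perpendicular?)")
    return math.sqrt(-dot)

# X- and Z-axis vanishing points in a 1920x1080 image (assumed values):
print(focal_from_vanishing_points((3050.0, 590.0), (-720.0, 480.0), (960.0, 540.0)))
# -> roughly 1875 pixels
```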


Click the Create Background Object button to create a Background object, including the texture. Objects in front of the image can then be rendered against it as a backdrop. For more complex composites, e.g., if an animated object should disappear behind another object in the image, a dummy object must be modeled onto which the image is projected from the camera's point of view (see also Projecting the reference image for rendering).


Camera solving means computing and defining a camera that matches the original one exactly! The Camera Calibrator tag is impressively capable; it can match almost 100%.


You're using this place as a cloud drive, buddy.

