Reference and Target Datasets with the Point-Descriptor-Precedence representation
There are four reference datasets, generated from the animations in slides 1-4 of Animation.pptx: reference movement patterns with one point, two points, four points, and two points for the length, width, and speed analysis. The animations were exported to video, and a sequence of images was then extracted from each video to create the static reference datasets (movement patterns). The target dataset consists of 46 vehicles and was created from the animation in Target.pptm, which contains a built-in macro that runs the simulation and saves it as a video. Images extracted from this video match the reference datasets exactly, so the exact matches of each reference movement pattern can be located in the target data. The code is written in Python and can be run in any Python IDE.
Steps to reproduce
1. Open the animations in Target.pptm and Animation.pptx in the MS PowerPoint application.
2. Add the following macro via the Macros dialog of PowerPoint:

    Sub PowerPointVideo()
        If ActivePresentation.CreateVideoStatus <> ppMediaTaskStatusInProgress Then
            ActivePresentation.CreateVideo FileName:=Environ("USERPROFILE") & "\Desktop\Your PowerPoint Video.mp4", _
                UseTimingsAndNarrations:=True, _
                VertResolution:=1080, _
                FramesPerSecond:=10, _
                Quality:=100
        Else
            MsgBox "There is another conversion to video in progress"
        End If
    End Sub

3. Run the macro. This creates a video of the animation at the path given in the FileName argument (the Desktop in the listing above).
4. Open the "video to images.py" file in any Python editor and run it on the generated videos to extract the image frames.
5. Using the PDP approach given in the paper, calculate the PDP representation of each extracted image and compare it with those of the reference images. A target image whose PDP representation matches that of a reference image is considered a match for that reference movement pattern.
6. Repeat this analysis with the two-point and four-point reference datasets.
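The frame-extraction step in "video to images.py" is presumably standard video decoding; a minimal sketch is shown below. The use of OpenCV, and the function and directory names, are assumptions for illustration; the published script may differ.

```python
def frame_filename(index):
    # Zero-padded name so extracted frames sort in temporal order.
    return f"frame_{index:04d}.png"

def extract_frames(video_path, out_dir="frames"):
    """Save every frame of video_path as a PNG image in out_dir.

    Sketch only: assumes OpenCV (cv2) is installed; the actual
    "video to images.py" script may use a different library.
    """
    import os
    import cv2  # assumption: OpenCV is used for decoding

    os.makedirs(out_dir, exist_ok=True)
    capture = cv2.VideoCapture(video_path)
    index = 0
    while True:
        ok, frame = capture.read()
        if not ok:  # no more frames in the video
            break
        cv2.imwrite(os.path.join(out_dir, frame_filename(index)), frame)
        index += 1
    capture.release()
    return index  # number of frames written
```

At 10 frames per second (the FramesPerSecond value in the macro), each second of animation yields ten static images for the PDP comparison.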