Collection and construction of a library for computational narration generation
Building automated video creation systems through computational narration requires modelling narrative progression and designing algorithms that can generate novel structures via blending. Understanding narrative structure calls for formal, computable representations and for the extraction of temporal information; metaphors, imagery, and other narrative components must therefore be generated on the fly in response to user input. Computational narratology can produce adaptive storylines, poems, and music that respond to the reader or listener, and the discipline has adopted methods from humanities narratology that describe formal and/or logical structure. Its central concerns are how to represent narrative advancement and how to devise computational procedures that produce novel configurations through blending.

This repository therefore provides datasets and code for building such systems to assist in automatic video generation. The dataset comprises the following fields:

- Interpretable Cluster: a string of up to 2000 characters describing a group of related elements. These elements are used to construct a computational scene or narration consisting of seven distinct components.
- Subject: a term, stored as a string of up to 300 characters.
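As a minimal sketch of how a dataset record could be represented, the following Python dataclass mirrors the field descriptions above. The class and field names (`NarrationRecord`, `interpretable_cluster`, `subject`) are illustrative assumptions, not the repository's actual schema; only the character limits (2000 and 300) and the seven-component scene count come from the description.

```python
from dataclasses import dataclass

# Limits taken from the dataset description; names below are hypothetical.
MAX_CLUSTER_CHARS = 2000  # "Interpretable Cluster" field limit
MAX_SUBJECT_CHARS = 300   # subject term field limit
SCENE_COMPONENTS = 7      # a scene/narration is built from seven components


@dataclass
class NarrationRecord:
    interpretable_cluster: str  # related elements used to build a scene
    subject: str                # the topic term

    def __post_init__(self) -> None:
        # Enforce the character limits stated in the field descriptions.
        if len(self.interpretable_cluster) > MAX_CLUSTER_CHARS:
            raise ValueError(
                f"interpretable_cluster exceeds {MAX_CLUSTER_CHARS} characters"
            )
        if len(self.subject) > MAX_SUBJECT_CHARS:
            raise ValueError(f"subject exceeds {MAX_SUBJECT_CHARS} characters")


record = NarrationRecord(
    interpretable_cluster="storm; dark clouds; wind; rain; thunder; lightning; flood",
    subject="weather",
)
print(record.subject)  # -> weather
```

Validation in `__post_init__` keeps malformed rows from entering a pipeline early, which is useful when records are later assembled into seven-component scenes.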