Brauner - Cognitive and Ethical Dimensions of Surrealist Art Engagement
Description
This dataset was designed to explore the interplay between cognitive adaptability, ethical sensitivity, and symbolic complexity, inspired by Victor Brauner's surrealist artworks. It integrates simulated data and features derived from neuroaesthetic principles, psychological constructs, and visual metrics, and it supports machine learning and interpretive analysis of decision-making processes under uncertainty.

Features
Symbolic Density: Measures the complexity of symbolic motifs within the artworks, ranging from minimal to intricate compositions.
Visual Ambiguity: Quantifies perceptual tension and the interpretive challenges presented by the art.
Color Composition: Reflects tonal harmony and balance, a key aesthetic element.
Eye-Tracking Metrics: Simulated patterns representing gaze fixation and engagement with specific visual elements.
EEG Patterns: Simulated neurophysiological data indicating neural engagement with artistic stimuli.
Intuitive Decision-Making Scale (IDMS): A psychological construct capturing the intuitiveness of participants' decision-making.
Cognitive Flexibility: Evaluates adaptability in decision-making across ambiguous contexts.
Ethical Sensitivity: Represents participants' ability to discern ethical considerations in complex scenarios.
Predicted Engagement: A binary or continuous variable indicating the likelihood of ethical engagement or cognitive adaptability.

Artworks Analyzed
Composition with Portrait (1949), Self-Portrait (1931), Prelude to Civilization (1954), La Rencontre du 2 bis rue Perrel (1934), Le Grand Transparent (1947), The Wolf Table (1947), and La Fin et le Début (1947).

Purpose
The dataset underpins research into the cognitive and ethical dimensions of leadership and decision-making, with applications in behavioral economics, experimental psychology, and neuroaesthetics. It serves as a foundation for training machine learning models and for interpretive analyses, enabling the study of non-linear relationships and feature interactions.

Applications
Leadership Development: Training leaders to navigate ambiguity and ethical dilemmas through exposure to artistic stimuli.
Behavioral Economics: Investigating how aesthetic experiences influence decision-making, risk tolerance, and ethical trade-offs.
Machine Learning Models: Testing the efficacy of predictive models such as Random Forests and Gradient Boosting for understanding cognitive and ethical engagement.

Format
File Type: CSV
Number of Rows: Determined by the simulation process (e.g., 1,000 observations).
Columns: The features described above, supplemented by interaction terms for advanced analyses. A short sketch of loading and inspecting the file appears at the end of this section.

Ethical Considerations
The dataset uses simulated data to avoid privacy concerns and the variability of subjective interpretations, ensuring controlled and replicable results.

Future Directions
This dataset provides a robust basis for empirical validation with real-world participant data and for broader applications in economic psychology and leadership studies.
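As a quick orientation, the following minimal sketch loads the CSV with pandas and checks its structure. The column names are assumptions inferred from the feature list above, not the file's confirmed header; adjust them to match the actual CSV.

    import pandas as pd

    df = pd.read_csv("simulated_data_with_paintings.csv")

    # Hypothetical column names inferred from the feature list above.
    expected = [
        "symbolic_density", "visual_ambiguity", "color_composition",
        "eye_tracking", "eeg_pattern", "idms",
        "cognitive_flexibility", "ethical_sensitivity",
        "predicted_engagement",
    ]
    missing = [c for c in expected if c not in df.columns]
    print(f"{len(df)} rows, {df.shape[1]} columns; missing: {missing}")
    print(df.describe())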
Files
Steps to reproduce
Reproducing the dataset involves setting up the environment, running the Python scripts, and ensuring all dependencies and data files are correctly configured. The detailed steps are below; illustrative code sketches for steps 3-5 appear at the end of this section.

1. Environment Setup
Install Python: Ensure Python (version 3.8 or above) is installed on your system.
Install Dependencies: Install the required Python libraries by running:
    pip install numpy pandas matplotlib scikit-learn shap lightgbm
If you encounter dependency issues, refer to the provided requirements.txt file (if available).

2. Data Preparation
Source Files: Use the provided files, stored in the same directory for easy access:
    simulated_data_with_artworks.py
    simulated_data_with_paintings.csv
    ML_analysis_artworks.py
    model_improvement.py
    verificare_tabel.py
Input Dataset: The simulated_data_with_paintings.csv file serves as the primary input, containing features such as symbolic density, visual ambiguity, and ethical sensitivity.

3. Generating the Dataset
Run the Data Simulation Script: Execute simulated_data_with_artworks.py to simulate and preprocess the data:
    python simulated_data_with_artworks.py
This script generates or updates the dataset, adding key features derived from simulated neurophysiological and psychological metrics (see sketch 1 below).
Verify Data Consistency: Use verificare_tabel.py to validate that all necessary features and interaction terms are present in the dataset:
    python verificare_tabel.py
This ensures the data conforms to the expected structure and contains the required columns for analysis (see sketch 2 below).

4. Machine Learning Analysis
Train and Evaluate Models: Execute ML_analysis_artworks.py to train machine learning models, including Random Forest and Gradient Boosting, on the dataset:
    python ML_analysis_artworks.py
This script outputs key performance metrics (e.g., accuracy, MSE) and feature importance values (see sketch 3 below).
Improve Model Performance: Run model_improvement.py to fine-tune hyperparameters and improve model performance (see sketch 4 below):
    python model_improvement.py

5. Results and Validation
Generate Final Table: Create the detailed results table from the feature importance values, SHAP analysis, and correlation metrics produced by the scripts.
Visualize Data: Optionally, use visualization tools (e.g., Matplotlib) to plot feature distributions, SHAP summary plots, and feature importance rankings (see sketch 5 below).

6. Documentation and Notes
Code Annotations: Ensure all scripts are well documented with comments that explain each step.
Ethical Considerations: Since this dataset is simulated, document its limitations and note the need for validation with real-world data.

By following these steps, you can reproduce the dataset, conduct the machine learning analyses, and validate the results for further research and applications.
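Illustrative code sketches

Sketch 1 - Data simulation. A minimal sketch of the kind of simulation simulated_data_with_artworks.py performs. The distributions, column names, interaction terms, and engagement rule below are assumptions for illustration, not the script's actual logic.

    import numpy as np
    import pandas as pd

    rng = np.random.default_rng(42)   # fixed seed for replicable results
    n = 1_000                         # e.g., 1,000 observations

    df = pd.DataFrame({
        "symbolic_density":      rng.uniform(0, 1, n),
        "visual_ambiguity":      rng.uniform(0, 1, n),
        "color_composition":     rng.uniform(0, 1, n),
        "eye_tracking":          rng.normal(0.5, 0.15, n),
        "eeg_pattern":           rng.normal(0.5, 0.15, n),
        "idms":                  rng.normal(0.5, 0.15, n),
        "cognitive_flexibility": rng.normal(0.5, 0.15, n),
        "ethical_sensitivity":   rng.normal(0.5, 0.15, n),
    })

    # Interaction terms for advanced analyses (see "Format" in the description).
    df["density_x_ambiguity"] = df["symbolic_density"] * df["visual_ambiguity"]
    df["flex_x_ethics"] = df["cognitive_flexibility"] * df["ethical_sensitivity"]

    # Hypothetical binary engagement label driven by the psychological
    # constructs plus noise; the weights are arbitrary illustration values.
    score = (0.4 * df["ethical_sensitivity"]
             + 0.4 * df["cognitive_flexibility"]
             + 0.2 * df["visual_ambiguity"]
             + rng.normal(0, 0.05, n))
    df["predicted_engagement"] = (score > score.median()).astype(int)

    df.to_csv("simulated_data_with_paintings.csv", index=False)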
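Sketch 2 - Consistency check. An illustrative check in the spirit of verificare_tabel.py; the required column list is an assumption matching sketch 1.

    import pandas as pd

    # Hypothetical required columns, including the interaction terms.
    REQUIRED = [
        "symbolic_density", "visual_ambiguity", "color_composition",
        "eye_tracking", "eeg_pattern", "idms",
        "cognitive_flexibility", "ethical_sensitivity",
        "density_x_ambiguity", "flex_x_ethics", "predicted_engagement",
    ]

    df = pd.read_csv("simulated_data_with_paintings.csv")
    missing = sorted(set(REQUIRED) - set(df.columns))
    assert not missing, f"Missing columns: {missing}"
    assert df[REQUIRED].notna().all().all(), "Unexpected NaN values found"
    print("Table structure OK:", df.shape)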
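Sketch 3 - Model training and evaluation. An illustrative training loop in the spirit of ML_analysis_artworks.py, using the Random Forest and Gradient Boosting models named above; the target column, train/test split, and reported metrics are assumptions.

    import pandas as pd
    from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split

    df = pd.read_csv("simulated_data_with_paintings.csv")
    X = df.drop(columns=["predicted_engagement"])  # assumed target column
    y = df["predicted_engagement"]
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2,
                                              random_state=42)

    for model in (RandomForestClassifier(random_state=42),
                  GradientBoostingClassifier(random_state=42)):
        model.fit(X_tr, y_tr)
        acc = accuracy_score(y_te, model.predict(X_te))
        print(type(model).__name__, f"accuracy={acc:.3f}")
        # Top feature importance values, as the analysis script reports.
        ranked = sorted(zip(X.columns, model.feature_importances_),
                        key=lambda t: -t[1])
        for name, imp in ranked[:5]:
            print(f"  {name}: {imp:.3f}")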
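Sketch 4 - Hyperparameter tuning. An illustrative search in the spirit of model_improvement.py. The choice of LightGBM (which is among the installed dependencies) and the parameter grid are assumptions.

    import pandas as pd
    from lightgbm import LGBMClassifier
    from sklearn.model_selection import GridSearchCV

    df = pd.read_csv("simulated_data_with_paintings.csv")
    X = df.drop(columns=["predicted_engagement"])
    y = df["predicted_engagement"]

    # Small illustrative grid; cross-validated accuracy guides the choice.
    grid = GridSearchCV(
        LGBMClassifier(random_state=42),
        param_grid={"n_estimators": [100, 300],
                    "learning_rate": [0.05, 0.1],
                    "num_leaves": [15, 31]},
        cv=5, scoring="accuracy",
    )
    grid.fit(X, y)
    print("Best params:", grid.best_params_)
    print(f"Best CV accuracy: {grid.best_score_:.3f}")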
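Sketch 5 - SHAP analysis and plots. An illustrative way to produce the SHAP summary plot mentioned in step 5, assuming the model and columns from the sketches above; a Gradient Boosting model is used here because its binary-classification SHAP values come back as a single array of log-odds contributions.

    import pandas as pd
    import shap
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.model_selection import train_test_split

    df = pd.read_csv("simulated_data_with_paintings.csv")
    X = df.drop(columns=["predicted_engagement"])
    y = df["predicted_engagement"]
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2,
                                              random_state=42)

    model = GradientBoostingClassifier(random_state=42).fit(X_tr, y_tr)
    explainer = shap.TreeExplainer(model)
    shap_values = explainer.shap_values(X_te)  # one row per test observation
    shap.summary_plot(shap_values, X_te)       # beeswarm summary plot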