Traffic conditions in the developing nations of Asia are highly heterogeneous owing to the presence of many vehicle types. The objectives of this paper are as follows:
  • To study road traffic capacity and delays at urban merging sections carrying a mixed traffic stream.
  • To analyse lateral and longitudinal vehicle interactions by including a vehicle-type-dependent factor.
  • To evaluate these interactions using data collected at a five-lane urban merging section by the video recording method.
  • To establish the relation between macroscopic parameters (speed, flow, density, occupancy) and microscopic parameters (lateral clearance, average gap, space headway, lateral movement duration), again considering the vehicle-type-dependent factor.
  • To analyse the effect of the overtaking characteristics of different vehicle types at merge sections under mixed traffic conditions.
The findings of this research support the operational analysis of merging locations on high-speed urban roads in Malaysia.
Data Types:
  • Other
  • Video
  • Tabular Data
  • Dataset
  • Document
A dataset for human counting in a room. The data consists of images captured by a CCTV camera in a room from which the outside of the room is also visible. Discriminating between in-room and out-of-room humans therefore poses a challenge for any model developed using this dataset.
Data Types:
  • Other
  • Dataset
  • File Set
The dataset is a set of network traffic traces in pcap/csv format captured from a single user. The traffic is classified into 5 activities (Video, Bulk, Idle, Web, and Interactive); the label is encoded in the filename. A separate file (mapping.csv) maps the host's IP address, the csv/pcap filename, and the activity label.
Activities:
  • Interactive: applications that perform real-time interactions to provide a suitable user experience, such as editing a file in Google Docs or remote CLI sessions over SSH.
  • Bulk data transfer: applications that transfer large files over the network, e.g. SCP/FTP applications and direct downloads of large files from web servers such as Mediafire, Dropbox, or the university repository.
  • Web browsing: all the traffic generated while searching and consuming different web pages, e.g. several blogs, news sites, and the university's Moodle.
  • Video playback: traffic from applications that consume video in streaming or pseudo-streaming. The best-known services used are Twitch and YouTube, but the university's online classroom was also used.
  • Idle behaviour: background traffic generated by the user's computer while the user is idle. This traffic was captured with every application closed and with some pages open (Google Docs, YouTube, and several other web pages), but always without user interaction.
The capture is performed by a network probe attached, via a SPAN port, to the router that forwards the user's traffic. The traffic is stored in pcap format with the full packet payload. In the csv files, every non-TCP/UDP packet is filtered out, as well as every packet with no payload. The fields in the csv files are (one line per packet): timestamp, protocol, payload size, source and destination IP address, and source and destination UDP/TCP port. The fields are also included as a header in every csv file.
The amount of data is as follows:
  • Bulk: 19 traces, 3599 s total duration, 8704 MBytes of pcap files
  • Video: 23 traces, 4496 s, 1405 MBytes
  • Web: 23 traces, 4203 s, 148 MBytes
  • Interactive: 42 traces, 8934 s, 30.5 MBytes
  • Idle: 52 traces, 6341 s, 0.69 MBytes
The code of our machine learning approach is also included; a README.txt file documents how to use it.
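As a minimal sketch of consuming one trace, assuming hypothetical file and column names (the real header strings are written at the top of each csv file), a trace can be loaded and summarized with pandas:

```python
# Minimal sketch: load one trace's csv and summarize its payload volume.
# The filename and the column names are assumptions for illustration;
# the actual header is included in every csv file and may differ.
import pandas as pd

trace = pd.read_csv("web_01.csv")  # hypothetical trace filename
print(trace.columns)               # inspect the real header names first

# Assuming columns named "protocol" and "payload_size":
print(trace.groupby("protocol")["payload_size"].sum())
```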
Data Types:
  • Other
  • Tabular Data
  • Dataset
  • File Set
This paper introduces a new dataset for the ground-based cloud image classification task. We name it ‘Cloud-ImVN 1.0’; it is an extension of the SWIMCAT database. The dataset contains 6 categories of cloud images, comprising 2,100 color images (150 × 150 pixels). Several tasks can be applied to this dataset, including classification, clustering, and segmentation, in both supervised and unsupervised learning contexts.
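As a minimal sketch, assuming the images unpack into one folder per cloud category (the actual archive layout may differ), the dataset could be loaded for supervised classification with torchvision:

```python
# Minimal sketch: load Cloud-ImVN 1.0 for classification, assuming a
# one-folder-per-category layout; the root path is hypothetical.
from torchvision import datasets, transforms

data = datasets.ImageFolder(
    "Cloud-ImVN-1.0/",                  # hypothetical root folder
    transform=transforms.Compose([
        transforms.Resize((150, 150)),  # images are 150 x 150 pixels
        transforms.ToTensor(),
    ]),
)
print(len(data), data.classes)  # expect 2,100 images in 6 categories
```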
Data Types:
  • Other
  • Dataset
  • File Set
There are four main folders in the project: code, data, models and logdir.

Data
This folder contains all the data used from the two studied locations: Loc.1 (latitude=40.4º, longitude=6.0º) and Loc.2 (latitude=39.99º, longitude=-0.06º). Sorted by year, month and day, each location has three kinds of data:
  • The files named with just a number are 151x151 irradiance-estimate matrices centered on the same location, obtained from http://msgcpp.knmi.nl. The spatial resolution is 0.03º in both latitude and longitude.
  • The files named Real_ are the irradiance measurements at the location.
  • The files named CopernicusClear_ are the clear-sky estimates from the CAMS McClear model.
Each file contains the 96 15-minute samples for one day, in Matlab format and UTC time.

Code
All the Python scripts used to train the neural networks and perform the forecasts. The main files are:
  • tf1.yml: list of the modules and versions used. A clean Anaconda environment created from this file can run all the code in the project.
  • learnRadiation.py: the script to train a new model. Change the variables “paper_model_name” and “location”: the first selects the kind of model to fit and the second the training location.
  • predictOnly.py: loads a trained model and performs the forecasts. Note that the model and location must match the ones used to train the model stored in the “training_path” folder.

Models
This folder contains all the trained models and their forecasting results. There is also a training folder that holds the last trained model.

Logdir
This folder stores TensorBoard files during training.

How to train and test a model
A new model can be trained using “learnRadiation.py”. This script has three parameters:
  • location: selects the location where the model will be trained (LOC1 or LOC2).
  • paper_model_name: sets the inputs to match the ones used in the models from the article.
  • training_path: the folder in which to save the trained model.
The “predictOnly.py” script then performs the forecasts. It is important to set the same parameters as in the “learnRadiation.py” script. This program generates the predictions and saves them in the model folder. It also plots some days, which can be modified at the bottom of the script.
For instance, for LOC2 and the model “TOA & all real” we would run:
“python learnRadiation.py TOAallreal LOC2 training”
This trains the neural network and saves the results in the folder models/training. After this, we would generate the results and plot some days using:
“python predictOnly.py TOAallreal LOC2 training”
This saves the forecasts and real values in the training folder and shows figures with 1- to 6-hour forecasts.
The models used in the article can also be evaluated by running predictOnly.py and targeting their folders. For instance, to evaluate the “TOA & all real” model used in the article:
“python predictOnly.py TOAallreal LOC2 RtoaAllReal”
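As a minimal sketch of reading one day of data in Python, assuming the files carry a .mat extension and an unknown internal variable name (the path below is hypothetical):

```python
# Minimal sketch: open one daily file and inspect its contents.
# The path and .mat extension are assumptions; each file holds the
# 96 15-minute samples for one day in Matlab format (UTC time).
from scipy.io import loadmat

day = loadmat("data/LOC1/2019/06/15/Real_20190615.mat")  # hypothetical path
for key, value in day.items():
    if not key.startswith("__"):  # skip Matlab metadata entries
        print(key, getattr(value, "shape", type(value)))
```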
Data Types:
  • Other
  • Software/Code
  • Dataset
  • Document
Integrated thermodynamic-volume database for the Ni-Al γ/γ′ binary phases
Data Types:
  • Other
  • Dataset
This study included 451 anonymized UWF and 745 FP images. The ultra-widefield (UWF) images, which include both normal and pathologic retinal images, were based on the Tsukazaki Optos Public Project. The traditional fundus photograph (FP) images were extracted from publicly accessible databases using Google Image Search and Google Dataset Search with English keywords related to the retina. The search strategy was based on the following key terms: “fundus photography”, “retinal image”, and “fundus dataset”. The images were manually reviewed by two board-certified ophthalmologists; blurred and low-quality images were removed to clarify the image domains, and duplicated images were also removed. Consequently, 451 images with artifacts and 745 images without artifacts were collected. The UWF images were cropped and masked after registration for CycleGAN.
Data Types:
  • Other
  • Dataset
  • File Set
Data for the manuscript "Influence of information presentation manner on risky choice".
Data Types:
  • Other
  • Software/Code
  • Tabular Data
  • Dataset
Future endeavors, research, computations, and empirical and experimental analyses that extend beyond this work are welcomed and encouraged. For the Excel dataset, please refer to the article titled “Corporate Cultures at the Forefront of MNE Organization Structures, Evaluating Cost and Profit Centers” for in-depth empirical findings. All data were collected from external observations and publicly disclosed Annual Reports, Sustainability Reports (CSRs), and SEC filings of 10-K and 10-Q forms. Earnings per share (EPS) were converted to U.S. dollars (USD). Refer specifically to the datasheet labeled “TEST” in the Excel file for the executive intervention analysis. For replication of the empirical analysis, i.e. the cultural intervention analysis and its respective forecasting and diagnostic figures, the Gretl statistical software was used; the corresponding Gretl session for conducting the IMA models is enclosed. Note that despite the few degrees of freedom, IMA models are not as susceptible to irrelevant-variable lag biases (overparameterization) as other ARIMA time-series models. As mentioned before, spuriousness within the EPS data points beyond the mid-1990s made them unsuitable for sound inference. Stronger robustness checks, such as structural-break or unit-root tests and Monte Carlo tests for the ARIMA(0,1,1) specification, should still be conducted. Nonetheless, the time-series assumptions were not violated, and the time-tested Box-Jenkins approach worked successfully. The findings suggest that EPS remains a sought-after, measurable, quantitative financial indicator for predicting organizational shifts as they relate to the profit and cost centers of multinational enterprises (MNEs), specifically truck manufacturers. Evidence suggests that EPS movements follow MNE executive policies, whether organizational restructuring of profit centers or the adoption of new accounting protocols that affect the corporate culture. Hence, the conclusions imply that shareholder ownership of MNEs and the pursuit of long-run operating profits distort executives' organizational decision-making, often resulting in negative externalities such as a cannibalization effect.
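For replication outside Gretl, a minimal sketch of fitting an IMA model, i.e. ARIMA(0,1,1), with Python's statsmodels; the EPS values below are hypothetical placeholders for the “TEST” datasheet:

```python
# Minimal sketch: fit an IMA(1,1) model, i.e. ARIMA(0,1,1), to an EPS
# series. The values are hypothetical placeholders; the article's
# analysis was performed in Gretl.
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

eps = pd.Series([2.1, 2.4, 2.2, 2.8, 3.0, 2.7, 3.3, 3.1])  # placeholder EPS

result = ARIMA(eps, order=(0, 1, 1)).fit()  # d=1 differencing, one MA term
print(result.summary())
print(result.forecast(steps=4))  # four-step-ahead forecast
```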
Data Types:
  • Other
  • Tabular Data
  • Dataset
The objective of this dataset is to present forage biomass production over time in different pasture management systems. We selected two farms located in the western region of São Paulo State, Brazil. Pasture field data collection was carried out on the two farms on three dates (June and November 2018 and March 2019) over two seasons (wet and dry). Samples were taken regularly over time to monitor forage biomass. These fields represent a wide variety of pasture management, as follows:
Farm 1 (Santa Clara):
  • traditional, low forage productivity, cattle rotation;
  • traditional, intermediate forage productivity, fertilized, cattle rotation;
  • intensified pasture, high forage productivity, reformed, cattle rotation.
Farm 2 (Poderosa):
  • traditional degraded*, recently reformed with millet + grass, cattle rotation;
  • traditional, low forage productivity, signs of degradation, fertilized, cattle rotation.
*Degradation was assessed by visual analysis of the pasture area, with sparse grass and exposed soil in some areas.
With the support of NDVI images from the MODIS sensor, sample pixels were used to allocate the sample points. The area of each pixel was divided into nine sampling points, and the forage biomass was collected at each of these points. Soil analyses were also carried out in two seasons (June 2018 and March 2019).
The data files are organized in three folders, each representing one field campaign. Each folder contains a shapefile of all the fields, the same file in kml format (to open in Google Earth), and a zip file with photographs of each field taken during the campaign. The attribute table of the shapefile contains a description of the fields and the biomass. Excel files show the same information as the attribute table, plus a description of the items. A figure with the template of the biomass collection scheme is also available. The soil analyses are in the folders 'June 2018' and 'March 2019'. A more detailed description and discussion of these data and their association with the soil chemical analysis is given in a scientific report (available on request).
The biomass collection allowed analysis of forage production and better diagnosis of resource utilization strategies across the different pasture systems. This work was funded by the São Paulo Research Foundation (process numbers 2018/10770-1, 2017/06037-4, 2016/08741-8, 2017/08970-0, 2018/11052-5 and 2014/26767-9) as part of the Global Sustainable Bioenergy Initiative.
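As a minimal sketch of inspecting one campaign's shapefile with geopandas (the path below is hypothetical, and the Excel files document the actual attribute names):

```python
# Minimal sketch: read one campaign's shapefile and list its attributes.
# The filename is a hypothetical placeholder.
import geopandas as gpd

fields = gpd.read_file("June 2018/fields.shp")  # hypothetical filename
print(fields.columns)  # attribute table: field descriptions and biomass
print(fields.head())
```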
Data Types:
  • Software/Code
  • Geospatial Data
  • Tabular Data
  • Dataset
  • Document
  • File Set