  • [0.7.2] - 2020-04-07
    Added
    • Added same step loggers' metrics aggregation (#1278)
    • Added parity test between a vanilla MNIST model and lightning model (#1284)
    • Added parity test between a vanilla RNN model and lightning model (#1351)
    • Added Reinforcement Learning - Deep Q-network (DQN) lightning example (#1232)
    • Added support for hierarchical dict (#1152)
    • Added TrainsLogger class (#1122)
    • Added type hints to pytorch_lightning.core (#946)
    • Added support for IterableDataset in validation and testing (#1104)
    • Added support for non-primitive types in hparams for TensorboardLogger (#1130)
    • Added a check that stops the training when loss or weights contain NaN or inf values (#1097)
    • Added support for IterableDataset when val_check_interval=1.0 (default); this will trigger validation at the end of each epoch (#1283)
    • Added summary method to Profilers (#1259)
    • Added informative errors if a user-defined dataloader has zero length (#1280)
    • Added testing for Python 3.8 (#915)
    • Added a training_epoch_end method which is the mirror of validation_epoch_end (#1357; illustrated in the sketch after this entry)
    • Added model configuration checking (#1199)
    • Added support for optimizer frequencies through LightningModule.configure_optimizers() (#1269)
    • Added option to run without an optimizer by returning None from configure_optimizers (#1279)
    • Added a warning when the number of data loader workers is small (#1378)
    Changed
    • Changed (renamed and refactored) TensorRunningMean -> TensorRunningAccum: running accumulations were generalized (#1278)
    • Changed progress_bar_refresh_rate trainer flag to disable the progress bar when set to 0 (#1108)
    • Enhanced load_from_checkpoint to also forward params to the model (#1307)
    • Updated references to self.forward() to instead use the __call__ interface (#1211)
    • Changed default behaviour of configure_optimizers to use no optimizer rather than Adam (#1279)
    • Allowed uploading models on W&B (#1339)
    • On DP and DDP2, unsqueeze is now automated (#1319)
    • Did not always create a DataLoader during reinstantiation, but the same type as before (if a subclass of DataLoader) (#1346)
    • Did not interfere with a default sampler (#1318)
    • Removed the default Adam optimizer (#1317)
    • Gave warnings for unimplemented required lightning methods (#1317)
    • Made the evaluate method private: Trainer._evaluate(...) (#1260)
    • Simplified the PL examples structure (shallower and more readable) (#1247)
    • Changed min/max GPU memory to be on their own plots (#1358)
    • Removed .item, which causes sync issues (#1254)
    • Changed smoothing in TQDM to decrease variability of time remaining between training/eval (#1194)
    • Changed the default logger to a dedicated one (#1064)
    Deprecated
    • Deprecated Trainer argument print_nan_grads (#1097)
    • Deprecated Trainer argument show_progress_bar (#1108)
    Removed
    • Removed duplicated module pytorch_lightning.utilities.arg_parse for loading CLI arguments (#1167)
    • Removed wandb logger's finalize method (#1193)
    • Dropped torchvision dependency in tests and added own MNIST dataset class instead (#986)
    Fixed
    • Fixed model_checkpoint when saving all models (#1359)
    • Fixed Trainer.add_argparse_args classmethod; it now adds a type for the arguments (#1147)
    • Fixed a bug related to type checking of ReduceLROnPlateau lr schedulers (#1114)
    • Fixed a bug to ensure lightning checkpoints are backward compatible (#1132)
    • Fixed a bug that created an extra dataloader with active reload_dataloaders_every_epoch (#1181)
    • Fixed all warnings and errors in the docs build process (#1191)
    • Fixed an issue where val_percent_check=0 would not disable validation (#1251)
    • Fixed average of incomplete TensorRunningMean (#1309)
    • Fixed WandbLogger.watch with wandb.init() (#1311)
    • Fixed an issue with early stopping that would prevent it from monitoring training metrics when validation is disabled / not implemented (#1235)
    • Fixed a bug that would cause trainer.test() to run on the validation set when overloading validation_epoch_end and test_end (#1353)
    • Fixed WandbLogger.watch - use of the watch method without importing wandb (#1311)
    • Fixed WandbLogger to be used with 'ddp' - allow reinits in sub-processes (#1149, #1360)
    • Made training_epoch_end behave like validation_epoch_end (#1357)
    • Fixed fast_dev_run running validation twice (#1365)
    • Fixed pickle error from quick patch __code__ (#1352)
    • Fixed memory leak on GPU0 (#1094, #1349)
    • Fixed checkpointing interval (#1272)
    • Fixed validation and training loops running on the partial dataset (#1192)
    • Fixed running on_validation_end only on the main process in DDP (#1125)
    • Fixed load_spawn_weights only in proc rank 0 (#1385)
    • Fixed use_amp issue (#1145)
    • Fixed using deprecated use_amp attribute (#1145)
    • Fixed TensorBoard logger error: lightning_logs directory does not exist in multi-node DDP on nodes with rank != 0 (#1375)
    • Fixed Unimplemented backend XLA error on TPU (#1387)
    Data Types:
    • Software/Code
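    Several of the changes above touch the core LightningModule API: training_epoch_end now mirrors validation_epoch_end (#1357), configure_optimizers no longer defaults to Adam and may return None (#1279, #1317), modules are called via __call__ rather than self.forward() (#1211), and progress_bar_refresh_rate=0 disables the progress bar (#1108). A minimal sketch of how those pieces fit together, assuming the 0.7.2-era API; the module, data, and hyperparameters here are invented for illustration:

        import torch
        from torch.utils.data import DataLoader, TensorDataset
        import pytorch_lightning as pl

        class ToyModule(pl.LightningModule):
            # Hypothetical module illustrating 0.7.2-era hooks.
            def __init__(self):
                super().__init__()
                self.layer = torch.nn.Linear(32, 2)

            def forward(self, x):
                return self.layer(x)

            def training_step(self, batch, batch_idx):
                x, y = batch
                # Call the module via __call__, not self.forward() (#1211).
                loss = torch.nn.functional.cross_entropy(self(x), y)
                return {'loss': loss}

            def training_epoch_end(self, outputs):
                # New in 0.7.2 (#1357): the training mirror of validation_epoch_end.
                avg = torch.stack([o['loss'] for o in outputs]).mean()
                return {'log': {'train_loss_epoch': avg}}

            def configure_optimizers(self):
                # 0.7.2 removed the implicit Adam default (#1317, #1279);
                # returning None here would now run training without an optimizer.
                return torch.optim.SGD(self.parameters(), lr=0.02)

            def train_dataloader(self):
                ds = TensorDataset(torch.randn(64, 32), torch.randint(0, 2, (64,)))
                return DataLoader(ds, batch_size=16)

        # progress_bar_refresh_rate=0 disables the progress bar (#1108).
        trainer = pl.Trainer(max_epochs=1, progress_bar_refresh_rate=0)
        trainer.fit(ToyModule())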
  • The large diversity of cell types in the brain provides the means by which circuits perform complex operations, and understanding this diversity is one of the key challenges of modern neuroscience. Neurons have many distinctive electrophysiological and behavioral features from which cell-type classification can be inferred. To leverage this, we built the Cell Explorer, a framework for analyzing and characterizing single cells recorded with extracellular electrodes. It consists of three components: a standardized yet flexible data structure, a single but extensive processing script, and a powerful graphical interface. The processing script builds a high-dimensional representation from electrophysiological and functional features, including the spike waveform, spiking statistics, monosynaptic connections, and behavioral spiking dynamics. The user-friendly interactive graphical interface allows these features to be classified and explored through a rich set of built-in plots, interaction modes, cell groupings, and filters, and publication-quality figures can be created. Opto-tagged cells and public access to reference data have been incorporated to help you characterize your data. The framework is built entirely in MATLAB, making it fast and intuitive to incorporate the Cell Explorer into your pipelines and analysis scripts, and you can extend it with your own metrics, plots, and opto-tagged data. Official 1.2 release. (An illustrative feature-computation sketch follows this entry.)
    Data Types:
    • Software/Code
    • File Set
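    The Cell Explorer itself is MATLAB, so no Python API is implied here; purely to illustrate the kind of spike-waveform feature its processing script derives (trough-to-peak duration, a classic cell-type feature), a self-contained sketch with an invented toy waveform:

        import numpy as np

        def trough_to_peak_ms(mean_waveform: np.ndarray, fs_hz: float) -> float:
            # Trough-to-peak duration of an average spike waveform, in ms.
            trough = int(np.argmin(mean_waveform))                   # spike trough sample
            peak = trough + int(np.argmax(mean_waveform[trough:]))   # later repolarization peak
            return (peak - trough) / fs_hz * 1000.0

        # Toy biphasic waveform, roughly 20 kHz sampling (values are invented).
        t = np.linspace(0, 2e-3, 40)
        wf = (-np.exp(-((t - 0.5e-3) ** 2) / (2 * (0.1e-3) ** 2))
              + 0.4 * np.exp(-((t - 1.1e-3) ** 2) / (2 * (0.2e-3) ** 2)))
        print(trough_to_peak_ms(wf, fs_hz=20_000))  # ~0.6 ms for this toy waveform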
  • Tools for interacting with the publicly available California Delta Fish Salvage Database, including continuous deployment of data access, analysis, and presentation.
    Data Types:
    • Software/Code
  • 13.13.0 (2020-04-08) Features vtkImageMapper: support coincident topology (8f73a76)
    Data Types:
    • Software/Code
  • The bids-validator is a software tool that checks neuroimaging datasets for adherence to the Brain Imaging Data Structure (BIDS) format. More information about BIDS can be found at bids.neuroimaging.io. (A minimal invocation sketch follows this entry.)
    Data Types:
    • Software/Code
    • File Set
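    As a usage sketch: the validator ships as a Node CLI, invoked here from Python only to match the other examples on this page. This assumes the CLI is on PATH (e.g. installed via npm); the dataset path is a placeholder:

        import subprocess

        # Run the bids-validator CLI against a dataset directory (placeholder path).
        # --json asks for a machine-readable report instead of the console summary.
        result = subprocess.run(
            ["bids-validator", "/path/to/bids_dataset", "--json"],
            capture_output=True, text=True,
        )
        print(result.stdout)  # JSON report of errors and warnings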
  • Pequenos Cientistas em Casa ("Little Scientists at Home"): follow the activities that the Centro Ciência Viva do Algarve publishes so you can do them at home. Have fun as a family doing our activities, and share the results with us through photos or video. #PequenosCientistasEmCasa #PequenosCientistasCCVAlg TODAY: Do you know the infralittoral aquarium of the Ria Formosa? Discover it and make your own. ATTENTION!! Do not leave home to acquire any of these materials. If you do not have them all, wait for our next challenge! #PequenosCientistasEmCasa #PequenosCientistasCCVAlg #ccvalg #cienciaviva #cienciavivaemcasa #ficaemcasa #aquario #riaformosa #inframarial
    Data Types:
    • Video
  • Code and data for the published article. We develop differentially private methods for estimating various distributional properties. Given a sample from a discrete distribution p, some functional f, and accuracy and privacy parameters alpha and epsilon, the goal is to estimate f(p) up to accuracy alpha while maintaining epsilon-differential privacy of the sample. We prove almost-tight bounds on the sample size required for this problem for several functionals of interest, including support size, support coverage, and entropy. We show that the cost of privacy is negligible in a variety of settings, both theoretically and experimentally. Our methods are based on a sensitivity analysis of several state-of-the-art methods for estimating these properties with sublinear sample complexities. (A minimal sketch of the sensitivity-calibration idea follows this entry.)
    Data Types:
    • Software/Code
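    A minimal sketch of the general recipe the abstract describes (estimate a functional, then add noise calibrated to its sensitivity and epsilon), applied to the plug-in entropy estimator with a standard, loose sensitivity bound. This is illustrative only, not the paper's refined estimator; the function name and the bound used are our assumptions:

        import numpy as np

        def private_entropy(sample: np.ndarray, epsilon: float) -> float:
            # Plug-in entropy plus Laplace noise scaled to sensitivity/epsilon.
            n = len(sample)
            _, counts = np.unique(sample, return_counts=True)
            p_hat = counts / n
            plug_in = -np.sum(p_hat * np.log(p_hat))  # empirical (plug-in) entropy
            # Swapping one sample changes two counts by 1 each; a commonly used
            # (loose) bound on the resulting change in plug-in entropy is
            # about 2 * (log n + 1) / n.
            sensitivity = 2.0 * (np.log(n) + 1.0) / n
            return plug_in + np.random.laplace(scale=sensitivity / epsilon)

        sample = np.random.default_rng(0).integers(0, 50, size=10_000)
        print(private_entropy(sample, epsilon=1.0))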
  • This release accompanies the final, post-acceptance resubmission of the paper "Frequency, Acceptability, and Selection: A case study of clause-embedding" to Glossa.
    Data Types:
    • Software/Code
  • Analytic pipeline for Mahgoub et al. 2020 (Zcwpw1)
    Data Types:
    • Software/Code
  • Repository for the paper "Distributed Nonlinear Control Design using Separable Control Contraction Metrics"
    Data Types:
    • Software/Code
    • File Set