Commits
46 commits
55a7561
docs: configure Sphinx for NumPy-style docstrings and API docs
claude Jan 25, 2026
e76a773
docs: add autodoc mock imports and API reference page
claude Jan 25, 2026
9851e7c
docs: add comprehensive codebase analysis document
claude Jan 25, 2026
63bd81d
docs: add NumPy-style docstrings to public API (Priority 1)
claude Jan 25, 2026
4ecd8fc
docs: remove codebase analysis file (saved locally)
claude Jan 25, 2026
e2d9715
chore: ignore local TODO_AND_ISSUES.md
claude Jan 25, 2026
5dd156d
fix: resolve Sphinx docstring formatting errors
claude Jan 25, 2026
cdc4cab
docs: add NumPy-style docstrings to core classes (Priority 2)
claude Jan 25, 2026
38f68b8
fix: remove duplicate device_names from Attributes section
claude Jan 25, 2026
8c7c3c1
docs: configure Sphinx for NumPy-style docstrings and API docs
claude Jan 25, 2026
da06202
docs: add autodoc mock imports and API reference page
claude Jan 25, 2026
028147e
docs: add comprehensive codebase analysis document
claude Jan 25, 2026
edf157b
docs: add NumPy-style docstrings to public API (Priority 1)
claude Jan 25, 2026
852f600
docs: remove codebase analysis file (saved locally)
claude Jan 25, 2026
ccc361b
chore: ignore local TODO_AND_ISSUES.md
claude Jan 25, 2026
92fbd14
fix: resolve Sphinx docstring formatting errors
claude Jan 25, 2026
41f5d6c
docs: add NumPy-style docstrings to core classes (Priority 2)
claude Jan 25, 2026
5a278fd
fix: remove duplicate device_names from Attributes section
claude Jan 25, 2026
45e19ef
minor changes in docstrings.
oliveira-caio Jan 25, 2026
4a24327
Merge branch 'claude/add-docstrings-KhA9Q' of github.com:oliveira-cai…
oliveira-caio Jan 25, 2026
acbcb20
minor changes in docstrings.
oliveira-caio Jan 25, 2026
3654d80
docs: convert remaining docstrings to NumPy style
claude Jan 25, 2026
d44017d
docs: restructure API with autosummary for separate pages
claude Jan 25, 2026
0491519
docs: move configuration to dedicated page with default.yaml content
claude Jan 25, 2026
79bed6c
docs: rename API page to avoid duplicate sidebar entry
claude Jan 25, 2026
9019d34
docs: rename toctree caption to Reference to avoid confusion
claude Jan 25, 2026
d34fbee
minor spelling changes.
oliveira-caio Jan 25, 2026
97d194b
docs: rewrite README with professional structure
claude Jan 25, 2026
26aa980
modified readme.
oliveira-caio Jan 25, 2026
8d093e7
removed local files from gitignore.
oliveira-caio Jan 25, 2026
6dd5782
added docs/source/generated back to gitignore.
oliveira-caio Jan 25, 2026
0c2576e
ran black.
oliveira-caio Jan 25, 2026
2ef1499
fixed minor issues with the pyproject.toml
oliveira-caio Jan 29, 2026
9c86b1e
minor fixes in the docstrings and readme.
oliveira-caio Jan 29, 2026
313bd9b
docs: add NumPy-style docstring to nan_filter and clean up comments
claude Jan 29, 2026
3a00795
docs: add inline comments back to nan_filter implementation
claude Jan 29, 2026
427654c
docs: merge configuration reference into tutorials page
claude Jan 29, 2026
25b60fe
docs: replace dataloader options with link to PyTorch docs
claude Jan 29, 2026
3975057
minor changes in the demo_configs.rst
oliveira-caio Jan 29, 2026
138f058
Merge branch 'claude/add-docstrings-KhA9Q' of github.com:oliveira-cai…
oliveira-caio Jan 29, 2026
d1eaffa
docs: refactor interpolator docstrings
claude Jan 29, 2026
0be8971
docs: add Sphinx cross-references to tutorial pages
claude Jan 29, 2026
326b050
Merge branch 'claude/add-docstrings-KhA9Q' of github.com:oliveira-cai…
oliveira-caio Jan 29, 2026
ee749f8
minor fix to the pip install.
oliveira-caio Jan 29, 2026
8242d84
ran black.
oliveira-caio Jan 29, 2026
b4a6058
Merge branch 'main' into claude/add-docstrings-KhA9Q
pollytur Mar 2, 2026
2 changes: 2 additions & 0 deletions .gitignore
Original file line number Diff line number Diff line change
@@ -163,3 +163,5 @@ cython_debug/

*.sif
*.bak

docs/source/generated/
91 changes: 79 additions & 12 deletions README.md
@@ -1,31 +1,98 @@
# Experanto

Experanto is a Python package designed for interpolating recordings and stimuli in neuroscience experiments. It enables users to load single or multiple experiments and create efficient dataloaders for machine learning applications.

## Features

- **Unified Experiment Interface**: Load and query multi-modal neuroscience data (neural responses, eye tracking, treadmill, visual stimuli) through a single `Experiment` class
- **Flexible Interpolation**: Interpolate data at arbitrary time points with support for linear and nearest-neighbor methods
- **Multi-Session Support**: Combine data from multiple recording sessions into a single dataloader
- **Configurable Preprocessing**: YAML-based configuration for sampling rates, normalization, transforms, and filtering
- **PyTorch Integration**: Native PyTorch `Dataset` and `DataLoader` implementations optimized for training

## Docs
[![Docs](https://readthedocs.org/projects/experanto/badge/?version=latest)](https://experanto.readthedocs.io/)

## Installation
To install Experanto, clone locally and run:

```bash
pip install -e /path_to/experanto
git clone https://github.com/sensorium-competition/experanto.git
cd experanto
pip install -e .
```

To replicate the `generate_sample` example, install:
### Note

To replicate the `generate_sample` example, use the following command (see [allen_exporter](https://github.com/sensorium-competition/allen-exporter)):

```bash
pip install -e /path_to/allen_exporter
pip install -e /path/to/allen_exporter
```
(Repository: [allen_exporter](https://github.com/sensorium-competition/allen-exporter))

To replicate the `sensorium_example`, also install the following with their dependencies:
To replicate the `sensorium_example` (see [sensorium_2023](https://github.com/ecker-lab/sensorium_2023)), install neuralpredictors (see [neuralpredictors](https://github.com/sinzlab/neuralpredictors)) as well:

```bash
pip install -e /path_to/neuralpredictors
pip install -e /path/to/neuralpredictors
pip install -e /path/to/sensorium_2023
```
(Repository: [neuralpredictors](https://github.com/sinzlab/neuralpredictors))

```bash
pip install -e /path_to/sensorium_2023
```
## Quick Start

### Loading an Experiment

```python
from experanto.experiment import Experiment

# Load a single experiment
exp = Experiment("/path/to/experiment")

# Query data at specific time points
import numpy as np
times = np.linspace(0, 10, 100) # 100 time points over 10 seconds

# Get interpolated data and a boolean mask with valid time points from all devices
data, valid = exp.interpolate(times)

# Or from a specific device
responses, valid = exp.interpolate(times, device="responses")
```

### Configuration

Experanto uses YAML configuration files. See `configs/default.yaml` for all options:

```yaml
dataset:
modality_config:
responses:
sampling_rate: 8
chunk_size: 16
transforms:
normalization: "standardize"
screen:
sampling_rate: 30
chunk_size: 60
transforms:
normalization: "normalize"

dataloader:
batch_size: 16
num_workers: 2
```
(Repository: [sensorium_2023](https://github.com/ecker-lab/sensorium_2023))
Ensure you replace `/path_to/` with the actual path to the cloned repositories.
## Documentation
Full documentation is available at [Read the Docs](https://experanto.readthedocs.io/).
- [Installation Guide](https://experanto.readthedocs.io/en/latest/concepts/installation.html)
- [Getting Started](https://experanto.readthedocs.io/en/latest/concepts/getting_started.html)
- [API Reference](https://experanto.readthedocs.io/en/latest/api.html)
- [Configuration Options](https://experanto.readthedocs.io/en/latest/configuration.html)
## Contributing
Contributions are welcome! Please open an issue or submit a pull request on [GitHub](https://github.com/sensorium-competition/experanto).
## License
This project is licensed under the MIT License. See the [LICENSE](LICENSE) file for details.
Empty file added docs/source/_static/.gitkeep
Empty file.
31 changes: 31 additions & 0 deletions docs/source/_templates/custom-class-template.rst
@@ -0,0 +1,31 @@
{{ fullname | escape | underline}}

.. currentmodule:: {{ module }}

.. autoclass:: {{ objname }}
:members:
:undoc-members:
:show-inheritance:
:inherited-members:

{% block methods %}
{% if methods %}
.. rubric:: Methods

.. autosummary::
{% for item in methods %}
~{{ name }}.{{ item }}
{%- endfor %}
{% endif %}
{% endblock %}

{% block attributes %}
{% if attributes %}
.. rubric:: Attributes

.. autosummary::
{% for item in attributes %}
~{{ name }}.{{ item }}
{%- endfor %}
{% endif %}
{% endblock %}
96 changes: 96 additions & 0 deletions docs/source/api.rst
@@ -0,0 +1,96 @@
Classes and functions
=====================

This section documents all public classes and functions in Experanto.

Core Classes
------------

.. autosummary::
:toctree: generated
:template: custom-class-template.rst
:nosignatures:

experanto.experiment.Experiment
experanto.datasets.ChunkDataset

Interpolators
-------------

.. autosummary::
:toctree: generated
:template: custom-class-template.rst
:nosignatures:

experanto.interpolators.Interpolator
experanto.interpolators.SequenceInterpolator
experanto.interpolators.PhaseShiftedSequenceInterpolator
experanto.interpolators.ScreenInterpolator
experanto.interpolators.TimeIntervalInterpolator
experanto.interpolators.ScreenTrial
experanto.interpolators.ImageTrial
experanto.interpolators.VideoTrial
experanto.interpolators.BlankTrial
experanto.interpolators.InvalidTrial

Time Intervals
--------------

.. autosummary::
:toctree: generated
:template: custom-class-template.rst
:nosignatures:

experanto.intervals.TimeInterval

.. autosummary::
:toctree: generated
:nosignatures:

experanto.intervals.uniquefy_interval_array
experanto.intervals.find_intersection_between_two_interval_arrays
experanto.intervals.find_intersection_across_arrays_of_intervals
experanto.intervals.find_union_across_arrays_of_intervals
experanto.intervals.find_complement_of_interval_array
experanto.intervals.get_stats_for_valid_interval

Dataloaders
-----------

.. autosummary::
:toctree: generated
:nosignatures:

experanto.dataloaders.get_multisession_dataloader
experanto.dataloaders.get_multisession_concat_dataloader

Utilities
---------

.. autosummary::
:toctree: generated
:template: custom-class-template.rst
:nosignatures:

experanto.utils.LongCycler
experanto.utils.ShortCycler
experanto.utils.FastSessionDataLoader
experanto.utils.MultiEpochsDataLoader
experanto.utils.SessionConcatDataset
experanto.utils.SessionBatchSampler
experanto.utils.SessionSpecificSampler

.. autosummary::
:toctree: generated
:nosignatures:

experanto.utils.add_behavior_as_channels

Filters
-------

.. autosummary::
:toctree: generated
:nosignatures:

experanto.filters.common_filters.nan_filter
64 changes: 64 additions & 0 deletions docs/source/concepts/demo_configs.rst
@@ -114,3 +114,67 @@ You can change parameters programmatically:
cfg.dataset.modality_config.screen.include_blanks = True
cfg.dataset.modality_config.screen.valid_condition = {"tier": "train"}
cfg.dataloader.num_workers = 8


Configuration options
^^^^^^^^^^^^^^^^^^^^^

Dataset options
"""""""""""""""

``global_sampling_rate``
Override sampling rate for all modalities. Set to ``null`` to use per-modality rates.
Review comment (Collaborator, on lines +125 to +126): Why null and not None in these examples?


``global_chunk_size``
Override chunk size for all modalities. Set to ``null`` to use per-modality sizes.

``add_behavior_as_channels``
If ``True``, concatenate behavioral data (e.g., eye tracker, treadmill) as
additional channels to the screen data.

``replace_nans_with_means``
If ``True``, replace NaN values with the mean of non-NaN values.

``cache_data``
If ``True``, cache interpolated data in memory for faster access.

``out_keys``
List of modality keys to include in the output dictionary.

``normalize_timestamps``
If ``True``, normalize timestamps to start from 0.

Modality options
""""""""""""""""

Each modality (e.g., screen, responses, eye_tracker, treadmill) supports:

``keep_nans``
Whether to keep NaN values in the output.

``sampling_rate``
Sampling rate in Hz for this modality.

``chunk_size``
Number of samples per chunk.
Review comment (Member): Not sure myself, but maybe "data points" or "time steps" is better than "samples" here.


``offset``
Time offset in seconds relative to the screen timestamps.

``transforms``
Dictionary of transforms to apply. Supports ``"normalize"`` (0-1 scaling)
and ``"standardize"`` (z-score normalization).
Review comment (Member): Actually supports also custom transforms, defined as dicts containing key ``_target_`` whose value contains a class + optional arguments as further key-value pairs. E.g., something like this:

"SelectOdorProperties": {
    "_target_": "odor_model.dataset_transforms.SelectOdorTrialPropertiesAsModalitiesTransform",
    "properties": ["graph"],
}


``interpolation``
Interpolation settings including ``interpolation_mode`` (``"linear"`` or
``"nearest_neighbor"``).

Review comment (Collaborator): Maybe mention that this is only for Sequence interpolators.

``filters``
Dictionary of filter functions to apply to the data.

Dataloader options
""""""""""""""""""

All standard ``torch.utils.data.DataLoader`` options are supported. See the
`PyTorch DataLoader documentation <https://pytorch.org/docs/stable/data.html#torch.utils.data.DataLoader>`_
for the full list of available parameters.
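Taken together, the options documented above can be collected in a single YAML file. The fragment below is an illustrative sketch only: the option names come from the lists above, while the concrete values are hypothetical (the ``responses``/``screen`` numbers mirror the excerpt of ``configs/default.yaml`` shown in the README).

```yaml
dataset:
  global_sampling_rate: null     # null: use per-modality rates
  global_chunk_size: null        # null: use per-modality sizes
  add_behavior_as_channels: false
  replace_nans_with_means: false
  cache_data: true
  normalize_timestamps: true
  modality_config:
    responses:
      keep_nans: false
      sampling_rate: 8
      chunk_size: 16
      offset: 0.0
      transforms:
        normalization: "standardize"
      interpolation:
        interpolation_mode: "nearest_neighbor"
    screen:
      sampling_rate: 30
      chunk_size: 60
      transforms:
        normalization: "normalize"

dataloader:
  batch_size: 16
  num_workers: 2
```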
4 changes: 2 additions & 2 deletions docs/source/concepts/demo_dataset.rst
@@ -4,7 +4,7 @@
Loading a dataset object
========================

Dataset objects organize experimental data (from **Experiment** class) for machine learning tasks, offering project-specific and configurable access for training and evaluation. They often serve as a source for creating **dataloaders**.
Dataset objects organize experimental data (from the :class:`~experanto.experiment.Experiment` class) for machine learning tasks, offering project-specific and configurable access for training and evaluation. They often serve as a source for creating dataloaders (see :func:`~experanto.dataloaders.get_multisession_dataloader`).

Key features of dataset objects
-------------------------------
@@ -81,7 +81,7 @@ This will output something like:
Defining dataloaders
---------------------
Once the dataset is verified, we can define **DataLoader** objects for training or other purposes. This allows easy batch processing during training:
Once the dataset is verified, we can define `DataLoader <https://pytorch.org/docs/stable/data.html#torch.utils.data.DataLoader>`_ objects for training or other purposes. This allows easy batch processing during training:

.. code-block:: python
4 changes: 2 additions & 2 deletions docs/source/concepts/demo_experiment.rst
@@ -3,7 +3,7 @@
Loading a single experiment
===========================

To load an experiment, we use the **Experiment** class. This is particularly useful for testing whether the formatting and interpolation behave as expected before loading multiple experiments into dataset objects.
To load an experiment, we use the :class:`~experanto.experiment.Experiment` class. This is particularly useful for testing whether the formatting and interpolation behave as expected before loading multiple experiments into dataset objects.

Loading an experiment
---------------------
@@ -34,7 +34,7 @@ All compatible modalities for the loaded experiment can be checked using:

Interpolating data
------------------
Once the modalities are identified, we can interpolate their data.
Once the modalities are identified, we can interpolate their data using :meth:`~experanto.experiment.Experiment.interpolate`.
The following example interpolates a 20-second window with 2 frames per second, resulting in 40 images:

.. code-block:: python
6 changes: 3 additions & 3 deletions docs/source/concepts/demo_multisession.rst
@@ -1,14 +1,14 @@
Loading multiple sessions
=========================

To load multiple sessions at once, you can use the ``get_multisession_dataloader`` function from ``experanto.dataloaders``.
To load multiple sessions at once, you can use :func:`~experanto.dataloaders.get_multisession_dataloader`.

This function takes:

- A list of paths pointing to your experiment directories
- A configuration dictionary, similar to the one used for loading a single dataset

It returns a dictionary of ``MultiEpochsDataLoader`` objects, each corresponding to a session, loaded with the specified configurations.
It returns a dictionary of :class:`~experanto.utils.MultiEpochsDataLoader` objects, each corresponding to a session, loaded with the specified configurations.

Example
-------
@@ -30,4 +30,4 @@ Example
# Load first two sessions
train_dl = get_multisession_dataloader(full_paths[:2], cfg)

The returned ``train_dl`` is a dictionary containing two ``MultiEpochsDataLoader`` objects which can be used for training.
The returned ``train_dl`` is a dictionary containing two :class:`~experanto.utils.MultiEpochsDataLoader` objects which can be used for training.
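To make the dict-of-dataloaders pattern concrete, here is a minimal, self-contained sketch of how the returned dictionary is typically consumed. Plain lists stand in for the :class:`~experanto.utils.MultiEpochsDataLoader` objects so the snippet runs without experanto installed, and the session names and batch contents are hypothetical.

```python
# Stand-in for the dict returned by get_multisession_dataloader:
# keys are session names, values behave like iterables of batches.
train_dl = {
    "session_a": [{"screen": "frames_a0", "responses": "resp_a0"}],
    "session_b": [{"screen": "frames_b0", "responses": "resp_b0"}],
}

seen = []
for session_name, loader in train_dl.items():
    for batch in loader:
        # In experanto, each batch is a dict keyed by modality.
        seen.append((session_name, sorted(batch)))
```

A training loop would replace the ``seen.append`` bookkeeping with a forward/backward pass, typically routing each batch to session-specific model components via ``session_name``.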
2 changes: 1 addition & 1 deletion docs/source/concepts/installation.rst
@@ -11,5 +11,5 @@ The package works on top of `jupyter/datascience-notebooks`, but the minimum req

to install the package, clone it into a local repository and run::

pip -e install /path_to_folder/experanto
pip install -e /path/to/folder/experanto
