datalad.api.crawl

datalad.api.crawl(path=None, is_pipeline=False, is_template=False, recursive=False, chdir=None)

Crawl an online resource to create or update a dataset.

Examples

$ datalad crawl # within a dataset having .datalad/crawl/crawl.cfg

Parameters:
  • path (str or None, optional) – configuration file (or pipeline file, if --is-pipeline is set) defining the crawling, or a directory of a dataset on which to perform crawling using its standard crawling specification. [Default: None]
  • is_pipeline (bool, optional) – flag indicating that the provided file is a Python script which defines pipeline(). [Default: False]
  • is_template (bool, optional) – flag indicating that the provided value is the name of a template to use. [Default: False]
  • recursive (bool, optional) – flag to also crawl subdatasets (serially, for now). [Default: False]
  • chdir (str or None, optional) – directory to change into before crawling. [Default: None]
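A minimal Python-API usage sketch, mirroring the shell example above. It assumes DataLad with the crawler extension is installed and that the current directory is a dataset containing .datalad/crawl/crawl.cfg; the import is guarded so the snippet degrades gracefully when DataLad is absent.

```python
# Usage sketch (assumption: DataLad with the crawler extension is
# installed, and the current working directory is a dataset that has
# a .datalad/crawl/crawl.cfg crawling specification).
try:
    from datalad.api import crawl
except ImportError:
    crawl = None  # DataLad crawler extension not available

if crawl is not None:
    # Update this dataset from its online source, descending into
    # subdatasets as well (processed serially, per the docs above).
    crawl(recursive=True)
```

Passing no path, as here, corresponds to running `datalad crawl` from within the dataset, so the standard crawling specification is picked up automatically.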