cloudos-cli 2.24.0__tar.gz → 2.26.0__tar.gz

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (29)
  1. {cloudos_cli-2.24.0 → cloudos_cli-2.26.0}/PKG-INFO +23 -110
  2. {cloudos_cli-2.24.0 → cloudos_cli-2.26.0}/README.md +22 -109
  3. {cloudos_cli-2.24.0 → cloudos_cli-2.26.0}/cloudos_cli/__main__.py +148 -45
  4. cloudos_cli-2.26.0/cloudos_cli/_version.py +1 -0
  5. cloudos_cli-2.26.0/cloudos_cli/datasets/__init__.py +8 -0
  6. cloudos_cli-2.26.0/cloudos_cli/datasets/datasets.py +322 -0
  7. {cloudos_cli-2.24.0 → cloudos_cli-2.26.0}/cloudos_cli/utils/errors.py +12 -0
  8. {cloudos_cli-2.24.0 → cloudos_cli-2.26.0}/cloudos_cli.egg-info/PKG-INFO +23 -110
  9. {cloudos_cli-2.24.0 → cloudos_cli-2.26.0}/cloudos_cli.egg-info/SOURCES.txt +2 -0
  10. cloudos_cli-2.24.0/cloudos_cli/_version.py +0 -1
  11. {cloudos_cli-2.24.0 → cloudos_cli-2.26.0}/LICENSE +0 -0
  12. {cloudos_cli-2.24.0 → cloudos_cli-2.26.0}/cloudos_cli/__init__.py +0 -0
  13. {cloudos_cli-2.24.0 → cloudos_cli-2.26.0}/cloudos_cli/clos.py +0 -0
  14. {cloudos_cli-2.24.0 → cloudos_cli-2.26.0}/cloudos_cli/configure/__init__.py +0 -0
  15. {cloudos_cli-2.24.0 → cloudos_cli-2.26.0}/cloudos_cli/configure/configure.py +0 -0
  16. {cloudos_cli-2.24.0 → cloudos_cli-2.26.0}/cloudos_cli/jobs/__init__.py +0 -0
  17. {cloudos_cli-2.24.0 → cloudos_cli-2.26.0}/cloudos_cli/jobs/job.py +0 -0
  18. {cloudos_cli-2.24.0 → cloudos_cli-2.26.0}/cloudos_cli/queue/__init__.py +0 -0
  19. {cloudos_cli-2.24.0 → cloudos_cli-2.26.0}/cloudos_cli/queue/queue.py +0 -0
  20. {cloudos_cli-2.24.0 → cloudos_cli-2.26.0}/cloudos_cli/utils/__init__.py +0 -0
  21. {cloudos_cli-2.24.0 → cloudos_cli-2.26.0}/cloudos_cli/utils/requests.py +0 -0
  22. {cloudos_cli-2.24.0 → cloudos_cli-2.26.0}/cloudos_cli.egg-info/dependency_links.txt +0 -0
  23. {cloudos_cli-2.24.0 → cloudos_cli-2.26.0}/cloudos_cli.egg-info/entry_points.txt +0 -0
  24. {cloudos_cli-2.24.0 → cloudos_cli-2.26.0}/cloudos_cli.egg-info/requires.txt +0 -0
  25. {cloudos_cli-2.24.0 → cloudos_cli-2.26.0}/cloudos_cli.egg-info/top_level.txt +0 -0
  26. {cloudos_cli-2.24.0 → cloudos_cli-2.26.0}/setup.cfg +0 -0
  27. {cloudos_cli-2.24.0 → cloudos_cli-2.26.0}/setup.py +0 -0
  28. {cloudos_cli-2.24.0 → cloudos_cli-2.26.0}/tests/__init__.py +0 -0
  29. {cloudos_cli-2.24.0 → cloudos_cli-2.26.0}/tests/functions_for_pytest.py +0 -0
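The hunks below use standard unified-diff notation. The same notation can be generated locally with Python's stdlib `difflib`; a minimal sketch (the file contents here are illustrative stand-ins for `cloudos_cli/_version.py`, not read from the real sdists):

```python
import difflib

# Hypothetical contents of the version file in each released sdist.
old = ["__version__ = '2.24.0'\n"]
new = ["__version__ = '2.26.0'\n"]

# unified_diff yields the ---/+++ headers, the @@ hunk header, and
# the -/+ change lines, just like the hunks shown in this diff.
diff = list(difflib.unified_diff(
    old, new,
    fromfile="cloudos_cli-2.24.0/cloudos_cli/_version.py",
    tofile="cloudos_cli-2.26.0/cloudos_cli/_version.py"))
print("".join(diff))
```

Running the same comparison over two extracted sdist trees (e.g. with `filecmp` plus `difflib`) reproduces a per-file summary like the table above.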
@@ -1,6 +1,6 @@
  Metadata-Version: 2.4
  Name: cloudos_cli
- Version: 2.24.0
+ Version: 2.26.0
  Summary: Python package for interacting with CloudOS
  Home-page: https://github.com/lifebit-ai/cloudos-cli
  Author: David Piñeyro
@@ -632,91 +632,8 @@ The collected workflows are those that can be found in "WORKSPACE TOOLS" section
  You can import new workflows to your CloudOS workspaces. The only requirements are:

  - The workflow is a Nextflow pipeline.
- - The workflow repository is located at GitHub or Bitbucket server.
+ - The workflow repository is located at GitHub or GitLab (specified by the option `--platform`. Available options: `github`, `gitlab`)
  - If your repository is private, you have access to the repository and you have linked your GitHub or Bitbucket server accounts to CloudOS.
- - You have got the `repository_id` and the `repository_project_id`.
-
- **How to get `repository_id` and `repository_project_id` from a GitHub repository**
-
- **Option 1: searching in the page source code**
-
- 1. Go to the repository URL. Click on the right button of your mouse to get the following menu and click on "View Page Source".
-
- ![Github Repo right click](docs/github_right_click.png)
-
- 2. For collecting the `repository_project_id`, search for `octolytics-dimension-user_id` string in the source code. The `content` value is your `repository_project_id` (`30871219` in the example image).
-
- ![Github Repo owner id](docs/github_user_id.png)
-
- 3. For collecting the `repository_id`, search for `octolytics-dimension-repository_id` string in the source code. The `content` value is your `repository_id` (`122059362` in the example image).
-
- ![Github Repo id](docs/github_repository_id.png)
-
- **Option 2: using github CLI**
-
- If you have access to the repository, you can use the following tools to collect the required values:
-
- - [gh](https://cli.github.com/)
- - [jq](https://jqlang.github.io/jq/download/)
-
- For collecting the `repository_project_id`:
-
- ```
- # If your repo URL is https://github.com/lifebit-ai/DeepVariant
- OWNER="lifebit-ai"
- REPO="DeepVariant"
- repository_project_id=$(gh api -H "Accept: application/vnd.github+json" repos/$OWNER/$REPO | jq .owner.id)
- echo $repository_project_id
- 30871219
- ```
-
- For collecting the `repository_id`:
-
- ```
- # If your repo URL is https://github.com/lifebit-ai/DeepVariant
- OWNER="lifebit-ai"
- REPO="DeepVariant"
- repository_id=$(gh api -H "Accept: application/vnd.github+json" repos/$OWNER/$REPO | jq .id)
- echo $repository_id
- 122059362
- ```
-
- **How to get `repository_project_id` from a Bitbucket server repository**
-
- For Bitbucket server repositories, only `repository_project_id` is required. To collect it:
-
- **Option 1: using the REST API from your browser**
-
- 1. Create a REST API URL from your repo URL by adding `/rest/api/latest` to the URL:
-
- ```
- Original URL: https://bitbucket.com/projects/MYPROJECT/repos/my-repo
- REST API URL: https://bitbucket.com/rest/api/latest/projects/MYPROJECT/repos/my-repo
- ```
-
- > IMPORTANT NOTE: Please, as your repository original URL, do not use the "clone" URL provided by Bitbucket (the one with `.git` extension), use the actual browser URL, removing the terminal `/browse`.
-
- 2. Use the REST API URL in a browser and it will generate a JSON output.
-
- 3. Your `repository_project_id` is the value of the `project.id` field.
-
- ![bitbucket project id](docs/bitbucket_project_id.png)
-
- **Option 2: using cURL**
-
- If you have access to the repository, you can use the following tools to collect the required value:
-
- - [cURL](https://curl.se/)
- - [jq](https://jqlang.github.io/jq/download/)
-
- For collecting the `repository_project_id`:
-
- ```
- BITBUCKET_TOKEN="xxx"
- repository_project_id=$(curl https://bitbucket.com/rest/api/latest/projects/MYPROJECT/repos/my-repo -H "Authorization: Bearer $BITBUCKET_TOKEN" | jq .project.id)
- echo $repository_project_id
- 1234
- ```

  #### Usage of the workflow import command

@@ -726,18 +643,13 @@ To import GitHub workflows to CloudOS, you can use the following command:
  # Example workflow to import: https://github.com/lifebit-ai/DeepVariant
  WORKFLOW_URL="https://github.com/lifebit-ai/DeepVariant"

- # You will need the repository_project_id and repository_id values explained above
- REPOSITORY_PROJECT_ID=30871219
- REPOSITORY_ID=122059362
-
  cloudos workflow import \
  --cloudos-url $CLOUDOS \
  --apikey $MY_API_KEY \
  --workspace-id $WORKSPACE_ID \
  --workflow-url $WORKFLOW_URL \
  --workflow-name "new_name_for_the_github_workflow" \
- --repository-project-id $REPOSITORY_PROJECT_ID \
- --repository-id $REPOSITORY_ID
+ --platform github
  ```

  The expected output will be:
@@ -762,25 +674,7 @@ cloudos workflow import \
  --workflow-url $WORKFLOW_URL \
  --workflow-name "new_name_for_the_github_workflow" \
  --workflow-docs-link "https://github.com/lifebit-ai/DeepVariant/blob/master/README.md" \
- --repository-project-id $REPOSITORY_PROJECT_ID \
- --repository-id $REPOSITORY_ID
- ```
-
- To import bitbucket server workflows, `--repository-id` parameter is not required:
-
- ```bash
- WORKFLOW_URL="https://bitbucket.com/projects/MYPROJECT/repos/my-repo"
-
- # You will need only the repository_project_id
- REPOSITORY_PROJECT_ID=1234
-
- cloudos workflow import \
- --cloudos-url $CLOUDOS \
- --apikey $MY_API_KEY \
- --workspace-id $WORKSPACE_ID \
- --workflow-url $WORKFLOW_URL \
- --workflow-name "new_name_for_the_bitbucket_workflow" \
- --repository-project-id $REPOSITORY_PROJECT_ID
+ --platform github
  ```

  > NOTE: please, take into account that importing workflows using cloudos-cli is not yet available in all the CloudOS workspaces. If you try to use this feature in a non-prepared workspace you will get the following error message: `It seems your API key is not authorised. Please check if your workspace has support for importing workflows using cloudos-cli`.
@@ -848,6 +742,25 @@ Platform workflows, i.e., those provided by CloudOS in your workspace as modules
  Therefore, CloudOS will automatically assign the valid queue and the user should not specify any queue using the `--job-queue` paramater.
  Any attempt of using this parameter will be ignored. Examples of such platform workflows are "System Tools" and "Data Factory" workflows.

+ #### Explore files programmatically
+
+ ##### Listing files
+
+ To list files present in File Explorer in a given project (whether they are analysis results, cohorts etc.), the user can run the following command:
+ ```
+ cloudos datasets ls <path> --profile <profile name>
+ ```
+ Please, note that in the above example a preconfigured profile has been used. If no profile is provided and there is no default profile, the user will need to provide the following commands:
+ ```bash
+ cloudos datasets ls <path> \
+ --cloudos-url $CLOUDOS \
+ --apikey $MY_API_KEY \
+ --workspace-id $WORKSPACE_ID \
+ --project-name $PROJEC_NAME
+ ```
+ The output of this command is a list of files and folders present in the specified project.
+ If the `<path>` is left empty, the command will return the list of folders present in the selected project.
+
  ### WDL pipeline support

  #### Cromwell server managing
@@ -597,91 +597,8 @@ The collected workflows are those that can be found in "WORKSPACE TOOLS" section
  You can import new workflows to your CloudOS workspaces. The only requirements are:

  - The workflow is a Nextflow pipeline.
- - The workflow repository is located at GitHub or Bitbucket server.
+ - The workflow repository is located at GitHub or GitLab (specified by the option `--platform`. Available options: `github`, `gitlab`)
  - If your repository is private, you have access to the repository and you have linked your GitHub or Bitbucket server accounts to CloudOS.
- - You have got the `repository_id` and the `repository_project_id`.
-
- **How to get `repository_id` and `repository_project_id` from a GitHub repository**
-
- **Option 1: searching in the page source code**
-
- 1. Go to the repository URL. Click on the right button of your mouse to get the following menu and click on "View Page Source".
-
- ![Github Repo right click](docs/github_right_click.png)
-
- 2. For collecting the `repository_project_id`, search for `octolytics-dimension-user_id` string in the source code. The `content` value is your `repository_project_id` (`30871219` in the example image).
-
- ![Github Repo owner id](docs/github_user_id.png)
-
- 3. For collecting the `repository_id`, search for `octolytics-dimension-repository_id` string in the source code. The `content` value is your `repository_id` (`122059362` in the example image).
-
- ![Github Repo id](docs/github_repository_id.png)
-
- **Option 2: using github CLI**
-
- If you have access to the repository, you can use the following tools to collect the required values:
-
- - [gh](https://cli.github.com/)
- - [jq](https://jqlang.github.io/jq/download/)
-
- For collecting the `repository_project_id`:
-
- ```
- # If your repo URL is https://github.com/lifebit-ai/DeepVariant
- OWNER="lifebit-ai"
- REPO="DeepVariant"
- repository_project_id=$(gh api -H "Accept: application/vnd.github+json" repos/$OWNER/$REPO | jq .owner.id)
- echo $repository_project_id
- 30871219
- ```
-
- For collecting the `repository_id`:
-
- ```
- # If your repo URL is https://github.com/lifebit-ai/DeepVariant
- OWNER="lifebit-ai"
- REPO="DeepVariant"
- repository_id=$(gh api -H "Accept: application/vnd.github+json" repos/$OWNER/$REPO | jq .id)
- echo $repository_id
- 122059362
- ```
-
- **How to get `repository_project_id` from a Bitbucket server repository**
-
- For Bitbucket server repositories, only `repository_project_id` is required. To collect it:
-
- **Option 1: using the REST API from your browser**
-
- 1. Create a REST API URL from your repo URL by adding `/rest/api/latest` to the URL:
-
- ```
- Original URL: https://bitbucket.com/projects/MYPROJECT/repos/my-repo
- REST API URL: https://bitbucket.com/rest/api/latest/projects/MYPROJECT/repos/my-repo
- ```
-
- > IMPORTANT NOTE: Please, as your repository original URL, do not use the "clone" URL provided by Bitbucket (the one with `.git` extension), use the actual browser URL, removing the terminal `/browse`.
-
- 2. Use the REST API URL in a browser and it will generate a JSON output.
-
- 3. Your `repository_project_id` is the value of the `project.id` field.
-
- ![bitbucket project id](docs/bitbucket_project_id.png)
-
- **Option 2: using cURL**
-
- If you have access to the repository, you can use the following tools to collect the required value:
-
- - [cURL](https://curl.se/)
- - [jq](https://jqlang.github.io/jq/download/)
-
- For collecting the `repository_project_id`:
-
- ```
- BITBUCKET_TOKEN="xxx"
- repository_project_id=$(curl https://bitbucket.com/rest/api/latest/projects/MYPROJECT/repos/my-repo -H "Authorization: Bearer $BITBUCKET_TOKEN" | jq .project.id)
- echo $repository_project_id
- 1234
- ```

  #### Usage of the workflow import command

@@ -691,18 +608,13 @@ To import GitHub workflows to CloudOS, you can use the following command:
  # Example workflow to import: https://github.com/lifebit-ai/DeepVariant
  WORKFLOW_URL="https://github.com/lifebit-ai/DeepVariant"

- # You will need the repository_project_id and repository_id values explained above
- REPOSITORY_PROJECT_ID=30871219
- REPOSITORY_ID=122059362
-
  cloudos workflow import \
  --cloudos-url $CLOUDOS \
  --apikey $MY_API_KEY \
  --workspace-id $WORKSPACE_ID \
  --workflow-url $WORKFLOW_URL \
  --workflow-name "new_name_for_the_github_workflow" \
- --repository-project-id $REPOSITORY_PROJECT_ID \
- --repository-id $REPOSITORY_ID
+ --platform github
  ```

  The expected output will be:
@@ -727,25 +639,7 @@ cloudos workflow import \
  --workflow-url $WORKFLOW_URL \
  --workflow-name "new_name_for_the_github_workflow" \
  --workflow-docs-link "https://github.com/lifebit-ai/DeepVariant/blob/master/README.md" \
- --repository-project-id $REPOSITORY_PROJECT_ID \
- --repository-id $REPOSITORY_ID
- ```
-
- To import bitbucket server workflows, `--repository-id` parameter is not required:
-
- ```bash
- WORKFLOW_URL="https://bitbucket.com/projects/MYPROJECT/repos/my-repo"
-
- # You will need only the repository_project_id
- REPOSITORY_PROJECT_ID=1234
-
- cloudos workflow import \
- --cloudos-url $CLOUDOS \
- --apikey $MY_API_KEY \
- --workspace-id $WORKSPACE_ID \
- --workflow-url $WORKFLOW_URL \
- --workflow-name "new_name_for_the_bitbucket_workflow" \
- --repository-project-id $REPOSITORY_PROJECT_ID
+ --platform github
  ```

  > NOTE: please, take into account that importing workflows using cloudos-cli is not yet available in all the CloudOS workspaces. If you try to use this feature in a non-prepared workspace you will get the following error message: `It seems your API key is not authorised. Please check if your workspace has support for importing workflows using cloudos-cli`.
@@ -813,6 +707,25 @@ Platform workflows, i.e., those provided by CloudOS in your workspace as modules
  Therefore, CloudOS will automatically assign the valid queue and the user should not specify any queue using the `--job-queue` paramater.
  Any attempt of using this parameter will be ignored. Examples of such platform workflows are "System Tools" and "Data Factory" workflows.

+ #### Explore files programmatically
+
+ ##### Listing files
+
+ To list files present in File Explorer in a given project (whether they are analysis results, cohorts etc.), the user can run the following command:
+ ```
+ cloudos datasets ls <path> --profile <profile name>
+ ```
+ Please, note that in the above example a preconfigured profile has been used. If no profile is provided and there is no default profile, the user will need to provide the following commands:
+ ```bash
+ cloudos datasets ls <path> \
+ --cloudos-url $CLOUDOS \
+ --apikey $MY_API_KEY \
+ --workspace-id $WORKSPACE_ID \
+ --project-name $PROJEC_NAME
+ ```
+ The output of this command is a list of files and folders present in the specified project.
+ If the `<path>` is left empty, the command will return the list of folders present in the selected project.
+
  ### WDL pipeline support

  #### Cromwell server managing
@@ -3,6 +3,7 @@
  import rich_click as click
  import cloudos_cli.jobs.job as jb
  from cloudos_cli.clos import Cloudos
+ from cloudos_cli.import_wf.import_wf import ImportGitlab, ImportGithub
  from cloudos_cli.queue.queue import Queue
  import json
  import time
@@ -11,7 +12,7 @@ import os
  import urllib3
  from ._version import __version__
  from cloudos_cli.configure.configure import ConfigurationProfile
-
+ from cloudos_cli.datasets import Datasets

  # GLOBAL VARS
  JOB_COMPLETED = 'completed'
@@ -64,9 +65,10 @@ def ssl_selector(disable_ssl_verification, ssl_cert):
  @click.pass_context
  def run_cloudos_cli(ctx):
  """CloudOS python package: a package for interacting with CloudOS."""
- print(run_cloudos_cli.__doc__ + '\n')
- print('Version: ' + __version__ + '\n')
  ctx.ensure_object(dict)
+ if ctx.invoked_subcommand not in ['datasets'] and ctx.args and ctx.args[0] == 'ls':
+ print(run_cloudos_cli.__doc__ + '\n')
+ print('Version: ' + __version__ + '\n')
  config_manager = ConfigurationProfile()
  profile_to_use = config_manager.determine_default_profile()
  if profile_to_use is None:
@@ -105,6 +107,9 @@ def run_cloudos_cli(ctx):
  },
  'bash': {
  'job': shared_config
+ },
+ 'datasets': {
+ 'ls': shared_config
  }
  })
  else:
@@ -143,6 +148,9 @@ def run_cloudos_cli(ctx):
  },
  'bash': {
  'job': shared_config
+ },
+ 'datasets': {
+ 'ls': shared_config
  }
  })

@@ -183,6 +191,14 @@ def bash():
  print(bash.__doc__ + '\n')


+ @run_cloudos_cli.group()
+ @click.pass_context
+ def datasets(ctx):
+ """CloudOS datasets functionality."""
+ if ctx.args and ctx.args[0] != 'ls':
+ print(datasets.__doc__ + '\n')
+
+
  @run_cloudos_cli.group(invoke_without_command=True)
  @click.option('--profile', help='Profile to use from the config file', default='default')
  @click.option('--make-default',
@@ -1037,29 +1053,21 @@ def list_workflows(ctx,
  required=True)
  @click.option('-c',
  '--cloudos-url',
- help=(f'The CloudOS url you are trying to access to. Default={CLOUDOS_URL}.'),
+ help=('The CloudOS url you are trying to access to. ' +
+ f'Default={CLOUDOS_URL}.'),
  default=CLOUDOS_URL)
  @click.option('--workspace-id',
  help='The specific CloudOS workspace id.',
  required=True)
- @click.option('--workflow-url',
- help=('URL of the workflow to import. Please, note that it should ' +
- 'be the URL shown in the browser, and it should come without ' +
- 'any of the .git or /browse extensions.'),
- required=True)
- @click.option('--workflow-name',
- help="The name that the workflow will have in CloudOS",
- required=True)
- @click.option('--workflow-docs-link',
- help="Workflow documentation URL.",
- default='')
- @click.option('--repository-project-id',
- type=int,
- help="The ID of your repository project",
- required=True)
- @click.option('--repository-id',
- type=int,
- help="The ID of your repository. Only required for GitHub repositories")
+ @click.option("--platform", type=click.Choice(["github", "gitlab"]),
+ help=('Repository service where the workflow is located. Valid choices: github, gitlab. ' +
+ 'Default=github'),
+ default="github")
+ @click.option("--workflow-name", help="The name that the workflow will have in CloudOS.", required=True)
+ @click.option("-w", "--workflow-url", help="URL of the workflow repository.", required=True)
+ @click.option("-d", "--workflow-docs-link", help="URL to the documentation of the workflow.", default='')
+ @click.option("--cost-limit", help="Cost limit for the workflow. Default: $30 USD.", default=30)
+ @click.option("--workflow-description", help="Workflow description", default="")
  @click.option('--disable-ssl-verification',
  help=('Disable SSL certificate verification. Please, remember that this option is ' +
  'not generally recommended for security reasons.'),
@@ -1068,19 +1076,22 @@ def list_workflows(ctx,
  help='Path to your SSL certificate file.')
  @click.option('--profile', help='Profile to use from the config file', default=None)
  @click.pass_context
- def import_workflows(ctx,
- apikey,
- cloudos_url,
- workspace_id,
- workflow_url,
- workflow_name,
- repository_project_id,
- workflow_docs_link,
- repository_id,
- disable_ssl_verification,
- ssl_cert,
- profile):
- """Imports workflows to CloudOS."""
+ def import_wf(ctx,
+ apikey,
+ cloudos_url,
+ workspace_id,
+ workflow_name,
+ workflow_url,
+ workflow_docs_link,
+ cost_limit,
+ workflow_description,
+ platform,
+ disable_ssl_verification,
+ ssl_cert,
+ profile):
+ """
+ Import workflows from supported repository providers.
+ """
  profile = profile or ctx.default_map['workflow']['import']['profile']
  # Create a dictionary with required and non-required params
  required_dict = {
@@ -1106,16 +1117,12 @@ def import_workflows(ctx,
  )

  verify_ssl = ssl_selector(disable_ssl_verification, ssl_cert)
- print('Executing workflow import...\n')
- print('\t[Message] Only Nextflow workflows are currently supported.\n')
- cl = Cloudos(cloudos_url, apikey, None)
- workflow_id = cl.workflow_import(workspace_id,
- workflow_url,
- workflow_name,
- repository_project_id,
- workflow_docs_link,
- repository_id,
- verify=verify_ssl)
+ repo_services = {"gitlab": ImportGitlab, "github": ImportGithub}
+ repo_cls = repo_services[platform]
+ repo_import = repo_cls(cloudos_url=cloudos_url, cloudos_apikey=apikey, workspace_id=workspace_id,
+ platform=platform, workflow_name=workflow_name, workflow_url=workflow_url,
+ workflow_docs_link=workflow_docs_link, cost_limit=cost_limit, workflow_description=workflow_description, verify=verify_ssl)
+ workflow_id = repo_import.import_workflow()
  print(f'\tWorkflow {workflow_name} was imported successfully with the ' +
  f'following ID: {workflow_id}')

@@ -1825,5 +1832,101 @@ def run_bash_job(ctx,
  f'\t\t--job-id {j_id}\n')


+ @datasets.command(name="ls")
+ @click.argument("path", required=False, nargs=1)
+ @click.option('-k',
+ '--apikey',
+ help='Your CloudOS API key.',
+ required=True)
+ @click.option('-c',
+ '--cloudos-url',
+ help=(f'The CloudOS url you are trying to access to. Default={CLOUDOS_URL}.'),
+ default=CLOUDOS_URL,
+ required=True)
+ @click.option('--workspace-id',
+ help='The specific CloudOS workspace id.',
+ required=True)
+ @click.option('--disable-ssl-verification',
+ help=('Disable SSL certificate verification. Please, remember that this option is ' +
+ 'not generally recommended for security reasons.'),
+ is_flag=True)
+ @click.option('--ssl-cert',
+ help='Path to your SSL certificate file.')
+ @click.option('--project-name',
+ help='The name of a CloudOS project.')
+ @click.option('--profile', help='Profile to use from the config file', default=None)
+ @click.pass_context
+ def list_files(ctx,
+ apikey,
+ cloudos_url,
+ workspace_id,
+ disable_ssl_verification,
+ ssl_cert,
+ project_name,
+ profile,
+ path):
+ """List contents of a path within a CloudOS workspace dataset."""
+
+ # fallback to ctx default if profile not specified
+ profile = profile or ctx.default_map['datasets']['list'].get('profile')
+
+ config_manager = ConfigurationProfile()
+
+ required_dict = {
+ 'apikey': True,
+ 'workspace_id': True,
+ 'workflow_name': False,
+ 'project_name': False
+ }
+
+ # Unpack profile values first
+ apikey, cloudos_url, workspace_id, workflow_name, repository_platform, execution_platform, project_name = (
+ config_manager.load_profile_and_validate_data(
+ ctx,
+ INIT_PROFILE,
+ CLOUDOS_URL,
+ profile=profile,
+ required_dict=required_dict,
+ apikey=apikey,
+ cloudos_url=cloudos_url,
+ workspace_id=workspace_id,
+ workflow_name=None,
+ repository_platform=None,
+ execution_platform=None,
+ project_name=project_name
+ )
+ )
+
+ verify_ssl = ssl_selector(disable_ssl_verification, ssl_cert)
+
+ datasets = Datasets(
+ cloudos_url=cloudos_url,
+ apikey=apikey,
+ workspace_id=workspace_id,
+ project_name=project_name,
+ verify=verify_ssl,
+ cromwell_token=None
+ )
+
+ try:
+ result = datasets.list_folder_content(path)
+ contents = result.get("contents") or result.get("datasets", [])
+ if not contents:
+ files = result.get("files", [])
+ folders = result.get("folders", [])
+ contents = [{"name": f["name"], "isDir": False} for f in files] + \
+ [{"name": f["name"], "isDir": True} for f in folders]
+
+ for item in contents:
+ name = item.get("name", "")
+ if item.get("isDir"):
+ name = click.style(name, fg="blue", underline=True)
+ click.echo(name)
+
+ except Exception as e:
+ click.echo(f"[ERROR] {str(e)}", err=True)
+
+
  if __name__ == "__main__":
  run_cloudos_cli()
+
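The hunk above registers a `datasets` group with an `ls` subcommand under the top-level `rich_click` CLI. The same group/subcommand shape can be sketched with stdlib `argparse` subparsers instead of click (hypothetical names only, no CloudOS API calls):

```python
import argparse

def build_parser() -> argparse.ArgumentParser:
    # Mirrors the CLI shape added in __main__.py: a "datasets" group
    # with an "ls" subcommand taking an optional PATH argument.
    parser = argparse.ArgumentParser(prog="cloudos")
    groups = parser.add_subparsers(dest="group")
    datasets = groups.add_parser("datasets", help="CloudOS datasets functionality.")
    sub = datasets.add_subparsers(dest="command")
    ls = sub.add_parser("ls", help="List contents of a path.")
    ls.add_argument("path", nargs="?", default=None)
    ls.add_argument("--profile", default=None)
    return parser

# Example invocation: cloudos datasets ls results/run1 --profile myprofile
args = build_parser().parse_args(["datasets", "ls", "results/run1", "--profile", "myprofile"])
print(args.group, args.command, args.path, args.profile)
```

As in the real command, the path argument is optional, matching the documented behaviour of listing top-level project folders when `<path>` is omitted.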
@@ -0,0 +1 @@
+ __version__ = '2.26.0'
@@ -0,0 +1,8 @@
+ """
+ Functions and classes related to datasets.
+ """
+
+ from .datasets import Datasets
+
+
+ __all__ = ['datasets']
@@ -0,0 +1,322 @@
1
+ """
2
+ This is the main class for file explorer (datasets).
3
+ """
4
+
5
+ from dataclasses import dataclass
6
+ from typing import Union
7
+ from cloudos_cli.clos import Cloudos
8
+ from cloudos_cli.utils.requests import retry_requests_get
9
+
10
+
11
+ @dataclass
12
+ class Datasets(Cloudos):
13
+ """Class for file explorer.
14
+
15
+ Parameters
16
+ ----------
17
+ cloudos_url : string
18
+ The CloudOS service url.
19
+ apikey : string
20
+ Your CloudOS API key.
21
+ workspace_id : string
22
+ The specific Cloudos workspace id.
23
+ project_name : string
24
+ The name of a CloudOS project.
25
+ verify: [bool|string]
26
+ Whether to use SSL verification or not. Alternatively, if
27
+ a string is passed, it will be interpreted as the path to
28
+ the SSL certificate file.
29
+ project_id : string
30
+ The CloudOS project id for a given project name.
31
+ """
32
+ workspace_id: str
33
+ project_name: str
34
+ verify: Union[bool, str] = True
35
+ project_id: str = None
36
+
37
+ @property
38
+ def project_id(self) -> str:
39
+ return self._project_id
40
+
41
+ @project_id.setter
42
+ def project_id(self, v) -> None:
43
+ if isinstance(v, property):
44
+ # Fetch the value as not defined by user.
45
+ self._project_id = self.fetch_cloudos_id(
46
+ self.apikey,
47
+ self.cloudos_url,
48
+ 'projects',
49
+ self.workspace_id,
50
+ self.project_name,
51
+ verify=self.verify)
52
+ else:
53
+ # Let the user define the value.
54
+         self._project_id = v
+
+     def fetch_cloudos_id(self,
+                          apikey,
+                          cloudos_url,
+                          resource,
+                          workspace_id,
+                          name,
+                          mainfile=None,
+                          importsfile=None,
+                          repository_platform='github',
+                          verify=True):
+         """Fetch the CloudOS id for a given resource name.
+
+         Parameters
+         ----------
+         apikey : string
+             Your CloudOS API key.
+         cloudos_url : string
+             The CloudOS service url.
+         resource : string
+             The resource you want to fetch from. E.g.: projects.
+         workspace_id : string
+             The specific CloudOS workspace id.
+         name : string
+             The name of a CloudOS resource element.
+         mainfile : string
+             The name of the mainFile used by the workflow. Only used when
+             resource == 'workflows'. Required for WDL pipelines, as different
+             mainFiles could be loaded for a single pipeline.
+         importsfile : string
+             The name of the importsFile used by the workflow. Optional and only
+             used for WDL pipelines, as different importsFiles could be loaded
+             for a single pipeline.
+         repository_platform : string
+             The name of the repository platform where the workflow resides.
+         verify : [bool|string]
+             Whether to use SSL verification or not. Alternatively, if
+             a string is passed, it will be interpreted as the path to
+             the SSL certificate file.
+
+         Returns
+         -------
+         string
+             The CloudOS id of the matching project or workflow.
+         """
+         allowed_resources = ['projects', 'workflows']
+         if resource not in allowed_resources:
+             raise ValueError('Your specified resource is not supported. ' +
+                              f'Use one of the following: {allowed_resources}')
+         if resource == 'workflows':
+             content = self.get_workflow_list(workspace_id, verify=verify)
+             for element in content:
+                 if (element["name"] == name and element["workflowType"] == "docker" and
+                         not element["archived"]["status"]):
+                     return element["_id"]  # docker workflows have no mainfile or importsfile
+                 if (element["name"] == name and
+                         element["repository"]["platform"] == repository_platform and
+                         not element["archived"]["status"]):
+                     if mainfile is None:
+                         return element["_id"]
+                     elif element["mainFile"] == mainfile:
+                         if importsfile is None and "importsFile" not in element.keys():
+                             return element["_id"]
+                         elif "importsFile" in element.keys() and element["importsFile"] == importsfile:
+                             return element["_id"]
+         elif resource == 'projects':
+             content = self.get_project_list(workspace_id, verify=verify)
+             # New API projects endpoint spec
+             for element in content:
+                 if element["name"] == name:
+                     return element["_id"]
+         if mainfile is not None:
+             raise ValueError(f'[ERROR] A workflow named \'{name}\' with a mainFile \'{mainfile}\'' +
+                              f' and an importsFile \'{importsfile}\' was not found')
+         else:
+             raise ValueError(f'[ERROR] No {name} element in {resource} was found')
+
+     def list_project_content(self):
+         """Fetch the datasets (top-level directories) present in the project.
+
+         Uses the instance's apikey, cloudos_url, workspace_id and project_id
+         attributes to query the CloudOS API.
+
+         Returns
+         -------
+         dict
+             JSON response listing the datasets of the project.
+         """
+         headers = {
+             "Content-type": "application/json",
+             "apikey": self.apikey
+         }
+         r = retry_requests_get("{}/api/v2/datasets?projectId={}&teamId={}".format(self.cloudos_url,
+                                                                                   self.project_id,
+                                                                                   self.workspace_id),
+                                headers=headers, verify=self.verify)
+         return r.json()
+
+     def list_datasets_content(self, folder_name):
+         """List the contents of a dataset identified by its name.
+
+         Uses the instance's apikey, cloudos_url, workspace_id and project_id
+         attributes to query the CloudOS API.
+
+         Parameters
+         ----------
+         folder_name : string
+             The requested dataset (top-level folder) name.
+
+         Returns
+         -------
+         dict
+             JSON response listing the items of the dataset.
+         """
+         # Prepare API request for CloudOS to fetch dataset info
+         headers = {
+             "Content-type": "application/json",
+             "apikey": self.apikey
+         }
+         pro_fol = self.list_project_content()
+         folder_id = None
+
+         if folder_name == 'AnalysesResults':
+             folder_name = 'Analyses Results'
+
+         for folder in pro_fol.get("datasets", []):
+             if folder['name'] == folder_name:
+                 folder_id = folder['_id']
+         if not folder_id:
+             raise ValueError(f"Folder '{folder_name}' not found in project '{self.project_name}'.")
+         r = retry_requests_get("{}/api/v1/datasets/{}/items?teamId={}".format(self.cloudos_url,
+                                                                               folder_id,
+                                                                               self.workspace_id),
+                                headers=headers, verify=self.verify)
+         return r.json()
+
+     def list_s3_folder_content(self, s3_bucket_name, s3_relative_path):
+         """List the contents of an S3 folder and normalize the response.
+
+         Parameters
+         ----------
+         s3_bucket_name : string
+             The S3 bucket name.
+         s3_relative_path : string
+             The relative path in the S3 bucket.
+
+         Returns
+         -------
+         dict
+             A dict with 'folders' and 'files' keys, mirroring the structure
+             returned by the datasets endpoints.
+         """
+         # Prepare API request for CloudOS to fetch dataset info
+         headers = {
+             "Content-type": "application/json",
+             "apikey": self.apikey
+         }
+
+         r = retry_requests_get("{}/api/v1/data-access/s3/bucket-contents?bucket={}&path={}&teamId={}".format(self.cloudos_url,
+                                                                                                             s3_bucket_name,
+                                                                                                             s3_relative_path,
+                                                                                                             self.workspace_id),
+                                headers=headers, verify=self.verify)
+         raw = r.json()
+
+         # Normalize response so it matches the datasets endpoints' structure
+         normalized = {"folders": [], "files": []}
+         for item in raw.get("contents", []):
+             item["s3BucketName"] = s3_bucket_name
+             item["s3Prefix"] = item['path']
+             if item.get("isDir"):
+                 item["folderType"] = "S3Folder"  # inject folderType so the wrapper can recurse
+                 normalized["folders"].append(item)
+             else:
+                 normalized["files"].append(item)
+
+         return normalized
+
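The normalization step above can be sketched as a standalone function. This is an illustration only: the sample response below is made up, but the field names (`contents`, `isDir`, `path`, `folderType`, `s3BucketName`, `s3Prefix`) follow the code above.

```python
def normalize_s3_contents(raw, s3_bucket_name):
    """Split a raw S3 bucket-contents response into folders and files,
    tagging each entry with its bucket and prefix as the method above does."""
    normalized = {"folders": [], "files": []}
    for item in raw.get("contents", []):
        item["s3BucketName"] = s3_bucket_name
        item["s3Prefix"] = item["path"]
        if item.get("isDir"):
            item["folderType"] = "S3Folder"  # mark folders so a caller can recurse
            normalized["folders"].append(item)
        else:
            normalized["files"].append(item)
    return normalized


# Hypothetical API response for demonstration
raw = {"contents": [
    {"name": "results", "path": "run1/results", "isDir": True},
    {"name": "log.txt", "path": "run1/log.txt", "isDir": False},
]}
listing = normalize_s3_contents(raw, "my-bucket")
```

Keeping the output shape identical to the datasets endpoints is what lets the `list_folder_content` wrapper treat S3-backed and virtual folders uniformly.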
+     def list_virtual_folder_content(self, folder_id):
+         """List the contents of a virtual folder identified by its id.
+
+         Parameters
+         ----------
+         folder_id : string
+             The id of the folder whose contents are to be listed.
+
+         Returns
+         -------
+         dict
+             JSON response listing the items of the folder.
+         """
+         headers = {
+             "Content-type": "application/json",
+             "apikey": self.apikey
+         }
+
+         r = retry_requests_get("{}/api/v1/folders/virtual/{}/items?teamId={}".format(self.cloudos_url,
+                                                                                     folder_id,
+                                                                                     self.workspace_id),
+                                headers=headers, verify=self.verify)
+         return r.json()
+
+     def list_folder_content(self, path=None):
+         """
+         Wrapper to list contents of a CloudOS folder.
+
+         Parameters
+         ----------
+         path : str, optional
+             A path like 'TopFolder', 'TopFolder/Subfolder', or deeper.
+             If None, lists all top-level datasets in the project.
+
+         Returns
+         -------
+         dict
+             JSON response from the appropriate CloudOS endpoint.
+         """
+         if not path:
+             return self.list_project_content()
+
+         parts = path.strip('/').split('/')
+
+         if len(parts) == 1:
+             return self.list_datasets_content(parts[0])
+
+         dataset_name = parts[0]
+         folder_content = self.list_datasets_content(dataset_name)
+
+         path_depth = 1
+         while path_depth < len(parts):
+             job_name = parts[path_depth]
+             found = False
+
+             for job_folder in folder_content.get("folders", []):
+                 if job_folder["name"] == job_name:
+                     found = True
+                     folder_type = job_folder.get("folderType")
+
+                     if folder_type == "S3Folder":
+                         s3_bucket_name = job_folder['s3BucketName']
+                         s3_relative_path = job_folder['s3Prefix']
+                         if path_depth == len(parts) - 1:
+                             return self.list_s3_folder_content(s3_bucket_name, s3_relative_path)
+                         else:
+                             sub_path = '/'.join(parts[0:path_depth + 1])
+                             folder_content = self.list_folder_content(sub_path)
+                             path_depth += 1
+                         break
+
+                     elif folder_type == "VirtualFolder":
+                         folder_id = job_folder['_id']
+                         if path_depth == len(parts) - 1:
+                             return self.list_virtual_folder_content(folder_id)
+                         else:
+                             sub_path = '/'.join(parts[0:path_depth + 1])
+                             folder_content = self.list_folder_content(sub_path)
+                             path_depth += 1
+                         break
+
+                     else:
+                         raise ValueError(f"Unsupported folder type '{folder_type}' for path '{path}'")
+
+             if not found:
+                 raise ValueError(f"Folder '{job_name}' not found under dataset '{dataset_name}'")
+
+         return folder_content
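The traversal performed by `list_folder_content` above can be illustrated with a small in-memory sketch. The tree, names, and ids below are made up for demonstration; the real method issues a CloudOS API call at each level and dispatches on `folderType`.

```python
# In-memory stand-ins for the per-level listings the API would return
# (hypothetical names and ids, for illustration only).
TREE = {
    "Data": {"folders": [{"name": "run1", "folderType": "VirtualFolder", "_id": "f1"}],
             "files": []},
    "f1": {"folders": [], "files": [{"name": "samples.csv"}]},
}


def resolve(path):
    """Resolve a slash-separated path one segment at a time,
    mirroring the wrapper's walk through nested folder listings."""
    parts = path.strip("/").split("/")
    content = TREE[parts[0]]  # top-level dataset listing
    for name in parts[1:]:
        match = next((f for f in content.get("folders", []) if f["name"] == name), None)
        if match is None:
            raise ValueError(f"Folder '{name}' not found")
        content = TREE[match["_id"]]  # descend via the matched folder's id
    return content


files = resolve("Data/run1")["files"]
```

The design point is that each path segment only needs the listing of its parent, so S3-backed and virtual folders can be mixed freely along one path.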
@@ -30,3 +30,15 @@ class TimeOutException(Exception):
                 "Status: {}; Reason: {}".format(rv.status_code, rv.reason))
         super(TimeOutException, self).__init__(msg)
         self.rv = rv
+
+
+ class AccountNotLinkedException(Exception):
+     """
+     Displays a meaningful message when the user tries to import a repository from an
+     account that is not linked with their CloudOS account.
+     """
+     def __init__(self, wf_url):
+         msg = (f"The pipeline at the URL {wf_url} cannot be imported. Check that your " +
+                "repository account has been linked in your CloudOS workspace")
+         super(AccountNotLinkedException, self).__init__(msg)
+         self.wf_url = wf_url
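A caller can surface this error cleanly by catching it around the import call. The sketch below redefines the exception inline so it is self-contained, and `import_workflow` is a hypothetical stand-in for the real, server-side account check.

```python
class AccountNotLinkedException(Exception):
    """Raised when the repository's account is not linked to the CloudOS workspace."""
    def __init__(self, wf_url):
        super().__init__(
            f"The pipeline at the URL {wf_url} cannot be imported. Check that your "
            "repository account has been linked in your CloudOS workspace"
        )
        self.wf_url = wf_url


def import_workflow(wf_url, account_linked):
    # Hypothetical caller; the real linkage check happens in the CloudOS API.
    if not account_linked:
        raise AccountNotLinkedException(wf_url)
    return {"url": wf_url}


try:
    import_workflow("https://github.com/org/repo", account_linked=False)
except AccountNotLinkedException as err:
    caught = err  # the failing URL stays available on the exception
```

Keeping `wf_url` on the exception lets the CLI report exactly which repository failed without re-parsing the message string.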
@@ -1,6 +1,6 @@
  Metadata-Version: 2.4
  Name: cloudos_cli
- Version: 2.24.0
+ Version: 2.26.0
  Summary: Python package for interacting with CloudOS
  Home-page: https://github.com/lifebit-ai/cloudos-cli
  Author: David Piñeyro
@@ -632,91 +632,8 @@ The collected workflows are those that can be found in "WORKSPACE TOOLS" section
  You can import new workflows to your CloudOS workspaces. The only requirements are:
 
  - The workflow is a Nextflow pipeline.
- - The workflow repository is located at GitHub or Bitbucket server.
+ - The workflow repository is located at GitHub or GitLab (specified by the `--platform` option; available options: `github`, `gitlab`).
  - If your repository is private, you have access to the repository and you have linked your GitHub or Bitbucket server accounts to CloudOS.
- - You have got the `repository_id` and the `repository_project_id`.
-
- **How to get `repository_id` and `repository_project_id` from a GitHub repository**
-
- **Option 1: searching in the page source code**
-
- 1. Go to the repository URL. Click on the right button of your mouse to get the following menu and click on "View Page Source".
-
- ![Github Repo right click](docs/github_right_click.png)
-
- 2. For collecting the `repository_project_id`, search for `octolytics-dimension-user_id` string in the source code. The `content` value is your `repository_project_id` (`30871219` in the example image).
-
- ![Github Repo owner id](docs/github_user_id.png)
-
- 3. For collecting the `repository_id`, search for `octolytics-dimension-repository_id` string in the source code. The `content` value is your `repository_id` (`122059362` in the example image).
-
- ![Github Repo id](docs/github_repository_id.png)
-
- **Option 2: using github CLI**
-
- If you have access to the repository, you can use the following tools to collect the required values:
-
- - [gh](https://cli.github.com/)
- - [jq](https://jqlang.github.io/jq/download/)
-
- For collecting the `repository_project_id`:
-
- ```
- # If your repo URL is https://github.com/lifebit-ai/DeepVariant
- OWNER="lifebit-ai"
- REPO="DeepVariant"
- repository_project_id=$(gh api -H "Accept: application/vnd.github+json" repos/$OWNER/$REPO | jq .owner.id)
- echo $repository_project_id
- 30871219
- ```
-
- For collecting the `repository_id`:
-
- ```
- # If your repo URL is https://github.com/lifebit-ai/DeepVariant
- OWNER="lifebit-ai"
- REPO="DeepVariant"
- repository_id=$(gh api -H "Accept: application/vnd.github+json" repos/$OWNER/$REPO | jq .id)
- echo $repository_id
- 122059362
- ```
-
- **How to get `repository_project_id` from a Bitbucket server repository**
-
- For Bitbucket server repositories, only `repository_project_id` is required. To collect it:
-
- **Option 1: using the REST API from your browser**
-
- 1. Create a REST API URL from your repo URL by adding `/rest/api/latest` to the URL:
-
- ```
- Original URL: https://bitbucket.com/projects/MYPROJECT/repos/my-repo
- REST API URL: https://bitbucket.com/rest/api/latest/projects/MYPROJECT/repos/my-repo
- ```
-
- > IMPORTANT NOTE: Please, as your repository original URL, do not use the "clone" URL provided by Bitbucket (the one with `.git` extension), use the actual browser URL, removing the terminal `/browse`.
-
- 2. Use the REST API URL in a browser and it will generate a JSON output.
-
- 3. Your `repository_project_id` is the value of the `project.id` field.
-
- ![bitbucket project id](docs/bitbucket_project_id.png)
-
- **Option 2: using cURL**
-
- If you have access to the repository, you can use the following tools to collect the required value:
-
- - [cURL](https://curl.se/)
- - [jq](https://jqlang.github.io/jq/download/)
-
- For collecting the `repository_project_id`:
-
- ```
- BITBUCKET_TOKEN="xxx"
- repository_project_id=$(curl https://bitbucket.com/rest/api/latest/projects/MYPROJECT/repos/my-repo -H "Authorization: Bearer $BITBUCKET_TOKEN" | jq .project.id)
- echo $repository_project_id
- 1234
- ```
 
  #### Usage of the workflow import command
 
@@ -726,18 +643,13 @@ To import GitHub workflows to CloudOS, you can use the following command:
  # Example workflow to import: https://github.com/lifebit-ai/DeepVariant
  WORKFLOW_URL="https://github.com/lifebit-ai/DeepVariant"
 
- # You will need the repository_project_id and repository_id values explained above
- REPOSITORY_PROJECT_ID=30871219
- REPOSITORY_ID=122059362
-
  cloudos workflow import \
  --cloudos-url $CLOUDOS \
  --apikey $MY_API_KEY \
  --workspace-id $WORKSPACE_ID \
  --workflow-url $WORKFLOW_URL \
  --workflow-name "new_name_for_the_github_workflow" \
- --repository-project-id $REPOSITORY_PROJECT_ID \
- --repository-id $REPOSITORY_ID
+ --platform github
  ```
 
  The expected output will be:
@@ -762,25 +674,7 @@ cloudos workflow import \
  --workflow-url $WORKFLOW_URL \
  --workflow-name "new_name_for_the_github_workflow" \
  --workflow-docs-link "https://github.com/lifebit-ai/DeepVariant/blob/master/README.md" \
- --repository-project-id $REPOSITORY_PROJECT_ID \
- --repository-id $REPOSITORY_ID
- ```
-
- To import bitbucket server workflows, `--repository-id` parameter is not required:
-
- ```bash
- WORKFLOW_URL="https://bitbucket.com/projects/MYPROJECT/repos/my-repo"
-
- # You will need only the repository_project_id
- REPOSITORY_PROJECT_ID=1234
-
- cloudos workflow import \
- --cloudos-url $CLOUDOS \
- --apikey $MY_API_KEY \
- --workspace-id $WORKSPACE_ID \
- --workflow-url $WORKFLOW_URL \
- --workflow-name "new_name_for_the_bitbucket_workflow" \
- --repository-project-id $REPOSITORY_PROJECT_ID
+ --platform github
  ```
 
  > NOTE: please, take into account that importing workflows using cloudos-cli is not yet available in all the CloudOS workspaces. If you try to use this feature in a non-prepared workspace you will get the following error message: `It seems your API key is not authorised. Please check if your workspace has support for importing workflows using cloudos-cli`.
@@ -848,6 +742,25 @@ Platform workflows, i.e., those provided by CloudOS in your workspace as modules
  Therefore, CloudOS will automatically assign the valid queue and the user should not specify any queue using the `--job-queue` parameter.
  Any attempt to use this parameter will be ignored. Examples of such platform workflows are "System Tools" and "Data Factory" workflows.
 
+ #### Explore files programmatically
+
+ ##### Listing files
+
+ To list files present in File Explorer in a given project (whether they are analysis results, cohorts, etc.), the user can run the following command:
+ ```
+ cloudos datasets ls <path> --profile <profile name>
+ ```
+ Please note that the above example uses a preconfigured profile. If no profile is provided and there is no default profile, the user will need to provide the following options:
+ ```bash
+ cloudos datasets ls <path> \
+ --cloudos-url $CLOUDOS \
+ --apikey $MY_API_KEY \
+ --workspace-id $WORKSPACE_ID \
+ --project-name $PROJECT_NAME
+ ```
+ The output of this command is a list of files and folders present in the specified project.
+ If the `<path>` is left empty, the command will return the list of folders present in the selected project.
+
 
  ### WDL pipeline support
 
  #### Cromwell server managing
@@ -13,6 +13,8 @@ cloudos_cli.egg-info/requires.txt
  cloudos_cli.egg-info/top_level.txt
  cloudos_cli/configure/__init__.py
  cloudos_cli/configure/configure.py
+ cloudos_cli/datasets/__init__.py
+ cloudos_cli/datasets/datasets.py
  cloudos_cli/jobs/__init__.py
  cloudos_cli/jobs/job.py
  cloudos_cli/queue/__init__.py
@@ -1 +0,0 @@
- __version__ = '2.24.0'