cloudos-cli 2.27.0__tar.gz → 2.30.0__tar.gz

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (32)
  1. {cloudos_cli-2.27.0 → cloudos_cli-2.30.0}/PKG-INFO +98 -5
  2. {cloudos_cli-2.27.0 → cloudos_cli-2.30.0}/README.md +97 -4
  3. {cloudos_cli-2.27.0 → cloudos_cli-2.30.0}/cloudos_cli/__main__.py +387 -18
  4. cloudos_cli-2.30.0/cloudos_cli/_version.py +1 -0
  5. {cloudos_cli-2.27.0 → cloudos_cli-2.30.0}/cloudos_cli/datasets/datasets.py +48 -7
  6. {cloudos_cli-2.27.0 → cloudos_cli-2.30.0}/cloudos_cli/import_wf/__init__.py +1 -1
  7. {cloudos_cli-2.27.0 → cloudos_cli-2.30.0}/cloudos_cli/import_wf/import_wf.py +33 -29
  8. {cloudos_cli-2.27.0 → cloudos_cli-2.30.0}/cloudos_cli.egg-info/PKG-INFO +98 -5
  9. cloudos_cli-2.27.0/cloudos_cli/_version.py +0 -1
  10. {cloudos_cli-2.27.0 → cloudos_cli-2.30.0}/LICENSE +0 -0
  11. {cloudos_cli-2.27.0 → cloudos_cli-2.30.0}/cloudos_cli/__init__.py +0 -0
  12. {cloudos_cli-2.27.0 → cloudos_cli-2.30.0}/cloudos_cli/clos.py +0 -0
  13. {cloudos_cli-2.27.0 → cloudos_cli-2.30.0}/cloudos_cli/configure/__init__.py +0 -0
  14. {cloudos_cli-2.27.0 → cloudos_cli-2.30.0}/cloudos_cli/configure/configure.py +0 -0
  15. {cloudos_cli-2.27.0 → cloudos_cli-2.30.0}/cloudos_cli/datasets/__init__.py +0 -0
  16. {cloudos_cli-2.27.0 → cloudos_cli-2.30.0}/cloudos_cli/jobs/__init__.py +0 -0
  17. {cloudos_cli-2.27.0 → cloudos_cli-2.30.0}/cloudos_cli/jobs/job.py +0 -0
  18. {cloudos_cli-2.27.0 → cloudos_cli-2.30.0}/cloudos_cli/queue/__init__.py +0 -0
  19. {cloudos_cli-2.27.0 → cloudos_cli-2.30.0}/cloudos_cli/queue/queue.py +0 -0
  20. {cloudos_cli-2.27.0 → cloudos_cli-2.30.0}/cloudos_cli/utils/__init__.py +0 -0
  21. {cloudos_cli-2.27.0 → cloudos_cli-2.30.0}/cloudos_cli/utils/errors.py +0 -0
  22. {cloudos_cli-2.27.0 → cloudos_cli-2.30.0}/cloudos_cli/utils/requests.py +0 -0
  23. {cloudos_cli-2.27.0 → cloudos_cli-2.30.0}/cloudos_cli/utils/resources.py +0 -0
  24. {cloudos_cli-2.27.0 → cloudos_cli-2.30.0}/cloudos_cli.egg-info/SOURCES.txt +0 -0
  25. {cloudos_cli-2.27.0 → cloudos_cli-2.30.0}/cloudos_cli.egg-info/dependency_links.txt +0 -0
  26. {cloudos_cli-2.27.0 → cloudos_cli-2.30.0}/cloudos_cli.egg-info/entry_points.txt +0 -0
  27. {cloudos_cli-2.27.0 → cloudos_cli-2.30.0}/cloudos_cli.egg-info/requires.txt +0 -0
  28. {cloudos_cli-2.27.0 → cloudos_cli-2.30.0}/cloudos_cli.egg-info/top_level.txt +0 -0
  29. {cloudos_cli-2.27.0 → cloudos_cli-2.30.0}/setup.cfg +0 -0
  30. {cloudos_cli-2.27.0 → cloudos_cli-2.30.0}/setup.py +0 -0
  31. {cloudos_cli-2.27.0 → cloudos_cli-2.30.0}/tests/__init__.py +0 -0
  32. {cloudos_cli-2.27.0 → cloudos_cli-2.30.0}/tests/functions_for_pytest.py +0 -0
@@ -1,6 +1,6 @@
 Metadata-Version: 2.4
 Name: cloudos_cli
-Version: 2.27.0
+Version: 2.30.0
 Summary: Python package for interacting with CloudOS
 Home-page: https://github.com/lifebit-ai/cloudos-cli
 Author: David Piñeyro
@@ -535,6 +535,69 @@ Executing status...
 To further check your job status you can either go to https://cloudos.lifebit.ai/app/advanced-analytics/analyses/62c83a1191fe06013b7ef355 or repeat the command you just used.
 ```
 
+#### Check job details
+
+To check the details of a submitted job, the `details` subcommand of `job` can be used.
+
+For example, with explicit variables for the required parameters:
+
+```bash
+cloudos job details \
+    --apikey $MY_API_KEY \
+    --job-id 62c83a1191fe06013b7ef355
+```
+
+Or with a defined profile:
+
+```bash
+cloudos job details \
+    --profile job-details \
+    --job-id 62c83a1191fe06013b7ef355
+```
+
+When using the defaults, the details are displayed in the standard output console, and the expected output should be similar to:
+
+```console
+Executing details...
+                                             Job Details
+┏━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓
+┃ Field                    ┃ Value                                                                   ┃
+┡━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┩
+│ Parameters               │ -test=value                                                             │
+│                          │ --gaq=test                                                              │
+│                          │ cryo=yes                                                                │
+│ Command                  │ echo 'test' > new_file.txt                                              │
+│ Revision                 │ sha256:6015f66923d7afbc53558d7ccffd325d43b4e249f41a6e93eef074c9505d2233 │
+│ Nextflow Version         │ None                                                                    │
+│ Execution Platform       │ Batch AWS                                                               │
+│ Profile                  │ None                                                                    │
+│ Master Instance          │ c5.xlarge                                                               │
+│ Storage                  │ 500                                                                     │
+│ Job Queue                │ nextflow-job-queue-5c6d3e9bd954e800b23f8c62-feee                        │
+│ Accelerated File Staging │ None                                                                    │
+│ Task Resources           │ 1 CPUs, 4 GB RAM                                                        │
+└──────────────────────────┴─────────────────────────────────────────────────────────────────────────┘
+```
+
+To change this behaviour and save the details as a local JSON file, set the parameter `--output-format` to `--output-format=json`.
+
+By default, all details are saved in a file with the basename `job_details`, for example `job_details.json` or `job_details.config`. This can be changed with the parameter `--output-basename=new_filename`.
+
+The `details` subcommand can also take the `--parameters` flag, which will create a new `*.config` file that holds all parameters as a Nextflow configuration file, for example:
+
+```console
+params {
+    parameter_one = value_one
+    parameter_two = value_two
+    parameter_three = value_three
+}
+```
+
+This file can later be used when running a job with `cloudos job run --job-config job_details.config ...`.
+
+> [!NOTE]
+> Job details can only be retrieved for your own jobs; you cannot see other users' job details.
+
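
The JSON written by `--output-format=json` stores the same fields as the table as plain key/value pairs, so it can be consumed programmatically. A minimal sketch of reading it back (the file name assumes the default `--output-basename`; the keys mirror the fields the CLI writes):

```python
import json

# Read the file produced by `cloudos job details --output-format=json`.
# "job_details.json" assumes the default --output-basename.
with open("job_details.json") as fh:
    details = json.load(fh)

# Field names match the table shown above, e.g. "Job Status",
# "Execution Platform" or "Task Resources".
print(details["Execution Platform"])
print(details["Task Resources"])
```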
 #### Get a list of your jobs from a CloudOS workspace
 
 You can get a summary of your last 30 submitted jobs (or your selected number of last jobs using `--last-n-jobs n`
@@ -632,8 +695,8 @@ The collected workflows are those that can be found in "WORKSPACE TOOLS" section
 You can import new workflows to your CloudOS workspaces. The only requirements are:
 
 - The workflow is a Nextflow pipeline.
-- The workflow repository is located at GitHub or GitLab (specified by the option `--platform`. Available options: `github`, `gitlab`)
-- If your repository is private, you have access to the repository and you have linked your GitHub or Bitbucket server accounts to CloudOS.
+- The workflow repository is located at GitHub, GitLab or Bitbucket Server (specified by the option `--repository-platform`. Available options: `github`, `gitlab` and `bitbucketServer`)
+- If your repository is private, you have access to the repository and have linked your GitHub, GitLab or Bitbucket Server account to CloudOS.
 
 #### Usage of the workflow import command
 
@@ -649,7 +712,7 @@ cloudos workflow import \
     --workspace-id $WORKSPACE_ID \
     --workflow-url $WORKFLOW_URL \
     --workflow-name "new_name_for_the_github_workflow" \
-    --platform github
+    --repository-platform github
 ```
 
 The expected output will be:
@@ -674,7 +737,7 @@ cloudos workflow import \
     --workflow-url $WORKFLOW_URL \
     --workflow-name "new_name_for_the_github_workflow" \
     --workflow-docs-link "https://github.com/lifebit-ai/DeepVariant/blob/master/README.md" \
-    --platform github
+    --repository-platform github
 ```
 
 > NOTE: please, take into account that importing workflows using cloudos-cli is not yet available in all the CloudOS workspaces. If you try to use this feature in a non-prepared workspace you will get the following error message: `It seems your API key is not authorised. Please check if your workspace has support for importing workflows using cloudos-cli`.
@@ -771,6 +834,36 @@ If you require more information on the files and folder listed, you can use the
 - Filepath (the file or folder name)
 - S3 Path
 
+##### Moving files
+
+Files and folders can be moved **from** `Data` or any of its subfolders (e.g. `Data`, `Data/folder/file.txt`) **to** `Data` or any of its subfolders programmatically.
+
+1. The move can happen **within the same project** by running the following command:
+```bash
+cloudos datasets mv <source_path> <destination_path> --profile <profile_name>
+```
+where both the source and the destination project are the one defined in the profile.
+
+2. The move can also happen **across different projects** within the same workspace by running the following command:
+```bash
+cloudos datasets mv <source_path> <destination_path> --profile <profile_name> --destination-project-name <project_name>
+```
+In this case, only the source project is the one specified in the profile.
+
+The `source_path` must be a full path, starting from the `Data` dataset and including its folders; the `destination_path` must be a path starting with `Data` and ending with the folder the file/folder is moved to. An example of such a command is:
+
+```bash
+cloudos datasets mv Data/results/my_plot.png Data/plots
+```
+
+Please note that in the above example a preconfigured profile has been used. If no profile is provided and there is no default profile, the user will also need to provide the following flags:
+```bash
+--cloudos-url $CLOUDOS \
+--apikey $MY_API_KEY \
+--workspace-id $WORKSPACE_ID \
+--project-name $PROJECT_NAME
+```
+
 ### WDL pipeline support
 
 #### Cromwell server managing
@@ -500,6 +500,69 @@ Executing status...
 To further check your job status you can either go to https://cloudos.lifebit.ai/app/advanced-analytics/analyses/62c83a1191fe06013b7ef355 or repeat the command you just used.
 ```
 
+#### Check job details
+
+To check the details of a submitted job, the `details` subcommand of `job` can be used.
+
+For example, with explicit variables for the required parameters:
+
+```bash
+cloudos job details \
+    --apikey $MY_API_KEY \
+    --job-id 62c83a1191fe06013b7ef355
+```
+
+Or with a defined profile:
+
+```bash
+cloudos job details \
+    --profile job-details \
+    --job-id 62c83a1191fe06013b7ef355
+```
+
+When using the defaults, the details are displayed in the standard output console, and the expected output should be similar to:
+
+```console
+Executing details...
+                                             Job Details
+┏━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓
+┃ Field                    ┃ Value                                                                   ┃
+┡━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┩
+│ Parameters               │ -test=value                                                             │
+│                          │ --gaq=test                                                              │
+│                          │ cryo=yes                                                                │
+│ Command                  │ echo 'test' > new_file.txt                                              │
+│ Revision                 │ sha256:6015f66923d7afbc53558d7ccffd325d43b4e249f41a6e93eef074c9505d2233 │
+│ Nextflow Version         │ None                                                                    │
+│ Execution Platform       │ Batch AWS                                                               │
+│ Profile                  │ None                                                                    │
+│ Master Instance          │ c5.xlarge                                                               │
+│ Storage                  │ 500                                                                     │
+│ Job Queue                │ nextflow-job-queue-5c6d3e9bd954e800b23f8c62-feee                        │
+│ Accelerated File Staging │ None                                                                    │
+│ Task Resources           │ 1 CPUs, 4 GB RAM                                                        │
+└──────────────────────────┴─────────────────────────────────────────────────────────────────────────┘
+```
+
+To change this behaviour and save the details as a local JSON file, set the parameter `--output-format` to `--output-format=json`.
+
+By default, all details are saved in a file with the basename `job_details`, for example `job_details.json` or `job_details.config`. This can be changed with the parameter `--output-basename=new_filename`.
+
+The `details` subcommand can also take the `--parameters` flag, which will create a new `*.config` file that holds all parameters as a Nextflow configuration file, for example:
+
+```console
+params {
+    parameter_one = value_one
+    parameter_two = value_two
+    parameter_three = value_three
+}
+```
+
+This file can later be used when running a job with `cloudos job run --job-config job_details.config ...`.
+
+> [!NOTE]
+> Job details can only be retrieved for your own jobs; you cannot see other users' job details.
+
 #### Get a list of your jobs from a CloudOS workspace
 
 You can get a summary of your last 30 submitted jobs (or your selected number of last jobs using `--last-n-jobs n`
@@ -597,8 +660,8 @@ The collected workflows are those that can be found in "WORKSPACE TOOLS" section
 You can import new workflows to your CloudOS workspaces. The only requirements are:
 
 - The workflow is a Nextflow pipeline.
-- The workflow repository is located at GitHub or GitLab (specified by the option `--platform`. Available options: `github`, `gitlab`)
-- If your repository is private, you have access to the repository and you have linked your GitHub or Bitbucket server accounts to CloudOS.
+- The workflow repository is located at GitHub, GitLab or Bitbucket Server (specified by the option `--repository-platform`. Available options: `github`, `gitlab` and `bitbucketServer`)
+- If your repository is private, you have access to the repository and have linked your GitHub, GitLab or Bitbucket Server account to CloudOS.
 
 #### Usage of the workflow import command
 
@@ -614,7 +677,7 @@ cloudos workflow import \
     --workspace-id $WORKSPACE_ID \
     --workflow-url $WORKFLOW_URL \
     --workflow-name "new_name_for_the_github_workflow" \
-    --platform github
+    --repository-platform github
 ```
 
 The expected output will be:
@@ -639,7 +702,7 @@ cloudos workflow import \
     --workflow-url $WORKFLOW_URL \
     --workflow-name "new_name_for_the_github_workflow" \
     --workflow-docs-link "https://github.com/lifebit-ai/DeepVariant/blob/master/README.md" \
-    --platform github
+    --repository-platform github
 ```
 
 > NOTE: please, take into account that importing workflows using cloudos-cli is not yet available in all the CloudOS workspaces. If you try to use this feature in a non-prepared workspace you will get the following error message: `It seems your API key is not authorised. Please check if your workspace has support for importing workflows using cloudos-cli`.
@@ -736,6 +799,36 @@ If you require more information on the files and folder listed, you can use the
 - Filepath (the file or folder name)
 - S3 Path
 
+##### Moving files
+
+Files and folders can be moved **from** `Data` or any of its subfolders (e.g. `Data`, `Data/folder/file.txt`) **to** `Data` or any of its subfolders programmatically.
+
+1. The move can happen **within the same project** by running the following command:
+```bash
+cloudos datasets mv <source_path> <destination_path> --profile <profile_name>
+```
+where both the source and the destination project are the one defined in the profile.
+
+2. The move can also happen **across different projects** within the same workspace by running the following command:
+```bash
+cloudos datasets mv <source_path> <destination_path> --profile <profile_name> --destination-project-name <project_name>
+```
+In this case, only the source project is the one specified in the profile.
+
+The `source_path` must be a full path, starting from the `Data` dataset and including its folders; the `destination_path` must be a path starting with `Data` and ending with the folder the file/folder is moved to. An example of such a command is:
+
+```bash
+cloudos datasets mv Data/results/my_plot.png Data/plots
+```
+
+Please note that in the above example a preconfigured profile has been used. If no profile is provided and there is no default profile, the user will also need to provide the following flags:
+```bash
+--cloudos-url $CLOUDOS \
+--apikey $MY_API_KEY \
+--workspace-id $WORKSPACE_ID \
+--project-name $PROJECT_NAME
+```
+
 ### WDL pipeline support
 
 #### Cromwell server managing
@@ -3,17 +3,18 @@
 import rich_click as click
 import cloudos_cli.jobs.job as jb
 from cloudos_cli.clos import Cloudos
-from cloudos_cli.import_wf.import_wf import ImportGitlab, ImportGithub
+from cloudos_cli.import_wf.import_wf import ImportWorflow
 from cloudos_cli.queue.queue import Queue
+from cloudos_cli.utils.errors import BadRequestException
 import json
 import time
 import sys
 from ._version import __version__
 from cloudos_cli.configure.configure import ConfigurationProfile
-from cloudos_cli.datasets import Datasets
-from cloudos_cli.utils.resources import ssl_selector, format_bytes
 from rich.console import Console
 from rich.table import Table
+from cloudos_cli.datasets import Datasets
+from cloudos_cli.utils.resources import ssl_selector, format_bytes
 from rich.style import Style
 
 
@@ -59,6 +60,7 @@ def run_cloudos_cli(ctx):
             'abort': shared_config,
             'status': shared_config,
             'list': shared_config,
+            'details': shared_config
         },
         'workflow': {
             'list': shared_config,
@@ -79,7 +81,8 @@ def run_cloudos_cli(ctx):
             'job': shared_config
         },
         'datasets': {
-            'ls': shared_config
+            'ls': shared_config,
+            'mv': shared_config
         }
     })
 else:
@@ -100,6 +103,7 @@ def run_cloudos_cli(ctx):
             'abort': shared_config,
             'status': shared_config,
             'list': shared_config,
+            'details': shared_config
         },
         'workflow': {
             'list': shared_config,
@@ -120,7 +124,8 @@ def run_cloudos_cli(ctx):
             'job': shared_config
         },
         'datasets': {
-            'ls': shared_config
+            'ls': shared_config,
+            'mv': shared_config
         }
     })
 
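
These nested `shared_config` entries feed click's `default_map` mechanism: each dictionary level mirrors a level of the command tree, so a new subcommand such as `details` or `mv` only needs its own key. A minimal, self-contained sketch of the pattern, with a hypothetical option and values rather than the package's real configuration:

```python
import rich_click as click

# Hypothetical shared defaults, standing in for the profile-derived config.
shared_config = {"profile": "default"}

@click.group()
@click.pass_context
def cli(ctx):
    # Keys mirror the command tree: group -> subcommand -> option name.
    ctx.default_map = {"job": {"details": shared_config}}

@cli.group()
def job():
    pass

@job.command("details")
@click.option("--profile")
def details(profile):
    click.echo(f"profile={profile}")  # prints "profile=default" when not given

if __name__ == "__main__":
    cli()
```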
@@ -284,7 +289,7 @@ def configure(ctx, profile, make_default):
               '--cromwell-token',
               help=('Specific Cromwell server authentication token. Currently, not necessary ' +
                     'as apikey can be used instead, but maintained for backwards compatibility.'))
-@click.option('--repository-platform',
+@click.option('--repository-platform', type=click.Choice(["github", "gitlab", "bitbucketServer"]),
               help='Name of the repository platform of the workflow. Default=github.',
               default='github')
 @click.option('--execution-platform',
@@ -696,6 +701,222 @@ def job_status(ctx,
           'or repeat the command you just used.')
 
 
+@job.command('details')
+@click.option('-k',
+              '--apikey',
+              help='Your CloudOS API key',
+              required=True)
+@click.option('-c',
+              '--cloudos-url',
+              help=(f'The CloudOS url you are trying to access to. Default={CLOUDOS_URL}.'),
+              default=CLOUDOS_URL)
+@click.option('--job-id',
+              help='The job id in CloudOS to search for.',
+              required=True)
+@click.option('--output-format',
+              help='The desired display for the output, either directly in standard output or saved as file. Default=stdout.',
+              type=click.Choice(['stdout', 'json'], case_sensitive=False),
+              default='stdout')
+@click.option('--output-basename',
+              help=('Output file base name to save jobs details. ' +
+                    'Default=job_details'),
+              default='job_details',
+              required=False)
+@click.option('--parameters',
+              help=('Whether to generate a ".config" file that can be used as input for --job-config parameter. ' +
+                    'It will have the same basename as defined in "--output-basename". '),
+              is_flag=True)
+@click.option('--verbose',
+              help='Whether to print information messages or not.',
+              is_flag=True)
+@click.option('--disable-ssl-verification',
+              help=('Disable SSL certificate verification. Please, remember that this option is ' +
+                    'not generally recommended for security reasons.'),
+              is_flag=True)
+@click.option('--ssl-cert',
+              help='Path to your SSL certificate file.')
+@click.option('--profile', help='Profile to use from the config file', default=None)
+@click.pass_context
+def job_details(ctx,
+                apikey,
+                cloudos_url,
+                job_id,
+                output_format,
+                output_basename,
+                parameters,
+                verbose,
+                disable_ssl_verification,
+                ssl_cert,
+                profile):
+    """Retrieve job details in CloudOS."""
+    profile = profile or ctx.default_map['job']['details']['profile']
+    # Create a dictionary with required and non-required params
+    required_dict = {
+        'apikey': True,
+        'workspace_id': False,
+        'workflow_name': False,
+        'project_name': False
+    }
+    # determine if the user provided all required parameters
+    config_manager = ConfigurationProfile()
+    apikey, cloudos_url, workspace_id, workflow_name, repository_platform, execution_platform, project_name = (
+        config_manager.load_profile_and_validate_data(
+            ctx,
+            INIT_PROFILE,
+            CLOUDOS_URL,
+            profile=profile,
+            required_dict=required_dict,
+            apikey=apikey,
+            cloudos_url=cloudos_url
+        )
+    )
+
+    print('Executing details...')
+    verify_ssl = ssl_selector(disable_ssl_verification, ssl_cert)
+    if verbose:
+        print('\t...Preparing objects')
+    cl = Cloudos(cloudos_url, apikey, None)
+    if verbose:
+        print('\tThe following Cloudos object was created:')
+        print('\t' + str(cl) + '\n')
+        print(f'\tSearching for job id: {job_id}')
+
+    # check if the API gives a 403 error/forbidden error
+    try:
+        j_details = cl.get_job_status(job_id, verify_ssl)
+    except BadRequestException as e:
+        if '403' in str(e) or 'Forbidden' in str(e):
+            print("[Error] API can only show job details of your own jobs, cannot see other users' job details.")
+        sys.exit(1)
+    j_details_h = json.loads(j_details.content)
+
+    # Check if the job details contain parameters
+    if j_details_h["parameters"] != []:
+        param_kind_map = {
+            'textValue': 'textValue',
+            'arrayFileColumn': 'columnName',
+            'globPattern': 'globPattern',
+            'lustreFileSystem': 'fileSystem',
+        }
+        # there are different types of parameters: arrayFileColumn, globPattern, lustreFileSystem
+        # get first the type of parameter, then the value based on the parameter kind
+        concats = []
+        for param in j_details_h["parameters"]:
+            if param['parameterKind'] == 'dataItem':
+                # For dataItem, we need to use specific nested keys
+                concats.append(f"{param['prefix']}{param['name']}={param['dataItem']['item']['name']}")
+            else:
+                # For other parameter kinds, we use the appropriate key from param_kind_map
+                concats.append(f"{param['prefix']}{param['name']}={param[param_kind_map[param['parameterKind']]]}")
+        concat_string = '\n'.join(concats)
+        # If the user requested to save the parameters in a config file
+        if parameters:
+            # Create a config file with the parameters
+            config_filename = f"{output_basename}.config"
+            with open(config_filename, 'w') as config_file:
+                config_file.write("params {\n")
+                for param in j_details_h["parameters"]:
+                    config_file.write(f"\t{param['name']} = {param['textValue']}\n")
+                config_file.write("}\n")
+            print(f"\tJob parameters have been saved to '{config_filename}'")
+    else:
+        concat_string = 'No parameters provided'
+        if parameters:
+            print("\tNo parameters found in the job details, no config file will be created.")
+
+    # Determine the execution platform based on jobType
+    executors = {
+        'nextflowAWS': 'Batch AWS',
+        'nextflowAzure': 'Batch Azure',
+        'nextflowGcp': 'GCP',
+        'nextflowHpc': 'HPC',
+        'nextflowKubernetes': 'Kubernetes',
+        'dockerAWS': 'Batch AWS',
+        'cromwellAWS': 'Batch AWS'
+    }
+    execution_platform = executors.get(j_details_h["jobType"], "None")
+
+    # revision
+    if j_details_h["jobType"] == "dockerAWS":
+        revision = j_details_h["revision"]["digest"]
+    else:
+        revision = j_details_h["revision"]["commit"]
+
+    # Output the job details
+    if output_format == 'stdout':
+        console = Console()
+        table = Table(title="Job Details")
+
+        table.add_column("Field", style="cyan", no_wrap=True)
+        table.add_column("Value", style="magenta", overflow="fold")
+
+        table.add_row("Job Status", str(j_details_h["status"]))
+        table.add_row("Parameters", concat_string)
+        if j_details_h["jobType"] == "dockerAWS":
+            table.add_row("Command", str(j_details_h["command"]))
+        table.add_row("Revision", str(revision))
+        table.add_row("Nextflow Version", str(j_details_h.get("nextflowVersion", "None")))
+        table.add_row("Execution Platform", execution_platform)
+        table.add_row("Profile", str(j_details_h.get("profile", "None")))
+        table.add_row("Master Instance", str(j_details_h["masterInstance"]["usedInstance"]["type"]))
+        if j_details_h["jobType"] == "nextflowAzure":
+            try:
+                table.add_row("Worker Node", str(j_details_h["azureBatch"]["vmType"]))
+            except KeyError:
+                table.add_row("Worker Node", "Not Specified")
+        table.add_row("Storage", str(j_details_h["storageSizeInGb"]) + " GB")
+        if j_details_h["jobType"] != "nextflowAzure":
+            try:
+                table.add_row("Job Queue ID", str(j_details_h["batch"]["jobQueue"]["name"]))
+                table.add_row("Job Queue Name", str(j_details_h["batch"]["jobQueue"]["label"]))
+            except KeyError:
+                table.add_row("Job Queue", "Master Node")
+        table.add_row("Accelerated File Staging", str(j_details_h.get("usesFusionFileSystem", "None")))
+        table.add_row("Task Resources", f"{str(j_details_h['resourceRequirements']['cpu'])} CPUs, " +
+                      f"{str(j_details_h['resourceRequirements']['ram'])} GB RAM")
+
+        console.print(table)
+    else:
+        # Create a JSON object with the key-value pairs
+        job_details_json = {
+            "Job Status": str(j_details_h["status"]),
+            "Parameters": ','.join(concat_string.split()),
+            "Revision": str(revision),
+            "Nextflow Version": str(j_details_h.get("nextflowVersion", "None")),
+            "Execution Platform": execution_platform,
+            "Profile": str(j_details_h.get("profile", "None")),
+            "Master Instance": str(j_details_h["masterInstance"]["usedInstance"]["type"]),
+            "Storage": str(j_details_h["storageSizeInGb"]) + " GB",
+            "Accelerated File Staging": str(j_details_h.get("usesFusionFileSystem", "None")),
+            "Task Resources": f"{str(j_details_h['resourceRequirements']['cpu'])} CPUs, " +
+                              f"{str(j_details_h['resourceRequirements']['ram'])} GB RAM"
+
+        }
+
+        # Conditionally add the "Command" key if the jobType is "dockerAWS"
+        if j_details_h["jobType"] == "dockerAWS":
+            job_details_json["Command"] = str(j_details_h["command"])
+
+        # Conditionally add the "Job Queue" key if the jobType is not "nextflowAzure"
+        if j_details_h["jobType"] != "nextflowAzure":
+            try:
+                job_details_json["Job Queue ID"] = str(j_details_h["batch"]["jobQueue"]["name"])
+                job_details_json["Job Queue Name"] = str(j_details_h["batch"]["jobQueue"]["label"])
+            except KeyError:
+                job_details_json["Job Queue"] = "Master Node"
+
+        if j_details_h["jobType"] == "nextflowAzure":
+            try:
+                job_details_json["Worker Node"] = str(j_details_h["azureBatch"]["vmType"])
+            except KeyError:
+                job_details_json["Worker Node"] = "Not Specified"
+
+        # Write the JSON object to a file
+        with open(f"{output_basename}.json", "w") as json_file:
+            json.dump(job_details_json, json_file, indent=4, ensure_ascii=False)
+        print(f"\tJob details have been saved to '{output_basename}.json'")
+
+
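
The `param_kind_map` lookup above flattens every parameter into a `prefix + name = value` string, with `dataItem` parameters resolved through their nested structure. A minimal sketch of that resolution against a made-up API payload (all values are illustrative):

```python
# Made-up job parameters shaped like the API response.
parameters = [
    {"prefix": "--", "name": "genome", "parameterKind": "textValue",
     "textValue": "GRCh38"},
    {"prefix": "--", "name": "samples", "parameterKind": "dataItem",
     "dataItem": {"item": {"name": "samplesheet.csv"}}},
]

param_kind_map = {
    'textValue': 'textValue',
    'arrayFileColumn': 'columnName',
    'globPattern': 'globPattern',
    'lustreFileSystem': 'fileSystem',
}

concats = []
for param in parameters:
    if param['parameterKind'] == 'dataItem':
        # dataItem parameters keep their value in a nested structure
        value = param['dataItem']['item']['name']
    else:
        # other kinds store the value under the key named by param_kind_map
        value = param[param_kind_map[param['parameterKind']]]
    concats.append(f"{param['prefix']}{param['name']}={value}")

print('\n'.join(concats))  # --genome=GRCh38 then --samples=samplesheet.csv
```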
 @job.command('list')
 @click.option('-k',
               '--apikey',
@@ -1029,10 +1250,9 @@ def list_workflows(ctx,
 @click.option('--workspace-id',
               help='The specific CloudOS workspace id.',
               required=True)
-@click.option("--platform", type=click.Choice(["github", "gitlab"]),
-              help=('Repository service where the workflow is located. Valid choices: github, gitlab. ' +
-                    'Default=github'),
-              default="github")
+@click.option('--repository-platform', type=click.Choice(["github", "gitlab", "bitbucketServer"]),
+              help='Name of the repository platform of the workflow. Default=github.',
+              default='github')
 @click.option("--workflow-name", help="The name that the workflow will have in CloudOS.", required=True)
 @click.option("-w", "--workflow-url", help="URL of the workflow repository.", required=True)
 @click.option("-d", "--workflow-docs-link", help="URL to the documentation of the workflow.", default='')
@@ -1055,7 +1275,7 @@ def import_wf(ctx,
               workflow_docs_link,
               cost_limit,
               workflow_description,
-              platform,
+              repository_platform,
               disable_ssl_verification,
               ssl_cert,
               profile):
@@ -1082,15 +1302,14 @@ def import_wf(ctx,
             apikey=apikey,
             cloudos_url=cloudos_url,
             workspace_id=workspace_id,
-            workflow_name=workflow_name
+            workflow_name=workflow_name,
+            repository_platform=repository_platform
         )
     )
 
     verify_ssl = ssl_selector(disable_ssl_verification, ssl_cert)
-    repo_services = {"gitlab": ImportGitlab, "github": ImportGithub}
-    repo_cls = repo_services[platform]
-    repo_import = repo_cls(cloudos_url=cloudos_url, cloudos_apikey=apikey, workspace_id=workspace_id,
-                           platform=platform, workflow_name=workflow_name, workflow_url=workflow_url,
+    repo_import = ImportWorflow(cloudos_url=cloudos_url, cloudos_apikey=apikey, workspace_id=workspace_id,
+                                platform=repository_platform, workflow_name=workflow_name, workflow_url=workflow_url,
                                 workflow_docs_link=workflow_docs_link, cost_limit=cost_limit, workflow_description=workflow_description, verify=verify_ssl)
     workflow_id = repo_import.import_workflow()
     print(f'\tWorkflow {workflow_name} was imported successfully with the ' +
@@ -1644,7 +1863,7 @@ def remove_profile(ctx, profile):
               help=('Max time to wait (in seconds) to job completion. ' +
                     'Default=3600.'),
               default=3600)
-@click.option('--repository-platform',
+@click.option('--repository-platform', type=click.Choice(["github", "gitlab", "bitbucketServer"]),
               help='Name of the repository platform of the workflow. Default=github.',
               default='github')
 @click.option('--execution-platform',
@@ -1942,7 +2161,8 @@ def list_files(ctx,
         console = Console()
         for item in contents:
             name = item.get("name", "")
-            if item.get("isDir"):
+            is_folder = item.get("folderType") or item.get("isDir")
+            if is_folder:
                 console.print(f"[blue underline]{name}[/]")
             else:
                 console.print(name)
@@ -1950,5 +2170,154 @@
     except Exception as e:
         click.echo(f"[ERROR] {str(e)}", err=True)
 
+
+@datasets.command(name="mv")
+@click.argument("source_path", required=True)
+@click.argument("destination_path", required=True)
+@click.option('-k', '--apikey', required=True, help='Your CloudOS API key.')
+@click.option('-c', '--cloudos-url', default=CLOUDOS_URL, required=False, help='The CloudOS URL.')
+@click.option('--workspace-id', required=True, help='The CloudOS workspace ID.')
+@click.option('--project-name', required=True, help='The source project name.')
+@click.option('--destination-project-name', required=False, help='The destination project name. Defaults to the source project.')
+@click.option('--disable-ssl-verification', is_flag=True, help='Disable SSL certificate verification.')
+@click.option('--ssl-cert', help='Path to your SSL certificate file.')
+@click.option('--profile', default=None, help='Profile to use from the config file.')
+@click.pass_context
+def move_files(ctx, source_path, destination_path, apikey, cloudos_url, workspace_id,
+               project_name, destination_project_name,
+               disable_ssl_verification, ssl_cert, profile):
+    """
+    Move a file or folder from a source path to a destination path within or across CloudOS projects.
+
+    SOURCE_PATH [path] : the full path to the file or folder to move. It must be a 'Data' folder path. E.g.: 'Data/folderA/file.txt'\n
+    DESTINATION_PATH [path]: the full path to the destination folder. It must be a 'Data' folder path. E.g.: 'Data/folderB'
+    """
+
+    profile = profile or ctx.default_map['datasets']['mv'].get('profile')
+    destination_project_name = destination_project_name or project_name
+
+    # Validate destination constraint
+    if not destination_path.strip("/").startswith("Data/") and destination_path.strip("/") != "Data":
+        click.echo("[ERROR] Destination path must begin with 'Data/' or be 'Data'.", err=True)
+        sys.exit(1)
+    if not source_path.strip("/").startswith("Data/") and source_path.strip("/") != "Data":
+        click.echo("[ERROR] SOURCE_PATH must start with 'Data/' or be 'Data'.", err=True)
+        sys.exit(1)
+    click.echo('Loading configuration profile')
+    # Load configuration profile
+    config_manager = ConfigurationProfile()
+    required_dict = {
+        'apikey': True,
+        'workspace_id': True,
+        'workflow_name': False,
+        'project_name': True
+    }
+
+    apikey, cloudos_url, workspace_id, workflow_name, repository_platform, execution_platform, project_name = (
+        config_manager.load_profile_and_validate_data(
+            ctx,
+            INIT_PROFILE,
+            CLOUDOS_URL,
+            profile=profile,
+            required_dict=required_dict,
+            apikey=apikey,
+            cloudos_url=cloudos_url,
+            workspace_id=workspace_id,
+            workflow_name=None,
+            repository_platform=None,
+            execution_platform=None,
+            project_name=project_name
+        )
+    )
+
+    verify_ssl = ssl_selector(disable_ssl_verification, ssl_cert)
+    # Initialize Datasets clients
+    source_client = Datasets(
+        cloudos_url=cloudos_url,
+        apikey=apikey,
+        workspace_id=workspace_id,
+        project_name=project_name,
+        verify=verify_ssl,
+        cromwell_token=None
+    )
+
+    dest_client = Datasets(
+        cloudos_url=cloudos_url,
+        apikey=apikey,
+        workspace_id=workspace_id,
+        project_name=destination_project_name,
+        verify=verify_ssl,
+        cromwell_token=None
+    )
+    click.echo('Checking source path')
+    # === Resolve Source Item ===
+    source_parts = source_path.strip("/").split("/")
+    source_parent_path = "/".join(source_parts[:-1]) if len(source_parts) > 1 else None
+    source_item_name = source_parts[-1]
+
+    try:
+        source_contents = source_client.list_folder_content(source_parent_path)
+    except Exception as e:
+        click.echo(f"[ERROR] Could not resolve source path '{source_path}': {str(e)}", err=True)
+        sys.exit(1)
+
+    found_source = None
+    for collection in ["files", "folders"]:
+        for item in source_contents.get(collection, []):
+            if item.get("name") == source_item_name:
+                found_source = item
+                break
+        if found_source:
+            break
+    if not found_source:
+        click.echo(f"[ERROR] Item '{source_item_name}' not found in '{source_parent_path or '[project root]'}'", err=True)
+        sys.exit(1)
+
+    source_id = found_source["_id"]
+    source_kind = "Folder" if "folderType" in found_source else "File"
+    click.echo("Checking destination path")
+    # === Resolve Destination Folder ===
+    dest_parts = destination_path.strip("/").split("/")
+    dest_folder_name = dest_parts[-1]
+    dest_parent_path = "/".join(dest_parts[:-1]) if len(dest_parts) > 1 else None
+
+    try:
+        dest_contents = dest_client.list_folder_content(dest_parent_path)
+        match = next((f for f in dest_contents.get("folders", []) if f.get("name") == dest_folder_name), None)
+        if not match:
+            raise ValueError(f"Could not resolve destination folder '{destination_path}'")
+
+        target_id = match["_id"]
+        folder_type = match.get("folderType")
+        # Normalize kind: top-level datasets are kind=Dataset, all other folders are kind=Folder
+        if folder_type in ("VirtualFolder", "S3Folder", "Folder"):
+            target_kind = "Folder"
+        elif isinstance(folder_type, bool) and folder_type:  # legacy dataset structure
+            target_kind = "Dataset"
+        else:
+            raise ValueError(f"Unrecognized folderType '{folder_type}' for destination '{destination_path}'")
+
+    except Exception as e:
+        click.echo(f"[ERROR] Could not resolve destination path '{destination_path}': {str(e)}", err=True)
+        sys.exit(1)
+    click.echo(f"Moving {source_kind} '{source_item_name}' to '{destination_path}' in project '{destination_project_name}' ...")
+    # === Perform Move ===
+    try:
+        response = source_client.move_files_and_folders(
+            source_id=source_id,
+            source_kind=source_kind,
+            target_id=target_id,
+            target_kind=target_kind
+        )
+        if response.ok:
+            click.secho(f"[SUCCESS] {source_kind} '{source_item_name}' moved to '{destination_path}' in project '{destination_project_name}'.", fg="green", bold=True)
+        else:
+            click.echo(f"[ERROR] Move failed: {response.status_code} - {response.text}", err=True)
+            sys.exit(1)
+    except Exception as e:
+        click.echo(f"[ERROR] Move operation failed: {str(e)}", err=True)
+        sys.exit(1)
+
+
 if __name__ == "__main__":
     run_cloudos_cli()
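
Both `mv` arguments are resolved the same way: everything before the last `/` is listed with `list_folder_content` and the final component is matched by name. A minimal sketch of just the path split, using only the standard library:

```python
def split_data_path(path: str):
    """Split a 'Data/...' path into (parent_path, item_name), mirroring
    the resolution performed by `cloudos datasets mv`."""
    parts = path.strip("/").split("/")
    parent = "/".join(parts[:-1]) if len(parts) > 1 else None
    return parent, parts[-1]

print(split_data_path("Data/results/my_plot.png"))  # ('Data/results', 'my_plot.png')
print(split_data_path("Data"))                      # (None, 'Data')
```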
@@ -0,0 +1 @@
+__version__ = '2.30.0'
@@ -5,8 +5,8 @@ This is the main class for file explorer (datasets).
 from dataclasses import dataclass
 from typing import Union
 from cloudos_cli.clos import Cloudos
-from cloudos_cli.utils.requests import retry_requests_get
-
+from cloudos_cli.utils.requests import retry_requests_get, retry_requests_put
+import json
 
 @dataclass
 class Datasets(Cloudos):
@@ -151,11 +151,15 @@ class Datasets(Cloudos):
                                                       self.workspace_id),
                                 headers=headers, verify=self.verify)
         raw = r.json()
-
+        datasets = raw.get("datasets", [])
         # Normalize response
-        for item in raw.get("datasets", []):
+        for item in datasets:
             item["folderType"] = True
-        return raw
+        response = {
+            "folders": datasets,
+            "files": []
+        }
+        return response
 
     def list_datasets_content(self, folder_name):
         """Uses
@@ -182,7 +186,7 @@ class Datasets(Cloudos):
         if folder_name == 'AnalysesResults':
             folder_name = 'Analyses Results'
 
-        for folder in pro_fol.get("datasets", []):
+        for folder in pro_fol.get("folders", []):
             if folder['name'] == folder_name:
                 folder_id = folder['_id']
         if not folder_id:
@@ -262,6 +266,7 @@ class Datasets(Cloudos):
                                                       self.workspace_id),
                                 headers=headers, verify=self.verify)
         return r.json()
+
     def list_folder_content(self, path=None):
         """
         Wrapper to list contents of a CloudOS folder.
@@ -325,4 +330,40 @@ class Datasets(Cloudos):
         if not found:
             raise ValueError(f"Folder '{job_name}' not found under dataset '{dataset_name}'")
 
-        return folder_content
+        return folder_content
+
+    def move_files_and_folders(self, source_id: str, source_kind: str, target_id: str, target_kind: str):
+        """
+        Move a file or folder into another folder or dataset in CloudOS.
+
+        Parameters
+        ----------
+        source_id : str
+            The ID of the file or folder to move; source_kind is 'File' or 'Folder'.
+
+        target_id : str
+            The ID of the target parent; target_kind is 'Folder' or 'Dataset'.
+
+        Returns
+        -------
+        response : requests.Response
+            The response object from the CloudOS API.
+        """
+        url = f"{self.cloudos_url}/api/v1/dataItems/move?teamId={self.workspace_id}"
+        headers = {
+            "accept": "application/json",
+            "content-type": "application/json",
+            "ApiKey": self.apikey
+        }
+        payload = {
+            "dataItemToMove": {
+                "kind": source_kind,
+                "item": source_id
+            },
+            "toDataItemParent": {
+                "kind": target_kind,
+                "item": target_id
+            }
+        }
+        response = retry_requests_put(url, headers=headers, data=json.dumps(payload), verify=self.verify)
+        return response
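
For reference, a minimal sketch of calling the new method directly. All identifiers below are hypothetical placeholders; in the CLI the IDs come from `list_folder_content` lookups, as shown in `__main__.py`:

```python
from cloudos_cli.datasets import Datasets

# Hypothetical workspace and project; in practice these come from a profile.
client = Datasets(
    cloudos_url="https://cloudos.lifebit.ai",
    apikey="MY_API_KEY",
    workspace_id="5c6d3e9bd954e800b23f8c62",
    project_name="my-project",
    verify=True,
    cromwell_token=None,
)

# Kinds follow the payload convention above: "File" or "Folder" for the
# item to move, "Folder" or "Dataset" for the new parent.
response = client.move_files_and_folders(
    source_id="64a0f00000000000000000f1",   # hypothetical file _id
    source_kind="File",
    target_id="64a0f00000000000000000d2",   # hypothetical folder _id
    target_kind="Folder",
)
print(response.status_code)
```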
@@ -2,7 +2,7 @@
 Functions and classes related to importing workflows into CloudOS.
 """
 
-from .import_wf import WFImport, ImportGitlab, ImportGithub
+from .import_wf import WFImport, ImportWorflow
 
 
 __all__ = ['import_wf']
@@ -3,6 +3,8 @@ from urllib.parse import urlsplit
 from cloudos_cli.utils.errors import BadRequestException, AccountNotLinkedException
 from cloudos_cli.utils.requests import retry_requests_post, retry_requests_get
 import json
+from requests.exceptions import RetryError
+import sys
 
 
 class WFImport(ABC):
@@ -96,44 +98,46 @@ class WFImport(ABC):
         return content["_id"]
 
 
-# There are some duplicated lines here and on the github subclass. I did not put them in the abstract class because we
-# still don't know if the bitbucket data will come the same. If it does, then I will put as much as possible as part
-# of the abstract class
-class ImportGitlab(WFImport):
+class ImportWorflow(WFImport):
     def get_repo(self):
-        get_repo_url = f"{self.cloudos_url}/api/v1/git/gitlab/getPublicRepo"
-        self.repo_name = self.parsed_url.path.split("/")[-1]
-        self.repo_owner = "/".join(self.parsed_url.path.split("/")[1:-1])
+        get_repo_url = f"{self.cloudos_url}/api/v1/git/{self.platform}/getPublicRepo"
+        if self.platform == "bitbucketServer":
+            # the platform allows extra path segments like /browse, so check whether the path ends with it
+            if self.parsed_url.path.endswith("browse"):
+                self.repo_name = self.parsed_url.path.split("/")[-2]
+            else:
+                self.repo_name = self.parsed_url.path.split("/")[-1]
+            self.repo_owner = self.parsed_url.path.split("/")[2]
+        else:
+            self.repo_name = self.parsed_url.path.split("/")[-1]
+            self.repo_owner = "/".join(self.parsed_url.path.split("/")[1:-1])
         self.repo_host = f"{self.parsed_url.scheme}://{self.parsed_url.netloc}"
         get_repo_params = dict(repoName=self.repo_name, repoOwner=self.repo_owner, host=self.repo_host, teamId=self.workspace_id)
-        r = retry_requests_get(get_repo_url, params=get_repo_params, headers=self.headers)
-        if r.status_code == 404:
+        try:
+            r = retry_requests_get(get_repo_url, params=get_repo_params, headers=self.headers)
+        except RetryError as e:
+            # a RetryError results from missing Bitbucket Server credentials
             raise AccountNotLinkedException(self.workflow_url)
-        elif r.status_code >= 400:
-            raise BadRequestException(r)
-        r_data = r.json()
-        self.payload["repository"]["repositoryId"] = r_data["id"]
-        self.payload["repository"]["name"] = r_data["name"]
-        self.payload["repository"]["owner"]["id"] = r_data["namespace"]["id"]
-        self.payload["repository"]["owner"]["login"] = r_data["namespace"]["full_path"]
-        self.payload["mainFile"] = self.main_file or self.get_repo_main_file()
 
-
-class ImportGithub(WFImport):
-    def get_repo(self):
-        get_repo_url = f"{self.cloudos_url}/api/v1/git/github/getPublicRepo"
-        self.repo_name = self.parsed_url.path.split("/")[-1]
-        self.repo_owner = "/".join(self.parsed_url.path.split("/")[1:-1])
-        self.repo_host = f"{self.parsed_url.scheme}://{self.parsed_url.netloc}"
-        get_repo_params = dict(repoName=self.repo_name, repoOwner=self.repo_owner, host=self.repo_host, teamId=self.workspace_id)
-        r = retry_requests_get(get_repo_url, params=get_repo_params, headers=self.headers)
+        # for GitHub and GitLab the API gives very general errors on missing credentials,
+        # therefore we only have these checks at the moment
         if r.status_code == 404:
             raise AccountNotLinkedException(self.workflow_url)
         elif r.status_code >= 400:
             raise BadRequestException(r)
+
         r_data = r.json()
-        self.payload["repository"]["repositoryId"] = r_data["id"]
+        if self.platform == "bitbucketServer":
+            self.payload["repository"]["repositoryId"] = r_data["name"]
+        else:
+            self.payload["repository"]["repositoryId"] = r_data["id"]
         self.payload["repository"]["name"] = r_data["name"]
-        self.payload["repository"]["owner"]["id"] = r_data["owner"]["id"]
-        self.payload["repository"]["owner"]["login"] = r_data["owner"]["login"]
+        owner_data = {
+            "bitbucketServer": ("project", "id", "key"),
+            "gitlab": ("namespace", "id", "full_path"),
+            "github": ("owner", "id", "login")
+        }
+        key, id_field, login_field = owner_data[self.platform]
+        self.payload["repository"]["owner"]["id"] = r_data[key][id_field]
+        self.payload["repository"]["owner"]["login"] = r_data[key][login_field]
         self.payload["mainFile"] = self.main_file or self.get_repo_main_file()
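
The `owner_data` table encodes, per platform, which key of the repository payload holds the owner object and which of its fields supply the owner `id` and `login`. A minimal sketch with made-up payloads shaped like each platform's response (values are illustrative):

```python
owner_data = {
    "bitbucketServer": ("project", "id", "key"),
    "gitlab": ("namespace", "id", "full_path"),
    "github": ("owner", "id", "login"),
}

# Made-up repository payloads, one per platform.
samples = {
    "github": {"owner": {"id": 42, "login": "lifebit-ai"}},
    "gitlab": {"namespace": {"id": 7, "full_path": "group/subgroup"}},
    "bitbucketServer": {"project": {"id": 99, "key": "LIFE"}},
}

for platform, r_data in samples.items():
    key, id_field, login_field = owner_data[platform]
    print(platform, r_data[key][id_field], r_data[key][login_field])
```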
@@ -1,6 +1,6 @@
 Metadata-Version: 2.4
 Name: cloudos_cli
-Version: 2.27.0
+Version: 2.30.0
 Summary: Python package for interacting with CloudOS
 Home-page: https://github.com/lifebit-ai/cloudos-cli
 Author: David Piñeyro
@@ -535,6 +535,69 @@ Executing status...
 To further check your job status you can either go to https://cloudos.lifebit.ai/app/advanced-analytics/analyses/62c83a1191fe06013b7ef355 or repeat the command you just used.
 ```
 
+#### Check job details
+
+To check the details of a submitted job, the `details` subcommand of `job` can be used.
+
+For example, with explicit variables for the required parameters:
+
+```bash
+cloudos job details \
+    --apikey $MY_API_KEY \
+    --job-id 62c83a1191fe06013b7ef355
+```
+
+Or with a defined profile:
+
+```bash
+cloudos job details \
+    --profile job-details \
+    --job-id 62c83a1191fe06013b7ef355
+```
+
+When using the defaults, the details are displayed in the standard output console, and the expected output should be similar to:
+
+```console
+Executing details...
+                                             Job Details
+┏━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓
+┃ Field                    ┃ Value                                                                   ┃
+┡━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┩
+│ Parameters               │ -test=value                                                             │
+│                          │ --gaq=test                                                              │
+│                          │ cryo=yes                                                                │
+│ Command                  │ echo 'test' > new_file.txt                                              │
+│ Revision                 │ sha256:6015f66923d7afbc53558d7ccffd325d43b4e249f41a6e93eef074c9505d2233 │
+│ Nextflow Version         │ None                                                                    │
+│ Execution Platform       │ Batch AWS                                                               │
+│ Profile                  │ None                                                                    │
+│ Master Instance          │ c5.xlarge                                                               │
+│ Storage                  │ 500                                                                     │
+│ Job Queue                │ nextflow-job-queue-5c6d3e9bd954e800b23f8c62-feee                        │
+│ Accelerated File Staging │ None                                                                    │
+│ Task Resources           │ 1 CPUs, 4 GB RAM                                                        │
+└──────────────────────────┴─────────────────────────────────────────────────────────────────────────┘
+```
+
+To change this behaviour and save the details as a local JSON file, set the parameter `--output-format` to `--output-format=json`.
+
+By default, all details are saved in a file with the basename `job_details`, for example `job_details.json` or `job_details.config`. This can be changed with the parameter `--output-basename=new_filename`.
+
+The `details` subcommand can also take the `--parameters` flag, which will create a new `*.config` file that holds all parameters as a Nextflow configuration file, for example:
+
+```console
+params {
+    parameter_one = value_one
+    parameter_two = value_two
+    parameter_three = value_three
+}
+```
+
+This file can later be used when running a job with `cloudos job run --job-config job_details.config ...`.
+
+> [!NOTE]
+> Job details can only be retrieved for your own jobs; you cannot see other users' job details.
+
 #### Get a list of your jobs from a CloudOS workspace
 
 You can get a summary of your last 30 submitted jobs (or your selected number of last jobs using `--last-n-jobs n`
@@ -632,8 +695,8 @@ The collected workflows are those that can be found in "WORKSPACE TOOLS" section
 You can import new workflows to your CloudOS workspaces. The only requirements are:
 
 - The workflow is a Nextflow pipeline.
-- The workflow repository is located at GitHub or GitLab (specified by the option `--platform`. Available options: `github`, `gitlab`)
-- If your repository is private, you have access to the repository and you have linked your GitHub or Bitbucket server accounts to CloudOS.
+- The workflow repository is located at GitHub, GitLab or Bitbucket Server (specified by the option `--repository-platform`. Available options: `github`, `gitlab` and `bitbucketServer`)
+- If your repository is private, you have access to the repository and have linked your GitHub, GitLab or Bitbucket Server account to CloudOS.
 
 #### Usage of the workflow import command
 
@@ -649,7 +712,7 @@ cloudos workflow import \
     --workspace-id $WORKSPACE_ID \
     --workflow-url $WORKFLOW_URL \
     --workflow-name "new_name_for_the_github_workflow" \
-    --platform github
+    --repository-platform github
 ```
 
 The expected output will be:
@@ -674,7 +737,7 @@ cloudos workflow import \
     --workflow-url $WORKFLOW_URL \
     --workflow-name "new_name_for_the_github_workflow" \
    --workflow-docs-link "https://github.com/lifebit-ai/DeepVariant/blob/master/README.md" \
-    --platform github
+    --repository-platform github
 ```
 
 > NOTE: please, take into account that importing workflows using cloudos-cli is not yet available in all the CloudOS workspaces. If you try to use this feature in a non-prepared workspace you will get the following error message: `It seems your API key is not authorised. Please check if your workspace has support for importing workflows using cloudos-cli`.
@@ -771,6 +834,36 @@ If you require more information on the files and folder listed, you can use the
 - Filepath (the file or folder name)
 - S3 Path
 
+##### Moving files
+
+Files and folders can be moved **from** `Data` or any of its subfolders (e.g. `Data`, `Data/folder/file.txt`) **to** `Data` or any of its subfolders programmatically.
+
+1. The move can happen **within the same project** by running the following command:
+```bash
+cloudos datasets mv <source_path> <destination_path> --profile <profile_name>
+```
+where both the source and the destination project are the one defined in the profile.
+
+2. The move can also happen **across different projects** within the same workspace by running the following command:
+```bash
+cloudos datasets mv <source_path> <destination_path> --profile <profile_name> --destination-project-name <project_name>
+```
+In this case, only the source project is the one specified in the profile.
+
+The `source_path` must be a full path, starting from the `Data` dataset and including its folders; the `destination_path` must be a path starting with `Data` and ending with the folder the file/folder is moved to. An example of such a command is:
+
+```bash
+cloudos datasets mv Data/results/my_plot.png Data/plots
+```
+
+Please note that in the above example a preconfigured profile has been used. If no profile is provided and there is no default profile, the user will also need to provide the following flags:
+```bash
+--cloudos-url $CLOUDOS \
+--apikey $MY_API_KEY \
+--workspace-id $WORKSPACE_ID \
+--project-name $PROJECT_NAME
+```
+
 ### WDL pipeline support
 
 #### Cromwell server managing
@@ -1 +0,0 @@
-__version__ = '2.27.0'