cloudos-cli 2.26.1__tar.gz → 2.29.0__tar.gz

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (32)
  1. {cloudos_cli-2.26.1 → cloudos_cli-2.29.0}/PKG-INFO +104 -1
  2. {cloudos_cli-2.26.1 → cloudos_cli-2.29.0}/README.md +103 -0
  3. {cloudos_cli-2.26.1 → cloudos_cli-2.29.0}/cloudos_cli/__main__.py +443 -50
  4. cloudos_cli-2.29.0/cloudos_cli/_version.py +1 -0
  5. {cloudos_cli-2.26.1 → cloudos_cli-2.29.0}/cloudos_cli/datasets/datasets.py +52 -5
  6. {cloudos_cli-2.26.1 → cloudos_cli-2.29.0}/cloudos_cli/utils/__init__.py +2 -2
  7. cloudos_cli-2.29.0/cloudos_cli/utils/resources.py +46 -0
  8. {cloudos_cli-2.26.1 → cloudos_cli-2.29.0}/cloudos_cli.egg-info/PKG-INFO +104 -1
  9. {cloudos_cli-2.26.1 → cloudos_cli-2.29.0}/cloudos_cli.egg-info/SOURCES.txt +1 -0
  10. cloudos_cli-2.26.1/cloudos_cli/_version.py +0 -1
  11. {cloudos_cli-2.26.1 → cloudos_cli-2.29.0}/LICENSE +0 -0
  12. {cloudos_cli-2.26.1 → cloudos_cli-2.29.0}/cloudos_cli/__init__.py +0 -0
  13. {cloudos_cli-2.26.1 → cloudos_cli-2.29.0}/cloudos_cli/clos.py +0 -0
  14. {cloudos_cli-2.26.1 → cloudos_cli-2.29.0}/cloudos_cli/configure/__init__.py +0 -0
  15. {cloudos_cli-2.26.1 → cloudos_cli-2.29.0}/cloudos_cli/configure/configure.py +0 -0
  16. {cloudos_cli-2.26.1 → cloudos_cli-2.29.0}/cloudos_cli/datasets/__init__.py +0 -0
  17. {cloudos_cli-2.26.1 → cloudos_cli-2.29.0}/cloudos_cli/import_wf/__init__.py +0 -0
  18. {cloudos_cli-2.26.1 → cloudos_cli-2.29.0}/cloudos_cli/import_wf/import_wf.py +0 -0
  19. {cloudos_cli-2.26.1 → cloudos_cli-2.29.0}/cloudos_cli/jobs/__init__.py +0 -0
  20. {cloudos_cli-2.26.1 → cloudos_cli-2.29.0}/cloudos_cli/jobs/job.py +0 -0
  21. {cloudos_cli-2.26.1 → cloudos_cli-2.29.0}/cloudos_cli/queue/__init__.py +0 -0
  22. {cloudos_cli-2.26.1 → cloudos_cli-2.29.0}/cloudos_cli/queue/queue.py +0 -0
  23. {cloudos_cli-2.26.1 → cloudos_cli-2.29.0}/cloudos_cli/utils/errors.py +0 -0
  24. {cloudos_cli-2.26.1 → cloudos_cli-2.29.0}/cloudos_cli/utils/requests.py +0 -0
  25. {cloudos_cli-2.26.1 → cloudos_cli-2.29.0}/cloudos_cli.egg-info/dependency_links.txt +0 -0
  26. {cloudos_cli-2.26.1 → cloudos_cli-2.29.0}/cloudos_cli.egg-info/entry_points.txt +0 -0
  27. {cloudos_cli-2.26.1 → cloudos_cli-2.29.0}/cloudos_cli.egg-info/requires.txt +0 -0
  28. {cloudos_cli-2.26.1 → cloudos_cli-2.29.0}/cloudos_cli.egg-info/top_level.txt +0 -0
  29. {cloudos_cli-2.26.1 → cloudos_cli-2.29.0}/setup.cfg +0 -0
  30. {cloudos_cli-2.26.1 → cloudos_cli-2.29.0}/setup.py +0 -0
  31. {cloudos_cli-2.26.1 → cloudos_cli-2.29.0}/tests/__init__.py +0 -0
  32. {cloudos_cli-2.26.1 → cloudos_cli-2.29.0}/tests/functions_for_pytest.py +0 -0
@@ -1,6 +1,6 @@
  Metadata-Version: 2.4
  Name: cloudos_cli
- Version: 2.26.1
+ Version: 2.29.0
  Summary: Python package for interacting with CloudOS
  Home-page: https://github.com/lifebit-ai/cloudos-cli
  Author: David Piñeyro
@@ -535,6 +535,69 @@ Executing status...
  To further check your job status you can either go to https://cloudos.lifebit.ai/app/advanced-analytics/analyses/62c83a1191fe06013b7ef355 or repeat the command you just used.
  ```

+ #### Check job details
+
+ To check the details of a submitted job, use the `details` subcommand of `job`.
+
+ For example, with explicit variables for the required parameters:
+
+ ```bash
+ cloudos job details \
+     --apikey $MY_API_KEY \
+     --job-id 62c83a1191fe06013b7ef355
+ ```
+
+ Or with a defined profile:
+
+ ```bash
+ cloudos job details \
+     --profile job-details \
+     --job-id 62c83a1191fe06013b7ef355
+ ```
+
+ When using the defaults, the details are displayed in the standard output console, and the expected output should be similar to:
+
+ ```console
+ Executing details...
+                                Job Details
+ ┏━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓
+ ┃ Field                    ┃ Value                                                                   ┃
+ ┡━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┩
+ │ Parameters               │ -test=value                                                             │
+ │                          │ --gaq=test                                                              │
+ │                          │ cryo=yes                                                                │
+ │ Command                  │ echo 'test' > new_file.txt                                              │
+ │ Revision                 │ sha256:6015f66923d7afbc53558d7ccffd325d43b4e249f41a6e93eef074c9505d2233 │
+ │ Nextflow Version         │ None                                                                    │
+ │ Execution Platform       │ Batch AWS                                                               │
+ │ Profile                  │ None                                                                    │
+ │ Master Instance          │ c5.xlarge                                                               │
+ │ Storage                  │ 500                                                                     │
+ │ Job Queue                │ nextflow-job-queue-5c6d3e9bd954e800b23f8c62-feee                        │
+ │ Accelerated File Staging │ None                                                                    │
+ │ Task Resources           │ 1 CPUs, 4 GB RAM                                                        │
+ └──────────────────────────┴─────────────────────────────────────────────────────────────────────────┘
+ ```
+
+ To change this behaviour and save the details to a local JSON file, set the parameter `--output-format=json`.
+
+ By default, all details are saved to a file with the basename `job_details`, for example `job_details.json` or `job_details.config`. This can be changed with the parameter `--output-basename=new_filename`.
+
+ The `details` subcommand can also take the `--parameters` flag, which creates a new `*.config` file holding all parameters as a Nextflow configuration file, for example:
+
+ ```console
+ params {
+     parameter_one = value_one
+     parameter_two = value_two
+     parameter_three = value_three
+ }
+ ```
+
+ This file can later be used when running a job with `cloudos job run --job-config job_details.config ...`.
+
+ > [!NOTE]
+ > Job details can only be retrieved for your own jobs; you cannot see other users' job details.
+

  #### Get a list of your jobs from a CloudOS workspace

  You can get a summary of your last 30 submitted jobs (or your selected number of last jobs using `--last-n-jobs n`
@@ -758,9 +821,49 @@ cloudos datasets ls <path> \
      --workspace-id $WORKSPACE_ID \
      --project-name $PROJECT_NAME
  ```
+
+
  The output of this command is a list of files and folders present in the specified project.
  If the `<path>` is left empty, the command will return the list of folders present in the selected project.

+ If you require more information on the files and folders listed, you can use the `--details` flag, which will output a table containing the following columns:
+ - Type (folder or file)
+ - Owner
+ - Size, in human-readable format
+ - Last updated
+ - Filepath (the file or folder name)
+ - S3 Path
+
+ ##### Moving files
+
+ Files and folders can be moved programmatically **from** `Data` or any of its subfolders (e.g. `Data`, `Data/folder/file.txt`) **to** `Data` or any of its subfolders.
+
+ 1. The move can happen **within the same project** by running the following command:
+    ```
+    cloudos datasets mv <source_path> <destination_path> --profile <profile_name>
+    ```
+    where both the source and the destination project are the one defined in the profile.
+
+ 2. The move can also happen **across different projects** within the same workspace by running the following command:
+    ```
+    cloudos datasets mv <source_path> <destination_path> --profile <profile_name> --destination-project-name <project_name>
+    ```
+    In this case, only the source project is the one specified in the profile.
+
+ Any `source_path` must be a full path starting from the `Data` dataset and its folders; any `destination_path` must be a path starting with `Data` and ending with the folder the file/folder should be moved into. An example of such a command is:
+
+ ```
+ cloudos datasets mv Data/results/my_plot.png Data/plots
+ ```
+
+ Please note that the above example uses a preconfigured profile. If no profile is provided and there is no default profile, the user will also need to provide the following flags:
+ ```bash
+ --cloudos-url $CLOUDOS \
+ --apikey $MY_API_KEY \
+ --workspace-id $WORKSPACE_ID \
+ --project-name $PROJECT_NAME
+ ```
+
  ### WDL pipeline support

  #### Cromwell server managing
@@ -500,6 +500,69 @@ Executing status...
  To further check your job status you can either go to https://cloudos.lifebit.ai/app/advanced-analytics/analyses/62c83a1191fe06013b7ef355 or repeat the command you just used.
  ```

+ #### Check job details
+
+ To check the details of a submitted job, use the `details` subcommand of `job`.
+
+ For example, with explicit variables for the required parameters:
+
+ ```bash
+ cloudos job details \
+     --apikey $MY_API_KEY \
+     --job-id 62c83a1191fe06013b7ef355
+ ```
+
+ Or with a defined profile:
+
+ ```bash
+ cloudos job details \
+     --profile job-details \
+     --job-id 62c83a1191fe06013b7ef355
+ ```
+
+ When using the defaults, the details are displayed in the standard output console, and the expected output should be similar to:
+
+ ```console
+ Executing details...
+                                Job Details
+ ┏━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓
+ ┃ Field                    ┃ Value                                                                   ┃
+ ┡━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┩
+ │ Parameters               │ -test=value                                                             │
+ │                          │ --gaq=test                                                              │
+ │                          │ cryo=yes                                                                │
+ │ Command                  │ echo 'test' > new_file.txt                                              │
+ │ Revision                 │ sha256:6015f66923d7afbc53558d7ccffd325d43b4e249f41a6e93eef074c9505d2233 │
+ │ Nextflow Version         │ None                                                                    │
+ │ Execution Platform       │ Batch AWS                                                               │
+ │ Profile                  │ None                                                                    │
+ │ Master Instance          │ c5.xlarge                                                               │
+ │ Storage                  │ 500                                                                     │
+ │ Job Queue                │ nextflow-job-queue-5c6d3e9bd954e800b23f8c62-feee                        │
+ │ Accelerated File Staging │ None                                                                    │
+ │ Task Resources           │ 1 CPUs, 4 GB RAM                                                        │
+ └──────────────────────────┴─────────────────────────────────────────────────────────────────────────┘
+ ```
+
+ To change this behaviour and save the details to a local JSON file, set the parameter `--output-format=json`.
+
+ By default, all details are saved to a file with the basename `job_details`, for example `job_details.json` or `job_details.config`. This can be changed with the parameter `--output-basename=new_filename`.
+
+ The `details` subcommand can also take the `--parameters` flag, which creates a new `*.config` file holding all parameters as a Nextflow configuration file, for example:
+
+ ```console
+ params {
+     parameter_one = value_one
+     parameter_two = value_two
+     parameter_three = value_three
+ }
+ ```
+
+ This file can later be used when running a job with `cloudos job run --job-config job_details.config ...`.
+
+ > [!NOTE]
+ > Job details can only be retrieved for your own jobs; you cannot see other users' job details.
+

  #### Get a list of your jobs from a CloudOS workspace

  You can get a summary of your last 30 submitted jobs (or your selected number of last jobs using `--last-n-jobs n`
@@ -723,9 +786,49 @@ cloudos datasets ls <path> \
      --workspace-id $WORKSPACE_ID \
      --project-name $PROJECT_NAME
  ```
+
+
  The output of this command is a list of files and folders present in the specified project.
  If the `<path>` is left empty, the command will return the list of folders present in the selected project.

+ If you require more information on the files and folders listed, you can use the `--details` flag, which will output a table containing the following columns:
+ - Type (folder or file)
+ - Owner
+ - Size, in human-readable format
+ - Last updated
+ - Filepath (the file or folder name)
+ - S3 Path
+
+ ##### Moving files
+
+ Files and folders can be moved programmatically **from** `Data` or any of its subfolders (e.g. `Data`, `Data/folder/file.txt`) **to** `Data` or any of its subfolders.
+
+ 1. The move can happen **within the same project** by running the following command:
+    ```
+    cloudos datasets mv <source_path> <destination_path> --profile <profile_name>
+    ```
+    where both the source and the destination project are the one defined in the profile.
+
+ 2. The move can also happen **across different projects** within the same workspace by running the following command:
+    ```
+    cloudos datasets mv <source_path> <destination_path> --profile <profile_name> --destination-project-name <project_name>
+    ```
+    In this case, only the source project is the one specified in the profile.
+
+ Any `source_path` must be a full path starting from the `Data` dataset and its folders; any `destination_path` must be a path starting with `Data` and ending with the folder the file/folder should be moved into. An example of such a command is:
+
+ ```
+ cloudos datasets mv Data/results/my_plot.png Data/plots
+ ```
+
+ Please note that the above example uses a preconfigured profile. If no profile is provided and there is no default profile, the user will also need to provide the following flags:
+ ```bash
+ --cloudos-url $CLOUDOS \
+ --apikey $MY_API_KEY \
+ --workspace-id $WORKSPACE_ID \
+ --project-name $PROJECT_NAME
+ ```
+
  ### WDL pipeline support

  #### Cromwell server managing
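The `Data` path constraint described in the moving-files section above is enforced by the new `mv` command before any API call is made. A minimal, self-contained sketch of that validation, assuming the same `strip('/')`-based prefix check used in `__main__.py` (the function name `is_valid_dataset_path` is illustrative, not part of the package):

```python
def is_valid_dataset_path(path: str) -> bool:
    """Return True if path is 'Data' itself or lives under 'Data/'."""
    normalised = path.strip("/")          # tolerate leading/trailing slashes
    return normalised == "Data" or normalised.startswith("Data/")

# Both the source and the destination path must pass this check,
# otherwise the CLI prints an error and exits with status 1.
```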
@@ -5,14 +5,18 @@ import cloudos_cli.jobs.job as jb
  from cloudos_cli.clos import Cloudos
  from cloudos_cli.import_wf.import_wf import ImportGitlab, ImportGithub
  from cloudos_cli.queue.queue import Queue
+ from cloudos_cli.utils.errors import BadRequestException
  import json
  import time
  import sys
- import os
- import urllib3
  from ._version import __version__
  from cloudos_cli.configure.configure import ConfigurationProfile
+ from rich.console import Console
+ from rich.table import Table
  from cloudos_cli.datasets import Datasets
+ from cloudos_cli.utils.resources import ssl_selector, format_bytes
+ from rich.style import Style
+

  # GLOBAL VARS
  JOB_COMPLETED = 'completed'
@@ -27,39 +31,6 @@ ABORT_JOB_STATES = ['running', 'initializing']
  CLOUDOS_URL = 'https://cloudos.lifebit.ai'
  INIT_PROFILE = 'initialisingProfile'

-
- def ssl_selector(disable_ssl_verification, ssl_cert):
-     """Verify value selector.
-
-     This function stablish the value that will be passed to requests.verify
-     variable.
-
-     Parameters
-     ----------
-     disable_ssl_verification : bool
-         Whether to disable SSL verification.
-     ssl_cert : string
-         String indicating the path to the SSL certificate file to use.
-
-     Returns
-     -------
-     verify_ssl : [bool | string]
-         Either a bool or a path string to be passed to requests.verify to control
-         SSL verification.
-     """
-     if disable_ssl_verification:
-         verify_ssl = False
-         print('[WARNING] Disabling SSL verification')
-         urllib3.disable_warnings()
-     elif ssl_cert is None:
-         verify_ssl = True
-     elif os.path.isfile(ssl_cert):
-         verify_ssl = ssl_cert
-     else:
-         raise FileNotFoundError(f"The specified file '{ssl_cert}' was not found")
-     return verify_ssl
-
-
  @click.group()
  @click.version_option(__version__)
  @click.pass_context
@@ -89,6 +60,7 @@ def run_cloudos_cli(ctx):
              'abort': shared_config,
              'status': shared_config,
              'list': shared_config,
+             'details': shared_config
          },
          'workflow': {
              'list': shared_config,
@@ -109,7 +81,8 @@ def run_cloudos_cli(ctx):
              'job': shared_config
          },
          'datasets': {
-             'ls': shared_config
+             'ls': shared_config,
+             'mv': shared_config
          }
      })
  else:
@@ -130,6 +103,7 @@ def run_cloudos_cli(ctx):
              'abort': shared_config,
              'status': shared_config,
              'list': shared_config,
+             'details': shared_config
          },
          'workflow': {
              'list': shared_config,
@@ -150,7 +124,8 @@ def run_cloudos_cli(ctx):
              'job': shared_config
          },
          'datasets': {
-             'ls': shared_config
+             'ls': shared_config,
+             'mv': shared_config
          }
      })

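The `ssl_selector` helper removed from `__main__.py` above now lives in the new `cloudos_cli/utils/resources.py`, from which it is imported. A self-contained sketch of its documented behaviour follows (omitting the `urllib3` warning suppression the real function also performs when SSL verification is disabled):

```python
import os

def ssl_selector(disable_ssl_verification, ssl_cert):
    """Pick the value to pass to requests' `verify` argument."""
    if disable_ssl_verification:
        print('[WARNING] Disabling SSL verification')
        return False                  # skip certificate checks entirely
    if ssl_cert is None:
        return True                   # default: verify against system CAs
    if os.path.isfile(ssl_cert):
        return ssl_cert               # verify against a custom CA bundle
    raise FileNotFoundError(f"The specified file '{ssl_cert}' was not found")
```

The return value feeds directly into `requests`, which accepts either a boolean or a CA-bundle path for `verify`.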
@@ -726,6 +701,222 @@ def job_status(ctx,
            'or repeat the command you just used.')


+ @job.command('details')
+ @click.option('-k',
+               '--apikey',
+               help='Your CloudOS API key',
+               required=True)
+ @click.option('-c',
+               '--cloudos-url',
+               help=(f'The CloudOS url you are trying to access to. Default={CLOUDOS_URL}.'),
+               default=CLOUDOS_URL)
+ @click.option('--job-id',
+               help='The job id in CloudOS to search for.',
+               required=True)
+ @click.option('--output-format',
+               help='The desired display for the output, either directly in standard output or saved as file. Default=stdout.',
+               type=click.Choice(['stdout', 'json'], case_sensitive=False),
+               default='stdout')
+ @click.option('--output-basename',
+               help=('Output file base name to save jobs details. ' +
+                     'Default=job_details'),
+               default='job_details',
+               required=False)
+ @click.option('--parameters',
+               help=('Whether to generate a ".config" file that can be used as input for --job-config parameter. ' +
+                     'It will have the same basename as defined in "--output-basename". '),
+               is_flag=True)
+ @click.option('--verbose',
+               help='Whether to print information messages or not.',
+               is_flag=True)
+ @click.option('--disable-ssl-verification',
+               help=('Disable SSL certificate verification. Please, remember that this option is ' +
+                     'not generally recommended for security reasons.'),
+               is_flag=True)
+ @click.option('--ssl-cert',
+               help='Path to your SSL certificate file.')
+ @click.option('--profile', help='Profile to use from the config file', default=None)
+ @click.pass_context
+ def job_details(ctx,
+                 apikey,
+                 cloudos_url,
+                 job_id,
+                 output_format,
+                 output_basename,
+                 parameters,
+                 verbose,
+                 disable_ssl_verification,
+                 ssl_cert,
+                 profile):
+     """Retrieve job details in CloudOS."""
+     profile = profile or ctx.default_map['job']['details']['profile']
+     # Create a dictionary with required and non-required params
+     required_dict = {
+         'apikey': True,
+         'workspace_id': False,
+         'workflow_name': False,
+         'project_name': False
+     }
+     # determine if the user provided all required parameters
+     config_manager = ConfigurationProfile()
+     apikey, cloudos_url, workspace_id, workflow_name, repository_platform, execution_platform, project_name = (
+         config_manager.load_profile_and_validate_data(
+             ctx,
+             INIT_PROFILE,
+             CLOUDOS_URL,
+             profile=profile,
+             required_dict=required_dict,
+             apikey=apikey,
+             cloudos_url=cloudos_url
+         )
+     )
+
+     print('Executing details...')
+     verify_ssl = ssl_selector(disable_ssl_verification, ssl_cert)
+     if verbose:
+         print('\t...Preparing objects')
+     cl = Cloudos(cloudos_url, apikey, None)
+     if verbose:
+         print('\tThe following Cloudos object was created:')
+         print('\t' + str(cl) + '\n')
+         print(f'\tSearching for job id: {job_id}')
+
+     # check if the API gives a 403 error/forbidden error
+     try:
+         j_details = cl.get_job_status(job_id, verify_ssl)
+     except BadRequestException as e:
+         if '403' in str(e) or 'Forbidden' in str(e):
+             print("[Error] API can only show job details of your own jobs, cannot see other user's job details.")
+             sys.exit(1)
+     j_details_h = json.loads(j_details.content)
+
+     # Check if the job details contain parameters
+     if j_details_h["parameters"] != []:
+         param_kind_map = {
+             'textValue': 'textValue',
+             'arrayFileColumn': 'columnName',
+             'globPattern': 'globPattern',
+             'lustreFileSystem': 'fileSystem',
+         }
+         # there are different types of parameters, arrayFileColumn, globPattern, lustreFileSystem
+         # get first the type of parameter, then the value based on the parameter kind
+         concats = []
+         for param in j_details_h["parameters"]:
+             if param['parameterKind'] == 'dataItem':
+                 # For dataItem, we need to use specific nested keys
+                 concats.append(f"{param['prefix']}{param['name']}={param['dataItem']['item']['name']}")
+             else:
+                 # For other parameter kinds, we use the appropriate key from param_kind_map
+                 concats.append(f"{param['prefix']}{param['name']}={param[param_kind_map[param['parameterKind']]]}")
+         concat_string = '\n'.join(concats)
+         # If the user requested to save the parameters in a config file
+         if parameters:
+             # Create a config file with the parameters
+             config_filename = f"{output_basename}.config"
+             with open(config_filename, 'w') as config_file:
+                 config_file.write("params {\n")
+                 for param in j_details_h["parameters"]:
+                     config_file.write(f"\t{param['name']} = {param['textValue']}\n")
+                 config_file.write("}\n")
+             print(f"\tJob parameters have been saved to '{config_filename}'")
+     else:
+         concat_string = 'No parameters provided'
+         if parameters:
+             print("\tNo parameters found in the job details, no config file will be created.")
+
+     # Determine the execution platform based on jobType
+     executors = {
+         'nextflowAWS': 'Batch AWS',
+         'nextflowAzure': 'Batch Azure',
+         'nextflowGcp': 'GCP',
+         'nextflowHpc': 'HPC',
+         'nextflowKubernetes': 'Kubernetes',
+         'dockerAWS': 'Batch AWS',
+         'cromwellAWS': 'Batch AWS'
+     }
+     execution_platform = executors.get(j_details_h["jobType"], "None")
+
+     # revision
+     if j_details_h["jobType"] == "dockerAWS":
+         revision = j_details_h["revision"]["digest"]
+     else:
+         revision = j_details_h["revision"]["commit"]
+
+     # Output the job details
+     if output_format == 'stdout':
+         console = Console()
+         table = Table(title="Job Details")
+
+         table.add_column("Field", style="cyan", no_wrap=True)
+         table.add_column("Value", style="magenta", overflow="fold")
+
+         table.add_row("Job Status", str(j_details_h["status"]))
+         table.add_row("Parameters", concat_string)
+         if j_details_h["jobType"] == "dockerAWS":
+             table.add_row("Command", str(j_details_h["command"]))
+         table.add_row("Revision", str(revision))
+         table.add_row("Nextflow Version", str(j_details_h.get("nextflowVersion", "None")))
+         table.add_row("Execution Platform", execution_platform)
+         table.add_row("Profile", str(j_details_h.get("profile", "None")))
+         table.add_row("Master Instance", str(j_details_h["masterInstance"]["usedInstance"]["type"]))
+         if j_details_h["jobType"] == "nextflowAzure":
+             try:
+                 table.add_row("Worker Node", str(j_details_h["azureBatch"]["vmType"]))
+             except KeyError:
+                 table.add_row("Worker Node", "Not Specified")
+         table.add_row("Storage", str(j_details_h["storageSizeInGb"]) + " GB")
+         if j_details_h["jobType"] != "nextflowAzure":
+             try:
+                 table.add_row("Job Queue ID", str(j_details_h["batch"]["jobQueue"]["name"]))
+                 table.add_row("Job Queue Name", str(j_details_h["batch"]["jobQueue"]["label"]))
+             except KeyError:
+                 table.add_row("Job Queue", "Master Node")
+         table.add_row("Accelerated File Staging", str(j_details_h.get("usesFusionFileSystem", "None")))
+         table.add_row("Task Resources", f"{str(j_details_h['resourceRequirements']['cpu'])} CPUs, " +
+                       f"{str(j_details_h['resourceRequirements']['ram'])} GB RAM")
+
+         console.print(table)
+     else:
+         # Create a JSON object with the key-value pairs
+         job_details_json = {
+             "Job Status": str(j_details_h["status"]),
+             "Parameters": ','.join(concat_string.split()),
+             "Revision": str(revision),
+             "Nextflow Version": str(j_details_h.get("nextflowVersion", "None")),
+             "Execution Platform": execution_platform,
+             "Profile": str(j_details_h.get("profile", "None")),
+             "Master Instance": str(j_details_h["masterInstance"]["usedInstance"]["type"]),
+             "Storage": str(j_details_h["storageSizeInGb"]) + " GB",
+             "Accelerated File Staging": str(j_details_h.get("usesFusionFileSystem", "None")),
+             "Task Resources": f"{str(j_details_h['resourceRequirements']['cpu'])} CPUs, " +
+                               f"{str(j_details_h['resourceRequirements']['ram'])} GB RAM"
+         }
+
+         # Conditionally add the "Command" key if the jobType is "dockerAWS"
+         if j_details_h["jobType"] == "dockerAWS":
+             job_details_json["Command"] = str(j_details_h["command"])
+
+         # Conditionally add the "Job Queue" key if the jobType is not "nextflowAzure"
+         if j_details_h["jobType"] != "nextflowAzure":
+             try:
+                 job_details_json["Job Queue ID"] = str(j_details_h["batch"]["jobQueue"]["name"])
+                 job_details_json["Job Queue Name"] = str(j_details_h["batch"]["jobQueue"]["label"])
+             except KeyError:
+                 job_details_json["Job Queue"] = "Master Node"
+
+         if j_details_h["jobType"] == "nextflowAzure":
+             try:
+                 job_details_json["Worker Node"] = str(j_details_h["azureBatch"]["vmType"])
+             except KeyError:
+                 job_details_json["Worker Node"] = "Not Specified"
+
+         # Write the JSON object to a file
+         with open(f"{output_basename}.json", "w") as json_file:
+             json.dump(job_details_json, json_file, indent=4, ensure_ascii=False)
+         print(f"\tJob details have been saved to '{output_basename}.json'")
+
+
  @job.command('list')
  @click.option('-k',
                '--apikey',
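The parameter-flattening logic in the `job_details` hunk above walks each parameter dict and joins `prefix`, `name`, and a kind-dependent value key. A condensed, self-contained sketch of that mapping (the sample parameter dicts below are illustrative, not real API payloads):

```python
# Maps each parameterKind to the dict key that holds its value,
# mirroring param_kind_map in the diff above.
PARAM_KIND_MAP = {
    'textValue': 'textValue',
    'arrayFileColumn': 'columnName',
    'globPattern': 'globPattern',
    'lustreFileSystem': 'fileSystem',
}

def flatten_parameters(parameters):
    """Render job parameters as 'prefix+name=value' strings."""
    rendered = []
    for param in parameters:
        if param['parameterKind'] == 'dataItem':
            # dataItem values are nested under dataItem -> item -> name
            value = param['dataItem']['item']['name']
        else:
            value = param[PARAM_KIND_MAP[param['parameterKind']]]
        rendered.append(f"{param['prefix']}{param['name']}={value}")
    return rendered
```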
@@ -1855,6 +2046,11 @@ def run_bash_job(ctx,
  @click.option('--project-name',
                help='The name of a CloudOS project.')
  @click.option('--profile', help='Profile to use from the config file', default=None)
+ @click.option('--details',
+               help=('When selected, it prints the details of the listed files. ' +
+                     'Details contains "Type", "Owner", "Size", "Last Updated", ' +
+                     '"Filepath", "S3 Path".'),
+               is_flag=True)
  @click.pass_context
  def list_files(ctx,
                 apikey,
@@ -1864,14 +2060,14 @@ def list_files(ctx,
                 ssl_cert,
                 project_name,
                 profile,
-                path):
+                path,
+                details):
      """List contents of a path within a CloudOS workspace dataset."""

      # fallback to ctx default if profile not specified
      profile = profile or ctx.default_map['datasets']['list'].get('profile')

      config_manager = ConfigurationProfile()
-
      required_dict = {
          'apikey': True,
          'workspace_id': True,
@@ -1879,7 +2075,6 @@ def list_files(ctx,
          'project_name': False
      }

-     # Unpack profile values first
      apikey, cloudos_url, workspace_id, workflow_name, repository_platform, execution_platform, project_name = (
          config_manager.load_profile_and_validate_data(
              ctx,
@@ -1912,21 +2107,219 @@ def list_files(ctx,
          result = datasets.list_folder_content(path)
          contents = result.get("contents") or result.get("datasets", [])
          if not contents:
-             files = result.get("files", [])
-             folders = result.get("folders", [])
-             contents = [{"name": f["name"], "isDir": False} for f in files] + \
-                        [{"name": f["name"], "isDir": True} for f in folders]
+             contents = result.get("files", []) + result.get("folders", [])
+
+         if details:
+             console = Console(width=None)  # Avoid terminal width truncation
+
+             table = Table(show_header=True, header_style="bold white")
+             table.add_column("Type", style="cyan", no_wrap=True)
+             table.add_column("Owner", style="white")
+             table.add_column("Size", style="magenta")
+             table.add_column("Last Updated", style="green")
+             table.add_column("Filepath", style="bold", overflow="fold")
+             table.add_column("S3 Path", style="dim", no_wrap=False, overflow="fold", ratio=2)
+
+             for item in contents:
+                 is_folder = "folderType" in item or item.get("isDir", False)
+                 type_ = "folder" if is_folder else "file"
+
+                 user = item.get("user")
+                 if isinstance(user, dict):
+                     name = user.get("name", "").strip()
+                     surname = user.get("surname", "").strip()
+                     if name and surname:
+                         owner = f"{name} {surname}"
+                     elif name:
+                         owner = name
+                     elif surname:
+                         owner = surname
+                     else:
+                         owner = "-"
+                 else:
+                     owner = "-"
+
+                 raw_size = item.get("sizeInBytes", item.get("size"))
+                 size = format_bytes(raw_size) if not is_folder and raw_size is not None else "-"
+
+                 updated = item.get("updatedAt") or item.get("lastModified", "-")
+                 filepath = item.get("name", "-")
+
+                 if is_folder:
+                     s3_bucket = item.get("s3BucketName")
+                     s3_key = item.get("s3Prefix")
+                     s3_path = f"s3://{s3_bucket}/{s3_key}" if s3_bucket and s3_key else "-"
+                 else:
+                     s3_bucket = item.get("s3BucketName")
+                     s3_key = item.get("s3ObjectKey") or item.get("s3Prefix")
+                     s3_path = f"s3://{s3_bucket}/{s3_key}" if s3_bucket and s3_key else "-"
+
+                 style = Style(color="blue", underline=True) if is_folder else None
+                 table.add_row(type_, owner, size, updated, filepath, s3_path, style=style)
+
+             console.print(table)

-         for item in contents:
-             name = item.get("name", "")
-             if item.get("isDir"):
-                 name = click.style(name, fg="blue", underline=True)
-             click.echo(name)
+         else:
+             console = Console()
+             for item in contents:
+                 name = item.get("name", "")
+                 is_folder = item.get("folderType") or item.get("isDir")
+                 if is_folder:
+                     console.print(f"[blue underline]{name}[/]")
+                 else:
+                     console.print(name)

      except Exception as e:
          click.echo(f"[ERROR] {str(e)}", err=True)

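`format_bytes`, imported from the new `cloudos_cli/utils/resources.py`, backs the human-readable Size column in the listing above. Its actual implementation is in `resources.py` and is not shown in this diff; the following is only a plausible sketch of such a helper, under that assumption:

```python
def format_bytes(size):
    """Render a byte count as a human-readable string, e.g. 1536 -> '1.5 KB'."""
    units = ['B', 'KB', 'MB', 'GB', 'TB', 'PB']
    size = float(size)
    for unit in units:
        if size < 1024 or unit == units[-1]:
            # Show whole numbers without a trailing '.0'
            rounded = round(size, 1)
            text = str(int(rounded)) if rounded == int(rounded) else str(rounded)
            return f"{text} {unit}"
        size /= 1024
```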
+ @datasets.command(name="mv")
+ @click.argument("source_path", required=True)
+ @click.argument("destination_path", required=True)
+ @click.option('-k', '--apikey', required=True, help='Your CloudOS API key.')
+ @click.option('-c', '--cloudos-url', default=CLOUDOS_URL, required=False, help='The CloudOS URL.')
+ @click.option('--workspace-id', required=True, help='The CloudOS workspace ID.')
+ @click.option('--project-name', required=True, help='The source project name.')
+ @click.option('--destination-project-name', required=False, help='The destination project name. Defaults to the source project.')
+ @click.option('--disable-ssl-verification', is_flag=True, help='Disable SSL certificate verification.')
+ @click.option('--ssl-cert', help='Path to your SSL certificate file.')
+ @click.option('--profile', default=None, help='Profile to use from the config file.')
+ @click.pass_context
+ def move_files(ctx, source_path, destination_path, apikey, cloudos_url, workspace_id,
+                project_name, destination_project_name,
+                disable_ssl_verification, ssl_cert, profile):
+     """
+     Move a file or folder from a source path to a destination path within or across CloudOS projects.
+
+     SOURCE_PATH [path] : the full path to the file or folder to move. It must be a 'Data' folder path. E.g.: 'Data/folderA/file.txt'\n
+     DESTINATION_PATH [path]: the full path to the destination folder. It must be a 'Data' folder path. E.g.: 'Data/folderB'
+     """
+
+     profile = profile or ctx.default_map['datasets']['move'].get('profile')
+     destination_project_name = destination_project_name or project_name
+
+     # Validate destination constraint
+     if not destination_path.strip("/").startswith("Data/") and destination_path.strip("/") != "Data":
+         click.echo("[ERROR] Destination path must begin with 'Data/' or be 'Data'.", err=True)
+         sys.exit(1)
+     if not source_path.strip("/").startswith("Data/") and source_path.strip("/") != "Data":
+         click.echo("[ERROR] SOURCE_PATH must start with 'Data/' or be 'Data'.", err=True)
+         sys.exit(1)
+     click.echo('Loading configuration profile')
+     # Load configuration profile
+     config_manager = ConfigurationProfile()
+     required_dict = {
+         'apikey': True,
+         'workspace_id': True,
+         'workflow_name': False,
+         'project_name': True
+     }
+
+     apikey, cloudos_url, workspace_id, workflow_name, repository_platform, execution_platform, project_name = (
+         config_manager.load_profile_and_validate_data(
+             ctx,
+             INIT_PROFILE,
+             CLOUDOS_URL,
+             profile=profile,
+             required_dict=required_dict,
+             apikey=apikey,
+             cloudos_url=cloudos_url,
+             workspace_id=workspace_id,
+             workflow_name=None,
+             repository_platform=None,
+             execution_platform=None,
+             project_name=project_name
+         )
+     )
+
+     verify_ssl = ssl_selector(disable_ssl_verification, ssl_cert)
+     # Initialize Datasets clients
+     source_client = Datasets(
+         cloudos_url=cloudos_url,
+         apikey=apikey,
+         workspace_id=workspace_id,
+         project_name=project_name,
+         verify=verify_ssl,
2243
+ cromwell_token=None
2244
+ )
2245
+
2246
+ dest_client = Datasets(
2247
+ cloudos_url=cloudos_url,
2248
+ apikey=apikey,
2249
+ workspace_id=workspace_id,
2250
+ project_name=destination_project_name,
2251
+ verify=verify_ssl,
2252
+ cromwell_token=None
2253
+ )
2254
+ click.echo('Checking source path')
2255
+ # === Resolve Source Item ===
2256
+ source_parts = source_path.strip("/").split("/")
2257
+ source_parent_path = "/".join(source_parts[:-1]) if len(source_parts) > 1 else None
2258
+ source_item_name = source_parts[-1]
2259
+
2260
+ try:
2261
+ source_contents = source_client.list_folder_content(source_parent_path)
2262
+ except Exception as e:
2263
+ click.echo(f"[ERROR] Could not resolve source path '{source_path}': {str(e)}", err=True)
2264
+ sys.exit(1)
2265
+
2266
+ found_source = None
2267
+ for collection in ["files", "folders"]:
2268
+ for item in source_contents.get(collection, []):
2269
+ if item.get("name") == source_item_name:
2270
+ found_source = item
2271
+ break
2272
+ if found_source:
2273
+ break
2274
+ if not found_source:
2275
+ click.echo(f"[ERROR] Item '{source_item_name}' not found in '{source_parent_path or '[project root]'}'", err=True)
2276
+ sys.exit(1)
2277
+
2278
+ source_id = found_source["_id"]
2279
+ source_kind = "Folder" if "folderType" in found_source else "File"
2280
+ click.echo("Checking destination path")
2281
+ # === Resolve Destination Folder ===
2282
+ dest_parts = destination_path.strip("/").split("/")
2283
+ dest_folder_name = dest_parts[-1]
2284
+ dest_parent_path = "/".join(dest_parts[:-1]) if len(dest_parts) > 1 else None
2285
+
2286
+ try:
2287
+ dest_contents = dest_client.list_folder_content(dest_parent_path)
2288
+ match = next((f for f in dest_contents.get("folders", []) if f.get("name") == dest_folder_name), None)
2289
+ if not match:
2290
+ raise ValueError(f"Could not resolve destination folder '{destination_path}'")
2291
+
2292
+ target_id = match["_id"]
2293
+ folder_type = match.get("folderType")
2294
+ # Normalize kind: top-level datasets are kind=Dataset, all other folders are kind=Folder
2295
+ if folder_type in ("VirtualFolder", "S3Folder", "Folder"):
2296
+ target_kind = "Folder"
2297
+ elif isinstance(folder_type, bool) and folder_type: # legacy dataset structure
2298
+ target_kind = "Dataset"
2299
+ else:
2300
+ raise ValueError(f"Unrecognized folderType '{folder_type}' for destination '{destination_path}'")
2301
+
2302
+ except Exception as e:
2303
+ click.echo(f"[ERROR] Could not resolve destination path '{destination_path}': {str(e)}", err=True)
2304
+ sys.exit(1)
2305
+ click.echo(f"Moving {source_kind} '{source_item_name}' to '{destination_path}' in project '{destination_project_name}' ...")
2306
+ # === Perform Move ===
2307
+ try:
2308
+ response = source_client.move_files_and_folders(
2309
+ source_id=source_id,
2310
+ source_kind=source_kind,
2311
+ target_id=target_id,
2312
+ target_kind=target_kind
2313
+ )
2314
+ if response.ok:
2315
+ click.secho(f"[SUCCESS] {source_kind} '{source_item_name}' moved to '{destination_path}' in project '{destination_project_name}'.", fg="green", bold=True)
2316
+ else:
2317
+ click.echo(f"[ERROR] Move failed: {response.status_code} - {response.text}", err=True)
2318
+ sys.exit(1)
2319
+ except Exception as e:
2320
+ click.echo(f"[ERROR] Move operation failed: {str(e)}", err=True)
2321
+ sys.exit(1)
2322
+
2323
+
1930
2324
  if __name__ == "__main__":
1931
2325
  run_cloudos_cli()
1932
-
@@ -0,0 +1 @@
1
+ __version__ = '2.29.0'
@@ -5,8 +5,8 @@ This is the main class for file explorer (datasets).
5
5
  from dataclasses import dataclass
6
6
  from typing import Union
7
7
  from cloudos_cli.clos import Cloudos
8
- from cloudos_cli.utils.requests import retry_requests_get
9
-
8
+ from cloudos_cli.utils.requests import retry_requests_get, retry_requests_put
9
+ import json
10
10
 
11
11
  @dataclass
12
12
  class Datasets(Cloudos):
@@ -150,7 +150,16 @@ class Datasets(Cloudos):
150
150
  self.project_id,
151
151
  self.workspace_id),
152
152
  headers=headers, verify=self.verify)
153
- return r.json()
153
+ raw = r.json()
154
+ datasets = raw.get("datasets", [])
155
+ # Normalize response
156
+ for item in datasets:
157
+ item["folderType"] = True
158
+ response = {
159
+ "folders": datasets,
160
+ "files": []
161
+ }
162
+ return response
154
163
 
155
164
  def list_datasets_content(self, folder_name):
156
165
  """Uses
@@ -177,7 +186,7 @@ class Datasets(Cloudos):
177
186
  if folder_name == 'AnalysesResults':
178
187
  folder_name = 'Analyses Results'
179
188
 
180
- for folder in pro_fol.get("datasets", []):
189
+ for folder in pro_fol.get("folders", []):
181
190
  if folder['name'] == folder_name:
182
191
  folder_id = folder['_id']
183
192
  if not folder_id:
@@ -187,6 +196,7 @@ class Datasets(Cloudos):
187
196
  self.workspace_id),
188
197
  headers=headers, verify=self.verify)
189
198
  return r.json()
199
+
190
200
  def list_s3_folder_content(self, s3_bucket_name, s3_relative_path):
191
201
  """Uses
192
202
  ----------
@@ -256,6 +266,7 @@ class Datasets(Cloudos):
256
266
  self.workspace_id),
257
267
  headers=headers, verify=self.verify)
258
268
  return r.json()
269
+
259
270
  def list_folder_content(self, path=None):
260
271
  """
261
272
  Wrapper to list contents of a CloudOS folder.
@@ -319,4 +330,40 @@ class Datasets(Cloudos):
319
330
  if not found:
320
331
  raise ValueError(f"Folder '{job_name}' not found under dataset '{dataset_name}'")
321
332
 
322
- return folder_content
333
+ return folder_content
334
+
335
+ def move_files_and_folders(self, source_id: str, source_kind: str, target_id: str, target_kind: str):
336
+ """
337
+ Move a file or folder to another folder or dataset in CloudOS.
338
+
339
+ Parameters
340
+ ----------
341
+ source_id : str
+ The ID of the file or folder to move.
+
+ source_kind : str
+ The kind of the item to move, either 'File' or 'Folder'.
+
+ target_id : str
+ The ID of the destination folder or dataset.
+
+ target_kind : str
+ The kind of the destination, either 'Folder' or 'Dataset'.
346
+
347
+ Returns
348
+ -------
349
+ response : requests.Response
350
+ The response object from the CloudOS API.
351
+ """
352
+ url = f"{self.cloudos_url}/api/v1/dataItems/move?teamId={self.workspace_id}"
353
+ headers = {
354
+ "accept": "application/json",
355
+ "content-type": "application/json",
356
+ "ApiKey": self.apikey
357
+ }
358
+ payload = {
359
+ "dataItemToMove": {
360
+ "kind": source_kind,
361
+ "item": source_id
362
+ },
363
+ "toDataItemParent": {
364
+ "kind": target_kind,
365
+ "item": target_id
366
+ }
367
+ }
368
+ response = retry_requests_put(url, headers=headers, data=json.dumps(payload), verify=self.verify)
369
+ return response
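The move endpoint pairs the item being moved with its new parent, each tagged with a kind. A minimal sketch of the payload construction used by `move_files_and_folders` above (the IDs are hypothetical placeholders):

```python
import json

def build_move_payload(source_id, source_kind, target_id, target_kind):
    # Mirrors the body sent to /api/v1/dataItems/move: the item to move
    # plus its new parent, each tagged with a kind ('File'/'Folder'/'Dataset').
    return {
        "dataItemToMove": {"kind": source_kind, "item": source_id},
        "toDataItemParent": {"kind": target_kind, "item": target_id},
    }

payload = build_move_payload("5f1e0a", "File", "5f1e0b", "Folder")
body = json.dumps(payload)  # serialized exactly as the method does
```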
@@ -4,6 +4,6 @@ Utility functions and classes to use across the package.
4
4
 
5
5
  from .errors import BadRequestException, TimeOutException
6
6
  from .requests import retry_requests_get, retry_requests_post
7
+ from .resources import format_bytes, ssl_selector
7
8
 
8
-
9
- __all__ = ['errors', 'requests']
9
+ __all__ = ['errors', 'requests', 'resources']
@@ -0,0 +1,46 @@
1
+ import os
2
+ import urllib3
3
+
4
+ def format_bytes(size):
5
+ """Convert bytes to human-readable format (e.g., 1.2 MB)."""
6
+ if size is None:
7
+ return "-"
8
+ power = 1024
9
+ n = 0
10
+ labels = ['B', 'KB', 'MB', 'GB', 'TB', 'PB']
11
+ while size >= power and n < len(labels) - 1:
12
+ size /= power
13
+ n += 1
14
+ return f"{size:.1f} {labels[n]}"
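A few sample conversions, using a self-contained copy of the same logic for illustration:

```python
def format_bytes(size):
    # Same logic as above: divide by 1024 until the value drops
    # below the next unit, then format with one decimal place.
    if size is None:
        return "-"
    power = 1024
    n = 0
    labels = ['B', 'KB', 'MB', 'GB', 'TB', 'PB']
    while size >= power and n < len(labels) - 1:
        size /= power
        n += 1
    return f"{size:.1f} {labels[n]}"

print(format_bytes(None))         # "-"
print(format_bytes(512))          # "512.0 B"
print(format_bytes(1536))         # "1.5 KB"
print(format_bytes(5 * 1024**3))  # "5.0 GB"
```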
15
+
16
+
17
+ def ssl_selector(disable_ssl_verification, ssl_cert):
18
+ """Verify value selector.
19
+
20
+ This function establishes the value that will be passed to the
21
+ requests `verify` parameter.
22
+
23
+ Parameters
24
+ ----------
25
+ disable_ssl_verification : bool
26
+ Whether to disable SSL verification.
27
+ ssl_cert : string
28
+ String indicating the path to the SSL certificate file to use.
29
+
30
+ Returns
31
+ -------
32
+ verify_ssl : [bool | string]
33
+ Either a bool or a path string to be passed to requests.verify to control
34
+ SSL verification.
35
+ """
36
+ if disable_ssl_verification:
37
+ verify_ssl = False
38
+ print('[WARNING] Disabling SSL verification')
39
+ urllib3.disable_warnings()
40
+ elif ssl_cert is None:
41
+ verify_ssl = True
42
+ elif os.path.isfile(ssl_cert):
43
+ verify_ssl = ssl_cert
44
+ else:
45
+ raise FileNotFoundError(f"The specified file '{ssl_cert}' was not found")
46
+ return verify_ssl
@@ -1,6 +1,6 @@
1
1
  Metadata-Version: 2.4
2
2
  Name: cloudos_cli
3
- Version: 2.26.1
3
+ Version: 2.29.0
4
4
  Summary: Python package for interacting with CloudOS
5
5
  Home-page: https://github.com/lifebit-ai/cloudos-cli
6
6
  Author: David Piñeyro
@@ -535,6 +535,69 @@ Executing status...
535
535
  To further check your job status you can either go to https://cloudos.lifebit.ai/app/advanced-analytics/analyses/62c83a1191fe06013b7ef355 or repeat the command you just used.
536
536
  ```
537
537
 
538
+ #### Check job details
539
+
540
+ To check the details of a submitted job, the subcommand `details` of `job` can be used.
541
+
542
+ For example, with explicit variable for required parameters:
543
+
544
+ ```bash
545
+ cloudos job details \
546
+ --apikey $MY_API_KEY \
547
+ --job-id 62c83a1191fe06013b7ef355
548
+ ```
549
+
550
+ Or with a defined profile:
551
+
552
+ ```bash
553
+ cloudos job details \
554
+ --profile job-details \
555
+ --job-id 62c83a1191fe06013b7ef355
556
+ ```
557
+
558
+ When using the defaults, the details are displayed in the standard output console, and the expected output should look similar to:
559
+
560
+ ```console
561
+ Executing details...
562
+ Job Details
563
+ ┏━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓
564
+ ┃ Field ┃ Value ┃
565
+ ┡━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┩
566
+ │ Parameters │ -test=value │
567
+ │ │ --gaq=test │
568
+ │ │ cryo=yes │
569
+ │ Command │ echo 'test' > new_file.txt │
570
+ │ Revision │ sha256:6015f66923d7afbc53558d7ccffd325d43b4e249f41a6e93eef074c9505d2233 │
571
+ │ Nextflow Version │ None │
572
+ │ Execution Platform │ Batch AWS │
573
+ │ Profile │ None │
574
+ │ Master Instance │ c5.xlarge │
575
+ │ Storage │ 500 │
576
+ │ Job Queue │ nextflow-job-queue-5c6d3e9bd954e800b23f8c62-feee │
577
+ │ Accelerated File Staging │ None │
578
+ │ Task Resources │ 1 CPUs, 4 GB RAM │
579
+ └──────────────────────────┴─────────────────────────────────────────────────────────────────────────┘
580
+ ```
581
+
582
+ To change this behaviour and save the details to a local JSON file instead, set `--output-format=json`.
583
+
584
+ By default, all details are saved in a file with the basename `job_details`, for example `job_details.json` or `job_details.config`. This can be changed with the parameter `--output-basename=new_filename`.
585
+
586
+ The `details` subcommand can also take the `--parameters` flag, which creates a new `*.config` file that holds all parameters as a Nextflow configuration file, for example:
587
+
588
+ ```console
589
+ params {
590
+ parameter_one = value_one
591
+ parameter_two = value_two
592
+ parameter_three = value_three
593
+ }
594
+ ```
595
+
596
+ This file can later be used when running a job with `cloudos job run --job-config job_details.config ...`.
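As an illustration of the mapping between job parameters and this file, a small helper (hypothetical, not part of cloudos-cli) that renders a flat dict as a Nextflow `params` block could look like:

```python
def to_nextflow_config(params):
    # Render a flat dict as a Nextflow 'params { ... }' block,
    # one 'key = value' line per parameter.
    lines = ["params {"]
    for key, value in params.items():
        lines.append(f"    {key} = {value}")
    lines.append("}")
    return "\n".join(lines)

print(to_nextflow_config({"parameter_one": "value_one",
                          "parameter_two": "value_two"}))
```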
597
+
598
+ > [!NOTE]
599
+ > Job details can only be retrieved for your own jobs; you cannot see other users' job details.
600
+
538
601
  #### Get a list of your jobs from a CloudOS workspace
539
602
 
540
603
  You can get a summary of your last 30 submitted jobs (or your selected number of last jobs using `--last-n-jobs n`
@@ -758,9 +821,49 @@ cloudos datasets ls <path> \
758
821
  --workspace-id $WORKSPACE_ID \
759
822
--project-name $PROJECT_NAME
760
823
  ```
824
+
825
+
761
826
  The output of this command is a list of files and folders present in the specified project.
762
827
  If the `<path>` is left empty, the command will return the list of folders present in the selected project.
763
828
 
829
+ If you require more information on the files and folders listed, you can use the `--details` flag, which will output a table containing the following columns:
830
+ - Type (folder or file)
831
+ - Owner
832
+ - Size in human readable format
833
+ - Last updated
834
+ - Filepath (the file or folder name)
835
+ - S3 Path
836
+
837
+ ##### Moving files
838
+
839
+ Files and folders can be moved programmatically **from** `Data` or any of its subfolders **to** `Data` or any of its subfolders (e.g. `Data`, `Data/folder/file.txt`).
840
+
841
+ 1. The move can happen **within the same project** by running the following command:
842
+ ```
843
+ cloudos datasets mv <source_path> <destination_path> --profile <profile_name>
844
+ ```
845
+ where both the source and the destination project are the one defined in the profile.
846
+
847
+ 2. The move can also happen **across different projects** within the same workspace by running the following command:
848
+ ```
849
+ cloudos datasets mv <source_path> <destination_path> --profile <profile_name> --destination-project-name <project_name>
850
+ ```
851
+ In this case, the profile supplies only the source project; the destination project is given explicitly.
852
+
853
+ The `source_path` must be a full path starting from the `Data` dataset (e.g. `Data/folderA/file.txt`); the `destination_path` must also start with `Data` and end with the folder the file or folder will be moved into. An example of such a command is:
854
+
855
+ ```
856
+ cloudos datasets mv Data/results/my_plot.png Data/plots
857
+ ```
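Under the hood, `cloudos datasets mv` rejects any argument that is not `Data` itself or a path under `Data/`. The check amounts to the following (a sketch of the validation performed in `__main__.py`):

```python
def is_valid_data_path(path):
    # Accept 'Data' itself or anything nested under 'Data/',
    # ignoring leading/trailing slashes.
    p = path.strip("/")
    return p == "Data" or p.startswith("Data/")

print(is_valid_data_path("Data/results/my_plot.png"))  # True
print(is_valid_data_path("results/my_plot.png"))       # False
```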
858
+
859
+ Note that the above example relies on a preconfigured profile. If no profile is provided and there is no default profile, you will also need to provide the following flags:
860
+ ```bash
861
+ --cloudos-url $CLOUDOS \
862
+ --apikey $MY_API_KEY \
863
+ --workspace-id $WORKSPACE_ID \
864
+ --project-name $PROJECT_NAME
865
+ ```
866
+
764
867
  ### WDL pipeline support
765
868
 
766
869
  #### Cromwell server managing
@@ -24,5 +24,6 @@ cloudos_cli/queue/queue.py
24
24
  cloudos_cli/utils/__init__.py
25
25
  cloudos_cli/utils/errors.py
26
26
  cloudos_cli/utils/requests.py
27
+ cloudos_cli/utils/resources.py
27
28
  tests/__init__.py
28
29
  tests/functions_for_pytest.py
@@ -1 +0,0 @@
1
- __version__ = '2.26.1'
File without changes
File without changes
File without changes