cloudos-cli 2.53.0__tar.gz → 2.56.0__tar.gz

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (41)
  1. {cloudos_cli-2.53.0 → cloudos_cli-2.56.0}/PKG-INFO +35 -7
  2. {cloudos_cli-2.53.0 → cloudos_cli-2.56.0}/README.md +34 -6
  3. {cloudos_cli-2.53.0 → cloudos_cli-2.56.0}/cloudos_cli/__main__.py +186 -25
  4. cloudos_cli-2.56.0/cloudos_cli/_version.py +1 -0
  5. {cloudos_cli-2.53.0 → cloudos_cli-2.56.0}/cloudos_cli/clos.py +120 -1
  6. {cloudos_cli-2.53.0 → cloudos_cli-2.56.0}/cloudos_cli/jobs/job.py +45 -9
  7. {cloudos_cli-2.53.0 → cloudos_cli-2.56.0}/cloudos_cli/utils/errors.py +18 -0
  8. {cloudos_cli-2.53.0 → cloudos_cli-2.56.0}/cloudos_cli.egg-info/PKG-INFO +35 -7
  9. cloudos_cli-2.53.0/cloudos_cli/_version.py +0 -1
  10. {cloudos_cli-2.53.0 → cloudos_cli-2.56.0}/LICENSE +0 -0
  11. {cloudos_cli-2.53.0 → cloudos_cli-2.56.0}/cloudos_cli/__init__.py +0 -0
  12. {cloudos_cli-2.53.0 → cloudos_cli-2.56.0}/cloudos_cli/configure/__init__.py +0 -0
  13. {cloudos_cli-2.53.0 → cloudos_cli-2.56.0}/cloudos_cli/configure/configure.py +0 -0
  14. {cloudos_cli-2.53.0 → cloudos_cli-2.56.0}/cloudos_cli/datasets/__init__.py +0 -0
  15. {cloudos_cli-2.53.0 → cloudos_cli-2.56.0}/cloudos_cli/datasets/datasets.py +0 -0
  16. {cloudos_cli-2.53.0 → cloudos_cli-2.56.0}/cloudos_cli/import_wf/__init__.py +0 -0
  17. {cloudos_cli-2.53.0 → cloudos_cli-2.56.0}/cloudos_cli/import_wf/import_wf.py +0 -0
  18. {cloudos_cli-2.53.0 → cloudos_cli-2.56.0}/cloudos_cli/jobs/__init__.py +0 -0
  19. {cloudos_cli-2.53.0 → cloudos_cli-2.56.0}/cloudos_cli/link/__init__.py +0 -0
  20. {cloudos_cli-2.53.0 → cloudos_cli-2.56.0}/cloudos_cli/link/link.py +0 -0
  21. {cloudos_cli-2.53.0 → cloudos_cli-2.56.0}/cloudos_cli/procurement/__init__.py +0 -0
  22. {cloudos_cli-2.53.0 → cloudos_cli-2.56.0}/cloudos_cli/procurement/images.py +0 -0
  23. {cloudos_cli-2.53.0 → cloudos_cli-2.56.0}/cloudos_cli/queue/__init__.py +0 -0
  24. {cloudos_cli-2.53.0 → cloudos_cli-2.56.0}/cloudos_cli/queue/queue.py +0 -0
  25. {cloudos_cli-2.53.0 → cloudos_cli-2.56.0}/cloudos_cli/utils/__init__.py +0 -0
  26. {cloudos_cli-2.53.0 → cloudos_cli-2.56.0}/cloudos_cli/utils/array_job.py +0 -0
  27. {cloudos_cli-2.53.0 → cloudos_cli-2.56.0}/cloudos_cli/utils/cloud.py +0 -0
  28. {cloudos_cli-2.53.0 → cloudos_cli-2.56.0}/cloudos_cli/utils/details.py +0 -0
  29. {cloudos_cli-2.53.0 → cloudos_cli-2.56.0}/cloudos_cli/utils/last_wf.py +0 -0
  30. {cloudos_cli-2.53.0 → cloudos_cli-2.56.0}/cloudos_cli/utils/requests.py +0 -0
  31. {cloudos_cli-2.53.0 → cloudos_cli-2.56.0}/cloudos_cli/utils/resources.py +0 -0
  32. {cloudos_cli-2.53.0 → cloudos_cli-2.56.0}/cloudos_cli.egg-info/SOURCES.txt +0 -0
  33. {cloudos_cli-2.53.0 → cloudos_cli-2.56.0}/cloudos_cli.egg-info/dependency_links.txt +0 -0
  34. {cloudos_cli-2.53.0 → cloudos_cli-2.56.0}/cloudos_cli.egg-info/entry_points.txt +0 -0
  35. {cloudos_cli-2.53.0 → cloudos_cli-2.56.0}/cloudos_cli.egg-info/requires.txt +0 -0
  36. {cloudos_cli-2.53.0 → cloudos_cli-2.56.0}/cloudos_cli.egg-info/top_level.txt +0 -0
  37. {cloudos_cli-2.53.0 → cloudos_cli-2.56.0}/setup.cfg +0 -0
  38. {cloudos_cli-2.53.0 → cloudos_cli-2.56.0}/setup.py +0 -0
  39. {cloudos_cli-2.53.0 → cloudos_cli-2.56.0}/tests/__init__.py +0 -0
  40. {cloudos_cli-2.53.0 → cloudos_cli-2.56.0}/tests/functions_for_pytest.py +0 -0
  41. {cloudos_cli-2.53.0 → cloudos_cli-2.56.0}/tests/test_cli_project_create.py +0 -0
--- cloudos_cli-2.53.0/PKG-INFO
+++ cloudos_cli-2.56.0/PKG-INFO
@@ -1,6 +1,6 @@
 Metadata-Version: 2.4
 Name: cloudos_cli
-Version: 2.53.0
+Version: 2.56.0
 Summary: Python package for interacting with CloudOS
 Home-page: https://github.com/lifebit-ai/cloudos-cli
 Author: David Piñeyro
@@ -597,6 +597,34 @@ Executing results...
 results: s3://path/to/location/of/results/results/
 ```
 
+#### Query working directory of job
+
+To get the working directory of a job submitted to CloudOS:
+
+```shell
+cloudos job workdir \
+    --apikey $MY_API_KEY \
+    --cloudos-url $CLOUDOS \
+    --job-id 62c83a1191fe06013b7ef355
+```
+
+Or with a defined profile:
+
+```shell
+cloudos job workdir \
+    --profile profile-name \
+    --job-id 62c83a1191fe06013b7ef355
+```
+
+The output should be something similar to:
+
+```console
+CloudOS job functionality: run, check and abort jobs in CloudOS.
+
+Finding working directory path...
+Working directory for job 68747bac9e7fe38ec6e022ad: az://123456789000.blob.core.windows.net/cloudos-987652349087/projects/455654676/jobs/54678856765/work
+```
+
 #### Abort single or multiple jobs from CloudOS
 
 Aborts jobs in the CloudOS workspace that are either running or initialising. It can be used with one or more job IDs provided as a comma separated string using the `--job-ids` parameter.
@@ -615,20 +643,20 @@ Aborting jobs...
 ```
 
 
-#### Clone a job with optional parameter overrides
+#### Clone/resume a job with optional parameter overrides
 
-The `clone` command allows you to create a new job based on an existing job's configuration, with the ability to override specific parameters. This is useful for re-running jobs with slight modifications without having to specify all parameters from scratch.
+The `clone` and `resume` commands allows you to create a new job based on an existing job's configuration, with the ability to override specific parameters. This is useful for re-running jobs with slight modifications without having to specify all parameters from scratch.
 
 Basic usage:
 ```console
-cloudos job clone \
+cloudos job clone/resume \
     --profile MY_PROFILE
     --job-id "60a7b8c9d0e1f2g3h4i5j6k7"
 ```
 
-Clone with parameter overrides:
+Clone/resume with parameter overrides:
 ```console
-cloudos job clone \
+cloudos job clone/resume \
     --profile MY_PROFILE
     --job-id "60a7b8c9d0e1f2g3h4i5j6k7" \
     --job-queue "high-priority-queue" \
@@ -649,7 +677,7 @@ Available override options:
 - `--job-queue`: Specify a different job queue
 - `--cost-limit`: Set a new cost limit (use -1 for no limit)
 - `--instance-type`: Change the master instance type
-- `--job-name`: Assign a custom name to the cloned job
+- `--job-name`: Assign a custom name to the cloned/resumed job
 - `--nextflow-version`: Use a different Nextflow version
 - `--git-branch`: Switch to a different git branch
 - `--nextflow-profile`: Change the Nextflow profile
--- cloudos_cli-2.53.0/README.md
+++ cloudos_cli-2.56.0/README.md
@@ -562,6 +562,34 @@ Executing results...
 results: s3://path/to/location/of/results/results/
 ```
 
+#### Query working directory of job
+
+To get the working directory of a job submitted to CloudOS:
+
+```shell
+cloudos job workdir \
+    --apikey $MY_API_KEY \
+    --cloudos-url $CLOUDOS \
+    --job-id 62c83a1191fe06013b7ef355
+```
+
+Or with a defined profile:
+
+```shell
+cloudos job workdir \
+    --profile profile-name \
+    --job-id 62c83a1191fe06013b7ef355
+```
+
+The output should be something similar to:
+
+```console
+CloudOS job functionality: run, check and abort jobs in CloudOS.
+
+Finding working directory path...
+Working directory for job 68747bac9e7fe38ec6e022ad: az://123456789000.blob.core.windows.net/cloudos-987652349087/projects/455654676/jobs/54678856765/work
+```
+
 #### Abort single or multiple jobs from CloudOS
 
 Aborts jobs in the CloudOS workspace that are either running or initialising. It can be used with one or more job IDs provided as a comma separated string using the `--job-ids` parameter.
@@ -580,20 +608,20 @@ Aborting jobs...
 ```
 
 
-#### Clone a job with optional parameter overrides
+#### Clone/resume a job with optional parameter overrides
 
-The `clone` command allows you to create a new job based on an existing job's configuration, with the ability to override specific parameters. This is useful for re-running jobs with slight modifications without having to specify all parameters from scratch.
+The `clone` and `resume` commands allows you to create a new job based on an existing job's configuration, with the ability to override specific parameters. This is useful for re-running jobs with slight modifications without having to specify all parameters from scratch.
 
 Basic usage:
 ```console
-cloudos job clone \
+cloudos job clone/resume \
     --profile MY_PROFILE
     --job-id "60a7b8c9d0e1f2g3h4i5j6k7"
 ```
 
-Clone with parameter overrides:
+Clone/resume with parameter overrides:
 ```console
-cloudos job clone \
+cloudos job clone/resume \
     --profile MY_PROFILE
     --job-id "60a7b8c9d0e1f2g3h4i5j6k7" \
     --job-queue "high-priority-queue" \
@@ -614,7 +642,7 @@ Available override options:
 - `--job-queue`: Specify a different job queue
 - `--cost-limit`: Set a new cost limit (use -1 for no limit)
 - `--instance-type`: Change the master instance type
-- `--job-name`: Assign a custom name to the cloned job
+- `--job-name`: Assign a custom name to the cloned/resumed job
 - `--nextflow-version`: Use a different Nextflow version
 - `--git-branch`: Switch to a different git branch
 - `--nextflow-profile`: Change the Nextflow profile
--- cloudos_cli-2.53.0/cloudos_cli/__main__.py
+++ cloudos_cli-2.56.0/cloudos_cli/__main__.py
@@ -100,9 +100,11 @@ def run_cloudos_cli(ctx, debug):
             'status': shared_config,
             'list': shared_config,
             'logs': shared_config,
+            'workdir': shared_config,
             'results': shared_config,
             'details': shared_config,
-            'clone': shared_config
+            'clone': shared_config,
+            'resume': shared_config
         },
         'workflow': {
             'list': shared_config,
@@ -162,9 +164,11 @@ def run_cloudos_cli(ctx, debug):
             'status': shared_config,
             'list': shared_config,
             'logs': shared_config,
+            'workdir': shared_config,
             'results': shared_config,
             'details': shared_config,
-            'clone': shared_config
+            'clone': shared_config,
+            'resume': shared_config
         },
         'workflow': {
             'list': shared_config,
@@ -207,7 +211,7 @@ def run_cloudos_cli(ctx, debug):
 
 @run_cloudos_cli.group()
 def job():
-    """CloudOS job functionality: run, check and abort jobs in CloudOS."""
+    """CloudOS job functionality: run, clone, resume, check and abort jobs in CloudOS."""
     print(job.__doc__ + '\n')
 
 
@@ -803,6 +807,83 @@ def job_status(ctx,
               'or repeat the command you just used.')
 
 
+@job.command('workdir')
+@click.option('-k',
+              '--apikey',
+              help='Your CloudOS API key',
+              required=True)
+@click.option('-c',
+              '--cloudos-url',
+              help=(f'The CloudOS url you are trying to access to. Default={CLOUDOS_URL}.'),
+              default=CLOUDOS_URL,
+              required=True)
+@click.option('--workspace-id',
+              help='The specific CloudOS workspace id.',
+              required=True)
+@click.option('--job-id',
+              help='The job id in CloudOS to search for.',
+              required=True)
+@click.option('--verbose',
+              help='Whether to print information messages or not.',
+              is_flag=True)
+@click.option('--disable-ssl-verification',
+              help=('Disable SSL certificate verification. Please, remember that this option is ' +
+                    'not generally recommended for security reasons.'),
+              is_flag=True)
+@click.option('--ssl-cert',
+              help='Path to your SSL certificate file.')
+@click.option('--profile', help='Profile to use from the config file', default=None)
+@click.pass_context
+def job_workdir(ctx,
+                apikey,
+                cloudos_url,
+                workspace_id,
+                job_id,
+                verbose,
+                disable_ssl_verification,
+                ssl_cert,
+                profile):
+    """Get the path to the working directory of a specified job."""
+    profile = profile or ctx.default_map['job']['workdir']['profile']
+    # Create a dictionary with required and non-required params
+    required_dict = {
+        'apikey': True,
+        'workspace_id': True,
+        'workflow_name': False,
+        'project_name': False,
+        'procurement_id': False
+    }
+    # determine if the user provided all required parameters
+    config_manager = ConfigurationProfile()
+    user_options = (
+        config_manager.load_profile_and_validate_data(
+            ctx,
+            INIT_PROFILE,
+            CLOUDOS_URL,
+            profile=profile,
+            required_dict=required_dict,
+            apikey=apikey,
+            cloudos_url=cloudos_url,
+            workspace_id=workspace_id
+        )
+    )
+    apikey = user_options['apikey']
+    cloudos_url = user_options['cloudos_url']
+    workspace_id = user_options['workspace_id']
+
+    print('Finding working directory path...')
+    verify_ssl = ssl_selector(disable_ssl_verification, ssl_cert)
+    if verbose:
+        print('\t...Preparing objects')
+    cl = Cloudos(cloudos_url, apikey, None)
+    if verbose:
+        print('\tThe following Cloudos object was created:')
+        print('\t' + str(cl) + '\n')
+        print(f'\tSearching for job id: {job_id}')
+    workdir = cl.get_job_workdir(job_id, workspace_id, verify_ssl)
+    print(f"Working directory for job {job_id}: {workdir}")
+
+
 @job.command('logs')
 @click.option('-k',
               '--apikey',
@@ -1455,8 +1536,7 @@ def abort_jobs(ctx,
         cl.abort_job(job, workspace_id, verify_ssl)
         print(f"\tJob '{job}' aborted successfully.")
 
-
-@job.command('clone')
+@click.command()
 @click.option('-k',
               '--apikey',
               help='Your CloudOS API key',
@@ -1526,7 +1606,7 @@ def abort_jobs(ctx,
               help='Profile to use from the config file',
               default=None)
 @click.pass_context
-def clone_job(ctx,
+def clone_resume(ctx,
               apikey,
               cloudos_url,
               workspace_id,
@@ -1547,8 +1627,13 @@ def clone_job(ctx,
               disable_ssl_verification,
               ssl_cert,
               profile):
-    """Clone an existing job with optional parameter overrides."""
-    profile = profile or ctx.default_map['job']['clone']['profile']
+    if ctx.info_name == "clone":
+        mode, action = "clone", "cloning"
+    elif ctx.info_name == "resume":
+        mode, action = "resume", "resuming"
+
+    f"""{mode.capitalize()} an existing job with optional parameter overrides."""
+    profile = profile or ctx.default_map['job'][mode]['profile']
 
     # Create a dictionary with required and non-required params
     required_dict = {
@@ -1580,7 +1665,7 @@ def clone_job(ctx,
 
     verify_ssl = ssl_selector(disable_ssl_verification, ssl_cert)
 
-    print('Cloning job...')
+    print(f'{action.capitalize()} job...')
     if verbose:
         print('\t...Preparing objects')
 
@@ -1591,11 +1676,11 @@ def clone_job(ctx,
     if verbose:
         print('\tThe following Job object was created:')
         print('\t' + str(job_obj) + '\n')
-        print(f'\tCloning job {job_id} in workspace: {workspace_id}')
+        print(f'\t{action.capitalize()} job {job_id} in workspace: {workspace_id}')
 
     try:
-        # Clone the job with provided overrides
-        cloned_job_id = job_obj.clone_job(
+        # Clone/resume the job with provided overrides
+        cloned_resumed_job_id = job_obj.clone_or_resume_job(
             source_job_id=job_id,
             queue_name=job_queue,
             cost_limit=cost_limit,
@@ -1610,21 +1695,25 @@ def clone_job(ctx,
             # only when explicitly setting --project-name will be overridden, else using the original project
             project_name=project_name if ctx.get_parameter_source("project_name") == click.core.ParameterSource.COMMANDLINE else None,
             parameters=list(parameter) if parameter else None,
-            verify=verify_ssl
+            verify=verify_ssl,
+            mode=mode
         )
         if verbose:
-            print(f'\tCloned job ID: {cloned_job_id}')
+            print(f'\t{mode.capitalize()}d job ID: {cloned_resumed_job_id}')
 
-        print(f"Job successfully cloned. New job ID: {cloned_job_id}")
+        print(f"Job successfully {mode}d. New job ID: {cloned_resumed_job_id}")
 
     except BadRequestException as e:
         if verbose:
             print(f'\tError details: {e}')
-        raise ValueError(f"Failed to clone job: {e}")
+        raise ValueError(f"Failed to {mode} job: {e}")
     except Exception as e:
         if verbose:
             print(f'\tError details: {e}')
-        raise ValueError(f"An error occurred while cloning the job: {e}")
+        raise ValueError(f"An error occurred while {action} the job: {e}")
+# Register the same function under two names
+job.add_command(clone_resume, "clone")
+job.add_command(clone_resume, "resume")
 
 
 @workflow.command('list')
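The `__main__.py` hunks above register a single Click callback under two command names and then branch on the name it was invoked with (`ctx.info_name`). A minimal standalone sketch of that pattern, using a toy group and echo output rather than the real CLI:

```python
import click
from click.testing import CliRunner

@click.group()
def job():
    """Toy stand-in for the real `job` group."""

@click.command()
@click.pass_context
def clone_resume(ctx):
    # ctx.info_name holds the name the command was invoked under,
    # so one callback can serve both `clone` and `resume`.
    mode = "resume" if ctx.info_name == "resume" else "clone"
    click.echo(f"mode={mode}")

# Register the same callback twice, once per alias.
job.add_command(clone_resume, "clone")
job.add_command(clone_resume, "resume")

runner = CliRunner()
print(runner.invoke(job, ["clone"]).output.strip())   # mode=clone
print(runner.invoke(job, ["resume"]).output.strip())  # mode=resume
```

One caveat of this design, visible in the diff: the shared callback's docstring can no longer be a plain string (the package replaces it with a bare f-string expression), so `--help` text must be handled separately if per-alias help is wanted.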
@@ -3035,7 +3124,7 @@ def run_bash_array_job(ctx,
 @click.option('--details',
               help=('When selected, it prints the details of the listed files. ' +
                     'Details contains "Type", "Owner", "Size", "Last Updated", ' +
-                    '"File Name", "Storage Path".'),
+                    '"Virtual Name", "Storage Path".'),
               is_flag=True)
 @click.pass_context
 def list_files(ctx,
@@ -3104,7 +3193,7 @@ def list_files(ctx,
     table.add_column("Owner", style="white")
     table.add_column("Size", style="magenta")
     table.add_column("Last Updated", style="green")
-    table.add_column("File Name", style="bold", overflow="fold")
+    table.add_column("Virtual Name", style="bold", overflow="fold")
     table.add_column("Storage Path", style="dim", no_wrap=False, overflow="fold", ratio=2)
 
     for item in contents:
@@ -3295,7 +3384,7 @@ def move_files(ctx, source_path, destination_path, apikey, cloudos_url, workspac
     if folder_type in ("VirtualFolder", "Folder"):
         target_kind = "Folder"
     elif folder_type=="S3Folder":
-        click.echo(f"[ERROR] Item '{source_item_name}' could not be moved to '{destination_path}' as the destination folder is not modifiable.",
+        click.echo(f"[ERROR] Unable to move item '{source_item_name}' to '{destination_path}'. The destination is an S3 folder, and only virtual folders can be selected as valid move destinations.",
                    err=True)
         sys.exit(1)
     elif isinstance(folder_type, bool) and folder_type:  # legacy dataset structure
@@ -3785,20 +3874,20 @@ def rm_item(ctx, target_path, apikey, cloudos_url,
         click.echo(f"[ERROR] Item '{item_name}' could not be removed as the parent folder is not modifiable.",
                    err=True)
         sys.exit(1)
-    click.echo(f"Deleting {kind} '{item_name}' from '{parent_path or '[root]'}'...")
+    click.echo(f"Removing {kind} '{item_name}' from '{parent_path or '[root]'}'...")
     try:
         response = client.delete_item(item_id=item_id, kind=kind)
         if response.ok:
             click.secho(
-                f"[SUCCESS] {kind} '{item_name}' was deleted from '{parent_path or '[root]'}'.",
+                f"[SUCCESS] {kind} '{item_name}' was removed from '{parent_path or '[root]'}'.",
                 fg="green", bold=True
             )
             click.secho("This item will still be available on your Cloud Provider.", fg="yellow")
         else:
-            click.echo(f"[ERROR] Deletion failed: {response.status_code} - {response.text}", err=True)
+            click.echo(f"[ERROR] Removal failed: {response.status_code} - {response.text}", err=True)
             sys.exit(1)
     except Exception as e:
-        click.echo(f"[ERROR] Delete operation failed: {str(e)}", err=True)
+        click.echo(f"[ERROR] Remove operation failed: {str(e)}", err=True)
         sys.exit(1)
 
 
@@ -3872,7 +3961,79 @@ def link(ctx, path, apikey, cloudos_url, project_name, workspace_id, session_id,
         project_name=project_name,
         verify=verify_ssl
     )
-    link_p.link_folder(path, session_id)
+
+    # Minimal folder validation and improved error messages
+    is_s3 = path.startswith("s3://")
+    is_folder = True
+
+    if is_s3:
+        # S3 path validation - use heuristics to determine if it's likely a folder
+        try:
+            # If path ends with '/', it's likely a folder
+            if path.endswith('/'):
+                is_folder = True
+            else:
+                # Check the last part of the path
+                path_parts = path.rstrip("/").split("/")
+                if path_parts:
+                    last_part = path_parts[-1]
+                    # If the last part has no dot, it's likely a folder
+                    if '.' not in last_part:
+                        is_folder = True
+                    else:
+                        # If it has a dot, it might be a file - set to None for warning
+                        is_folder = None
+                else:
+                    # Empty path parts, set to None for uncertainty
+                    is_folder = None
+        except Exception:
+            # If we can't parse the S3 path, set to None for uncertainty
+            is_folder = None
+    else:
+        # File Explorer path validation (existing logic)
+        try:
+            datasets = Datasets(
+                cloudos_url=cloudos_url,
+                apikey=apikey,
+                workspace_id=workspace_id,
+                project_name=project_name,
+                verify=verify_ssl,
+                cromwell_token=None
+            )
+            parts = path.strip("/").split("/")
+            parent_path = "/".join(parts[:-1]) if len(parts) > 1 else ""
+            item_name = parts[-1]
+            contents = datasets.list_folder_content(parent_path)
+            found = None
+            for item in contents.get("folders", []):
+                if item.get("name") == item_name:
+                    found = item
+                    break
+            if not found:
+                for item in contents.get("files", []):
+                    if item.get("name") == item_name:
+                        found = item
+                        break
+            if found and ("folderType" not in found):
+                is_folder = False
+        except Exception:
+            is_folder = None
+
+    if is_folder is False:
+        if is_s3:
+            click.echo("[ERROR] The S3 path appears to point to a file, not a folder. You can only link folders. Please link the parent folder instead.", err=True)
+        else:
+            click.echo("[ERROR] Linking is only supported for folders, not individual files. Please link the parent folder instead.", err=True)
+        return
+    elif is_folder is None and is_s3:
+        click.echo("[WARNING] Unable to verify whether the S3 path is a folder. Proceeding with linking; however, if the operation fails, please confirm that you are linking a folder rather than a file.", err=True)
+
+    try:
+        link_p.link_folder(path, session_id)
+    except Exception as e:
+        click.echo(f"[ERROR] Could not link folder: {e}", err=True)
+        if is_s3:
+            click.echo("If you are linking an S3 path, please ensure it is a folder.", err=True)
 
 
 @images.command(name="ls")
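The S3 branch of the validation added above reduces to a small pure function: a trailing `/` or a dot-free final segment is treated as a folder, and a dotted final segment is only flagged as uncertain (`None`), never rejected outright. A sketch with a hypothetical helper name:

```python
def looks_like_s3_folder(path: str):
    """Mirror the heuristic from the `link` command above.

    Returns True when the path is likely a folder, or None when it
    cannot be decided (a dotted final segment may be a file).
    """
    if path.endswith('/'):
        return True
    last_segment = path.rstrip('/').split('/')[-1]
    # No dot in the last segment -> likely a folder; otherwise uncertain.
    return True if '.' not in last_segment else None

print(looks_like_s3_folder("s3://bucket/data/"))      # True
print(looks_like_s3_folder("s3://bucket/data/run1"))  # True
print(looks_like_s3_folder("s3://bucket/data/a.csv")) # None
```

Note the heuristic can misclassify dotted folder names (e.g. `v1.2/`), which is why the command only warns on `None` instead of refusing to link.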
--- /dev/null
+++ cloudos_cli-2.56.0/cloudos_cli/_version.py
@@ -0,0 +1 @@
+__version__ = '2.56.0'
--- cloudos_cli-2.53.0/cloudos_cli/clos.py
+++ cloudos_cli-2.56.0/cloudos_cli/clos.py
@@ -2,12 +2,13 @@
 This is the main class of the package.
 """
 
+from numpy import r_
 import requests
 import time
 import json
 from dataclasses import dataclass
 from cloudos_cli.utils.cloud import find_cloud
-from cloudos_cli.utils.errors import BadRequestException, JoBNotCompletedException, NotAuthorisedException
+from cloudos_cli.utils.errors import BadRequestException, JoBNotCompletedException, NotAuthorisedException, JobAccessDeniedException
 from cloudos_cli.utils.requests import retry_requests_get, retry_requests_post, retry_requests_put
 import pandas as pd
 from cloudos_cli.utils.last_wf import youngest_workflow_id_by_name
@@ -192,6 +193,124 @@ class Cloudos:
             raise BadRequestException(contents_req)
         return contents_req.json()["contents"]
 
+    def get_job_workdir(self, j_id, workspace_id, verify=True):
+        """
+        Get the working directory for the specified job
+        """
+        cloudos_url = self.cloudos_url
+        apikey = self.apikey
+        headers = {
+            "Content-type": "application/json",
+            "apikey": apikey
+        }
+        r = retry_requests_get(f"{cloudos_url}/api/v1/jobs/{j_id}", headers=headers, verify=verify)
+        if r.status_code == 401:
+            raise NotAuthorisedException
+        elif r.status_code == 403:
+            # Handle 403 with more informative error message
+            self._handle_job_access_denied(j_id, workspace_id, verify)
+        elif r.status_code >= 400:
+            raise BadRequestException(r)
+        r_json = r.json()
+        job_workspace = r_json["team"]
+        if job_workspace != workspace_id:
+            raise ValueError("Workspace provided or configured is different from workspace where the job was executed")
+
+        # Check if logs field exists, if not fall back to original folder-based approach
+        if "logs" in r_json:
+            # Get workdir information from logs object using the same pattern as get_job_logs
+            logs_obj = r_json["logs"]
+            cloud_name, cloud_meta, cloud_storage = find_cloud(self.cloudos_url, self.apikey, workspace_id, logs_obj)
+            container_name = cloud_storage["container"]
+            prefix_name = cloud_storage["prefix"]
+            logs_bucket = logs_obj[container_name]
+            logs_path = logs_obj[prefix_name]
+
+            # Construct workdir path by replacing '/logs' with '/work' in the logs path
+            workdir_path_suffix = logs_path.replace('/logs', '/work')
+
+            if cloud_name == "aws":
+                workdir_path = f"s3://{logs_bucket}/{workdir_path_suffix}"
+            elif cloud_name == "azure":
+                storage_account_prefix = ''
+                cloude_scheme = cloud_storage["scheme"]
+                if cloude_scheme == 'az':
+                    storage_account_prefix = f"az://{cloud_meta['storage']['storageAccount']}.blob.core.windows.net"
+                workdir_path = f"{storage_account_prefix}/{logs_bucket}/{workdir_path_suffix}"
+            else:
+                raise ValueError("Unsupported cloud provider")
+            return workdir_path
+        else:
+            # Fallback to original folder-based approach for backward compatibility
+            workdir_id = r_json["resumeWorkDir"]
+
+            # This will fail, as the API endpoint is not open. This works when adding
+            # the authorisation bearer token manually to the headers
+            workdir_bucket_r = retry_requests_get(f"{cloudos_url}/api/v1/folders",
+                                                  params=dict(id=workdir_id, teamId=workspace_id),
+                                                  headers=headers, verify=verify)
+            if workdir_bucket_r.status_code == 401:
+                raise NotAuthorisedException
+            elif workdir_bucket_r.status_code >= 400:
+                raise BadRequestException(workdir_bucket_r)
+
+            workdir_bucket_o = workdir_bucket_r.json()
+            if len(workdir_bucket_o) > 1:
+                raise ValueError(f"Request returned more than one result for folder id {workdir_id}")
+            workdir_bucket_info = workdir_bucket_o[0]
+            if workdir_bucket_info["folderType"] == "S3Folder":
+                cloud_name = "aws"
+            elif workdir_bucket_info["folderType"] == "AzureBlobFolder":
+                cloud_name = "azure"
+            else:
+                raise ValueError("Unsupported cloud provider")
+            if cloud_name == "aws":
+                bucket_name = workdir_bucket_info["s3BucketName"]
+                bucket_path = workdir_bucket_info["s3Prefix"]
+                workdir_path = f"s3://{bucket_name}/{bucket_path}"
+            elif cloud_name == "azure":
+                storage_account = f"az://{workspace_id}.blob.core.windows.net"
+                container_name = workdir_bucket_info["blobContainerName"]
+                blob_prefix = workdir_bucket_info["blobPrefix"]
+                workdir_path = f"{storage_account}/{container_name}/{blob_prefix}"
+            else:
+                raise ValueError("Unsupported cloud provider")
+            return workdir_path
+
+    def _handle_job_access_denied(self, job_id, workspace_id, verify=True):
+        """
+        Handle 403 errors with more informative messages by checking job ownership
+        """
+        try:
+            # Try to get current user info
+            current_user = self.get_user_info(verify)
+            current_user_name = f"{current_user.get('name', '')} {current_user.get('surname', '')}".strip()
+            if not current_user_name:
+                current_user_name = current_user.get('email', 'Unknown')
+        except Exception:
+            current_user_name = None
+
+        try:
+            # Try to get job info from job list to see the owner
+            jobs = self.get_job_list(workspace_id, last_n_jobs='all', verify=verify)
+            job_owner_name = None
+
+            for job in jobs:
+                if job.get('_id') == job_id:
+                    user_info = job.get('user', {})
+                    job_owner_name = f"{user_info.get('name', '')} {user_info.get('surname', '')}".strip()
+                    if not job_owner_name:
+                        job_owner_name = user_info.get('email', 'Unknown')
+                    break
+
+            raise JobAccessDeniedException(job_id, job_owner_name, current_user_name)
+        except JobAccessDeniedException:
+            # Re-raise the specific exception
+            raise
+        except Exception:
+            # If we can't get detailed info, fall back to generic message
+            raise JobAccessDeniedException(job_id)
+
     def get_job_logs(self, j_id, workspace_id, verify=True):
         """
         Get the location of the logs for the specified job
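The path construction in `get_job_workdir` boils down to swapping `/logs` for `/work` in the logs prefix and prepending the cloud-specific scheme. A simplified sketch with hypothetical inputs (the real method reads the bucket, prefix, and storage account from the CloudOS API response):

```python
def build_workdir_path(cloud_name, bucket, logs_path, storage_account=None):
    """Rebuild the '/logs' -> '/work' substitution used above.

    cloud_name is "aws" or "azure"; bucket/logs_path/storage_account
    are illustrative values, not real CloudOS identifiers.
    """
    suffix = logs_path.replace('/logs', '/work')
    if cloud_name == "aws":
        return f"s3://{bucket}/{suffix}"
    if cloud_name == "azure":
        # Azure paths gain an account-qualified az:// prefix.
        return f"az://{storage_account}.blob.core.windows.net/{bucket}/{suffix}"
    raise ValueError("Unsupported cloud provider")

print(build_workdir_path("aws", "my-bucket", "projects/p1/jobs/j1/logs"))
# s3://my-bucket/projects/p1/jobs/j1/work
```

Note that `str.replace` substitutes every occurrence of `/logs`, so a prefix that happens to contain `/logs` elsewhere would also be rewritten; the code above relies on CloudOS only placing it at the end.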
--- cloudos_cli-2.53.0/cloudos_cli/jobs/job.py
+++ cloudos_cli-2.56.0/cloudos_cli/jobs/job.py
@@ -984,7 +984,34 @@ class Job(Cloudos):
                 return True
         return False
 
-    def clone_job(self,
+    def get_resume_work_dir(self, job_id, verify=True):
+        """Get the resume work directory id for a job.
+
+        Parameters
+        ----------
+        job_id : str
+            The CloudOS job ID to get the resume work directory for.
+        verify : [bool|string]
+            Whether to use SSL verification or not. Alternatively, if
+            a string is passed, it will be interpreted as the path to
+            the SSL certificate file.
+
+        Returns
+        -------
+        str
+            The resume work directory id.
+        """
+        headers = {
+            "Content-type": "application/json",
+            "apikey": self.apikey
+        }
+        url = f"{self.cloudos_url}/api/v1/jobs/{job_id}?teamId={self.workspace_id}"
+        r = retry_requests_get(url, headers=headers, verify=verify)
+        if r.status_code >= 400:
+            raise BadRequestException(r)
+        return json.loads(r.content)["resumeWorkDir"]
+
+    def clone_or_resume_job(self,
                   source_job_id,
                   queue_name=None,
                   cost_limit=None,
@@ -998,13 +1025,14 @@ class Job(Cloudos):
                   resumable=None,
                   project_name=None,
                   parameters=None,
-                  verify=True):
-        """Clone an existing job with optional parameter overrides.
+                  verify=True,
+                  mode=None):
+        """Clone or resume an existing job with optional parameter overrides.
 
         Parameters
         ----------
         source_job_id : str
-            The CloudOS job ID to clone from.
+            The CloudOS job ID to clone/resume from.
         queue_name : str, optional
             Name of the job queue to use.
         cost_limit : float, optional
@@ -1033,11 +1061,13 @@ class Job(Cloudos):
             Whether to use SSL verification or not. Alternatively, if
             a string is passed, it will be interpreted as the path to
             the SSL certificate file.
-
+        mode : str, optional
+            The mode to use for the job (e.g. "clone", "resume").
+
         Returns
         -------
         str
-            The CloudOS job ID of the cloned job.
+            The CloudOS job ID of the cloned/resumed job.
         """
         # Get the original job payload
         original_payload = self.get_job_request_payload(source_job_id, verify=verify)
@@ -1048,6 +1078,11 @@ class Job(Cloudos):
1048
1078
  # remove unwanted fields
1049
1079
  del cloned_payload['_id']
1050
1080
  del cloned_payload['resourceId']
1081
+ if mode == "resume":
1082
+ try:
1083
+ cloned_payload['resumeWorkDir'] = self.get_resume_work_dir(source_job_id, verify=verify)
1084
+ except Exception as e:
1085
+ print(f"Failed to get resume work directory: {e}, the job was not set as resumable when originally run\n")
1051
1086
 
1052
1087
  # Override job name if provided
1053
1088
  if job_name:
@@ -1097,8 +1132,10 @@ class Job(Cloudos):
1097
1132
  print("[Message]: Azure workspace does not use fusion filesystem, option '--accelerate-file-staging' is ignored.\n")
1098
1133
 
1099
1134
  # Override resumable if provided
1100
- if resumable:
1135
+ if resumable and mode == "clone":
1101
1136
  cloned_payload['resumable'] = resumable
1137
+ elif resumable and mode == "resume":
1138
+ print("[Message]: 'resumable' option is only applicable when resuming a job, ignoring '--resumable' flag.\n")
1102
1139
 
1103
1140
  # Handle job queue override
1104
1141
  if queue_name:
@@ -1151,7 +1188,6 @@ class Job(Cloudos):
1151
1188
  "Content-type": "application/json",
1152
1189
  "apikey": self.apikey
1153
1190
  }
1154
-
1155
1191
  r = retry_requests_post(f"{self.cloudos_url}/api/v2/jobs?teamId={self.workspace_id}",
1156
1192
  data=json.dumps(cloned_payload),
1157
1193
  headers=headers,
@@ -1161,6 +1197,6 @@ class Job(Cloudos):
1161
1197
  raise BadRequestException(r)
1162
1198
 
1163
1199
  j_id = json.loads(r.content)["jobId"]
1164
- print('\tJob successfully cloned and launched to CloudOS, please check the ' +
1200
+ print(f'\tJob successfully {mode}d and launched to CloudOS, please check the ' +
1165
1201
  f"following link: {self.cloudos_url}/app/advanced-analytics/analyses/{j_id}\n")
1166
1202
  return j_id
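The `mode` switch in the hunk above drives the behavioural difference between the two commands: "resume" reattaches the original job's work directory (falling back to a fresh run if the job was never resumable), while "clone" may mark the new job resumable. A minimal standalone sketch of that payload logic, using hypothetical stand-ins (`fetch_resume_work_dir`, `build_payload`) in place of the real CloudOS API calls:

```python
def fetch_resume_work_dir(job_id):
    # Stand-in for GET /api/v1/jobs/{job_id}: a job that was not launched
    # as resumable has no work directory to reuse.
    known_work_dirs = {"job-123": "work-dir-abc"}
    if job_id not in known_work_dirs:
        raise KeyError(f"job {job_id} has no resumeWorkDir")
    return known_work_dirs[job_id]


def build_payload(source_job_id, mode, resumable=False):
    # Mirrors the mode-dependent mutations applied to the cloned payload.
    payload = {"name": "cloned-job"}
    if mode == "resume":
        try:
            payload["resumeWorkDir"] = fetch_resume_work_dir(source_job_id)
        except KeyError:
            # Job was not resumable when originally run; start clean.
            pass
    if resumable and mode == "clone":
        payload["resumable"] = True
    return payload
```

With these stand-ins, resuming `job-123` yields a payload carrying `resumeWorkDir`, while cloning it with `resumable=True` sets only the `resumable` flag.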
@@ -78,3 +78,21 @@ class NoCloudForWorkspaceException(Exception):
         msg = f"Workspace ID {workspace_id} is not associated with supported cloud providers. Check the workspace ID"
         super(NoCloudForWorkspaceException, self).__init__(msg)
         self.workspace_id = workspace_id
+
+
+class JobAccessDeniedException(Exception):
+    def __init__(self, job_id, job_owner_name=None, current_user_name=None):
+        if job_owner_name and current_user_name:
+            msg = (f"Access denied to job {job_id}. This job belongs to {job_owner_name}, "
+                   f"but you are authenticated as {current_user_name}. "
+                   f"You can only access jobs that belong to your account.")
+        elif job_owner_name:
+            msg = (f"Access denied to job {job_id}. This job belongs to {job_owner_name}. "
+                   f"You can only access jobs that belong to your account.")
+        else:
+            msg = (f"Access denied to job {job_id}. You can only access jobs that belong to your account. "
+                   f"This job may belong to another user or you may not have the required permissions.")
+        super(JobAccessDeniedException, self).__init__(msg)
+        self.job_id = job_id
+        self.job_owner_name = job_owner_name
+        self.current_user_name = current_user_name
@@ -1,6 +1,6 @@
 Metadata-Version: 2.4
 Name: cloudos_cli
-Version: 2.53.0
+Version: 2.56.0
 Summary: Python package for interacting with CloudOS
 Home-page: https://github.com/lifebit-ai/cloudos-cli
 Author: David Piñeyro
@@ -597,6 +597,34 @@ Executing results...
 results: s3://path/to/location/of/results/results/
 ```
 
+#### Query the working directory of a job
+
+To get the working directory of a job submitted to CloudOS:
+
+```shell
+cloudos job workdir \
+    --apikey $MY_API_KEY \
+    --cloudos-url $CLOUDOS \
+    --job-id 62c83a1191fe06013b7ef355
+```
+
+Or with a defined profile:
+
+```shell
+cloudos job workdir \
+    --profile profile-name \
+    --job-id 62c83a1191fe06013b7ef355
+```
+
+The output should be something similar to:
+
+```console
+CloudOS job functionality: run, check and abort jobs in CloudOS.
+
+Finding working directory path...
+Working directory for job 68747bac9e7fe38ec6e022ad: az://123456789000.blob.core.windows.net/cloudos-987652349087/projects/455654676/jobs/54678856765/work
+```
+
 #### Abort single or multiple jobs from CloudOS
 
 Aborts jobs in the CloudOS workspace that are either running or initialising. It can be used with one or more job IDs provided as a comma-separated string using the `--job-ids` parameter.
@@ -615,20 +643,20 @@ Aborting jobs...
 ```
 
 
-#### Clone a job with optional parameter overrides
+#### Clone/resume a job with optional parameter overrides
 
-The `clone` command allows you to create a new job based on an existing job's configuration, with the ability to override specific parameters. This is useful for re-running jobs with slight modifications without having to specify all parameters from scratch.
+The `clone` and `resume` commands allow you to create a new job based on an existing job's configuration, with the ability to override specific parameters. This is useful for re-running jobs with slight modifications without having to specify all parameters from scratch.
 
 Basic usage:
 ```console
-cloudos job clone \
+cloudos job clone/resume \
     --profile MY_PROFILE \
     --job-id "60a7b8c9d0e1f2g3h4i5j6k7"
 ```
 
-Clone with parameter overrides:
+Clone/resume with parameter overrides:
 ```console
-cloudos job clone \
+cloudos job clone/resume \
     --profile MY_PROFILE \
    --job-id "60a7b8c9d0e1f2g3h4i5j6k7" \
     --job-queue "high-priority-queue" \
@@ -649,7 +677,7 @@ Available override options:
 - `--job-queue`: Specify a different job queue
 - `--cost-limit`: Set a new cost limit (use -1 for no limit)
 - `--instance-type`: Change the master instance type
-- `--job-name`: Assign a custom name to the cloned job
+- `--job-name`: Assign a custom name to the cloned/resumed job
 - `--nextflow-version`: Use a different Nextflow version
 - `--git-branch`: Switch to a different git branch
 - `--nextflow-profile`: Change the Nextflow profile
@@ -1 +0,0 @@
-__version__ = '2.53.0'
File without changes