cloud-files 5.5.0__py3-none-any.whl → 5.6.1__py3-none-any.whl

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -1,6 +1,6 @@
  Metadata-Version: 2.1
  Name: cloud-files
- Version: 5.5.0
+ Version: 5.6.1
  Summary: Fast access to cloud storage and local FS.
  Home-page: https://github.com/seung-lab/cloud-files/
  Author: William Silversmith
@@ -30,6 +30,8 @@ Requires-Dist: google-auth >=1.10.0
  Requires-Dist: google-cloud-core >=1.1.0
  Requires-Dist: google-cloud-storage >=1.31.1
  Requires-Dist: google-crc32c >=1.0.0
+ Requires-Dist: intervaltree
+ Requires-Dist: numpy
  Requires-Dist: orjson
  Requires-Dist: pathos
  Requires-Dist: protobuf >=3.3.0
@@ -41,6 +43,12 @@ Requires-Dist: urllib3 >=1.26.3
  Requires-Dist: zstandard
  Requires-Dist: rsa >=4.7.2
  Requires-Dist: fasteners
+ Provides-Extra: apache
+ Requires-Dist: lxml ; extra == 'apache'
+ Provides-Extra: monitoring
+ Requires-Dist: psutil ; extra == 'monitoring'
+ Requires-Dist: intervaltree ; extra == 'monitoring'
+ Requires-Dist: matplotlib ; extra == 'monitoring'
  Provides-Extra: numpy
  Requires-Dist: numpy ; extra == 'numpy'
  Provides-Extra: test
@@ -97,8 +105,20 @@ cf.touch([ "example", "example2" ])
  cf = CloudFile("gs://bucket/file1")
  info = cf.head()
  binary = cf.get()
+ obj = cf.get_json()
  cf.put(binary)
+ cf.put_json(obj)
  cf[:30] # get first 30 bytes of file
+
+ num_bytes = cf.size() # get size in bytes (also in head)
+ exists = cf.exists() # true or false
+ cf.delete() # deletes the file
+ cf.touch() # create the file if it doesn't exist
+ cf.move("gs://example/destination/directory") # copy then delete source
+ cf.transfer_from("gs://example/source/file.txt") # copies file efficiently
+ cf.transfer_to("gs://example/dest/file.txt") # copies file efficiently
+
+ path = cf.join([ path1, path2, path3 ]) # use the appropriate path separator
  ```

  CloudFiles was developed to access files from object storage without ever touching disk. The goal was to reliably and rapidly access a petabyte of image data, broken down into tens to hundreds of millions of files, accessed in parallel across thousands of cores. CloudFiles has been used to process dozens of images, many of them in the hundreds-of-terabytes range. It has reliably read and written tens of billions of files to date.
@@ -124,6 +144,7 @@ CloudFiles was developed to access files from object storage without ever touchi
  ```bash
  pip install cloud-files
  pip install cloud-files[test] # to enable testing with pytest
+ pip install cloud-files[monitoring] # enable plotting network performance
  ```

  If you run into trouble installing dependencies, make sure you're using at least Python 3.6 and have updated pip. On Linux, some dependencies require manylinux2010 or manylinux2014 binaries, which earlier versions of pip do not search for. MacOS, Linux, and Windows are supported platforms.
@@ -176,7 +197,9 @@ You can create the `google-secret.json` file [here](https://console.cloud.google

  ## API Documentation

- Note that the "Cloud Costs" mentioned below are current as of June 2020 and are subject to change. As of this writing, S3 and Google use identical cost structures for these operations.
+ Note that the "Cloud Costs" mentioned below are current as of June 2020 and are subject to change. As of this writing, S3 and Google use identical cost structures for these operations.
+
+ `CloudFile` is a more intuitive version of `CloudFiles` designed for managing single files instead of groups of files. See the examples above. There is an analogous method for each `CloudFiles` method (where it makes sense).

  ### Constructor
  ```python
@@ -236,6 +259,10 @@ binary = cf['filename'][0:5] # same result, fetches 11 bytes
  >> b'hello' # represents byte range 0-4 inclusive of filename

  binaries = cf[:100] # download the first 100 files
+
+ # Get the TransmissionMonitor object that records
+ # the flight time of each file.
+ binaries, tm = cf.get(..., return_recording=True)
  ```

  `get` supports several different styles of input. The simplest takes a scalar filename and returns the contents of that file. However, you can also specify lists of filenames, a byte range request, and lists of byte range requests. You can provide a generator or iterator as input as well. Order is not guaranteed.
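
As an editorial illustration of the input styles described above (a sketch: the file names are hypothetical, and the byte-range dict follows the `start`/`end` convention used elsewhere in this README):

```python
from cloudfiles import CloudFiles

cf = CloudFiles("gs://bucket/")

binary = cf.get("info")  # scalar filename -> file contents
results = cf.get([ "a.txt", "b.txt" ])  # list of filenames; one result per file, order not guaranteed
part = cf.get({ 'path': 'a.txt', 'start': 0, 'end': 5 })  # a byte range request
many = cf.get(( f"chunk_{i}" for i in range(100) ))  # generators and iterators also accepted
```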
@@ -265,6 +292,10 @@ cf.puts([{
  cf.puts([ (path, content), (path, content) ], compression='gzip')
  cf.put_jsons(...)

+ # Get the TransmissionMonitor object that records the
+ # flight times of each object.
+ _, tm = cf.puts(..., return_recording=True)
+
  # Definition of put, put_json is identical
  def put(
  self,
@@ -469,6 +500,12 @@ cloudfiles -p 2 cp --progress -r s3://bkt/ gs://bkt2/
  cloudfiles cp -c br s3://bkt/file.txt gs://bkt2/
  # decompress
  cloudfiles cp -c none s3://bkt/file.txt gs://bkt2/
+ # save chart of file flight times
+ cloudfiles cp --flight-time s3://bkt/file.txt gs://bkt2/
+ # save a chart of estimated bandwidth usage from these files alone
+ cloudfiles cp --io-rate s3://bkt/file.txt gs://bkt2/
+ # save a chart of measured bandwidth usage for the machine
+ cloudfiles cp --machine-io-rate s3://bkt/file.txt gs://bkt2/
  # move or rename files
  cloudfiles mv s3://bkt/file.txt gs://bkt2/
  # create an empty file if not existing
@@ -528,6 +565,40 @@ cloudfiles alias rm example # remove example://

  The alias file is only accessed (and cached) if CloudFiles encounters an unknown protocol. If you stick to default protocols and use the syntax `s3://https://example.com/` for alternative endpoints, you can still use CloudFiles in environments without filesystem access.

+ ## Performance Monitoring
+
+ CloudFiles now comes with two tools inside `cloudfiles.monitoring` for measuring the performance of transfer operations, both via the CLI and the programmatic interface.
+
+ A `TransmissionMonitor` object is created during each download or upload call (e.g. `cf.get` or `cf.puts`). You can access this object by passing the `return_recording=True` flag. It records the flight time of each object along with its size in an interval tree data structure. It comes with methods for estimating the peak bits per second and can plot both the time of flight and the estimated transfer rates (assuming each transfer is evenly divided over the flight of an object, an assumption that is not always true). This allows you to estimate the contribution of a given CloudFiles operation to a machine's network IO.
+
+ ```python
+ from cloudfiles import CloudFiles
+
+ ...
+
+ results, tm = cf.get([ ... some files ... ], return_recording=True)
+
+ value = tm.peak_Mbps() # estimated peak transfer rate
+ value = tm.total_Mbps() # estimated average transfer rate
+ tm.plot_gantt() # time of flight chart
+ tm.plot_histogram() # transfer rate chart
+ ```
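
To make the "evenly divided over the flight" assumption concrete, here is a minimal editorial sketch of that estimation idea (not the library's actual code; the `flights` data and sampling grid are invented for illustration):

```python
import numpy as np

# Each flight is (start_sec, end_sec, nbytes). Spread each object's bytes
# uniformly over its flight and sum the overlapping rates on a sample grid.
flights = [ (0.0, 2.0, 10_000_000), (0.5, 1.0, 4_000_000) ]

t = np.linspace(0.0, 2.0, 201)  # sample times in seconds
rate_bps = np.zeros_like(t)
for start, end, nbytes in flights:
    per_sec = 8 * nbytes / (end - start)  # bits per second for this object
    rate_bps += np.where((t >= start) & (t < end), per_sec, 0.0)

peak_Mbps = rate_bps.max() / 1e6  # estimated peak transfer rate
```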
+
+ A second object, `IOSampler`, can sample the OS network counters using a background thread and provides a global view of the machine's network performance during the life of the transfer. It is enabled on the CLI for the `cp` command when the `--machine-io-rate` flag is passed, but must be started manually when used programmatically. This avoids accidentally spawning unnecessary sampling threads. The samples accumulate in a circular buffer, so make sure to set the buffer length long enough to capture your points of interest.
+
+ ```python
+ from cloudfiles.monitoring import IOSampler
+
+ sampler = IOSampler(buffer_sec=600, interval=0.25)
+ sampler.start_sampling()
+
+ ...
+
+ sampler.stop_sampling()
+ sampler.plot_histogram()
+ ```
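
For readers curious how such a sampler works, here is a minimal editorial sketch of the underlying technique using `psutil` (declared under the `monitoring` extra) and a fixed-length deque as the circular buffer. The `MachineIOSampler` name and its internals are invented for illustration; this is not the library's implementation:

```python
import threading
import time
from collections import deque

import psutil  # dependency of the 'monitoring' extra

class MachineIOSampler:
    """Sketch: sample OS-wide network counters on a background thread."""
    def __init__(self, buffer_sec=600, interval=0.25):
        self.interval = interval
        # circular buffer: old samples fall off automatically
        self.samples = deque(maxlen=int(buffer_sec / interval))
        self._stop = threading.Event()
        self._thread = None

    def start_sampling(self):
        self._stop.clear()
        self._thread = threading.Thread(target=self._run, daemon=True)
        self._thread.start()

    def _run(self):
        while not self._stop.is_set():
            io = psutil.net_io_counters()  # cumulative machine-wide byte counts
            self.samples.append((time.monotonic(), io.bytes_sent, io.bytes_recv))
            time.sleep(self.interval)

    def stop_sampling(self):
        self._stop.set()
        self._thread.join()
```

Differencing successive samples yields bytes per interval, which is what a histogram or rate chart would plot.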
+
  ## Credits

  CloudFiles is derived from the [CloudVolume.Storage](https://github.com/seung-lab/cloud-volume/tree/master/cloudvolume/storage) system.
@@ -1,11 +1,12 @@
  cloudfiles/__init__.py,sha256=pLB4CcV2l3Jgv_ni1520Np1pfzFj8Cpr87vNxFT3rNI,493
- cloudfiles/cloudfiles.py,sha256=nUIW-SDyeldvmUu_SETJXS4x38Nv3Kj7TBwMwLQhjns,51196
+ cloudfiles/cloudfiles.py,sha256=tPG1PBLEjABPu-KLe93yf6xW_zbafPsQ6z5NuofyUoU,56743
  cloudfiles/compression.py,sha256=WXJHnoNLJ_NWyoY9ygZmFA2qMou35_9xS5dzF7-2H-M,6262
  cloudfiles/connectionpools.py,sha256=aL8RiSjRepECfgAFmJcz80aJFKbou7hsbuEgugDKwB8,4814
  cloudfiles/exceptions.py,sha256=N0oGQNG-St6RvnT8e5p_yC_E61q2kgAe2scwAL0F49c,843
  cloudfiles/gcs.py,sha256=unqu5KxGKaPq6N4QeHSpCDdtnK1BzPOAerTZ8FLt2_4,3820
- cloudfiles/interfaces.py,sha256=3_ybsCKf74DPpI5Vv-BwFLLzc1cLUMfZLBtIKWkfoHQ,44478
+ cloudfiles/interfaces.py,sha256=5rUh2DWOVlg13fAxyZ0wAaQyfW04xc2zlUfTItFV-zQ,45325
  cloudfiles/lib.py,sha256=HHjCvjmOjA0nZWSvHGoSeYpxqd6FAG8xk8LM212LAUA,5382
+ cloudfiles/monitoring.py,sha256=N5Xq0PYZK1OxoYtwBFsnnfaq7dElTgY8Rn2Ez_I3aoo,20897
  cloudfiles/paths.py,sha256=HOvtdLSIYGwlwvnZt9d_Ez3TXOe7WWd18bZNDpExUDQ,12231
  cloudfiles/resumable_tools.py,sha256=NyuSoGh1SaP5akrHCpd9kgy2-JruEWrHW9lvJxV7jpE,6711
  cloudfiles/scheduler.py,sha256=ioqBT5bMPCVHEHlnL-SW_wHulxGgjeThiKHlnaDOydo,3831
@@ -15,12 +16,12 @@ cloudfiles/threaded_queue.py,sha256=Nl4vfXhQ6nDLF8PZpSSBpww0M2zWtcd4DLs3W3BArBw,
  cloudfiles/typing.py,sha256=f3ZYkNfN9poxhGu5j-P0KCxjCCqSn9HAg5KiIPkjnCg,416
  cloudfiles_cli/LICENSE,sha256=Jna4xYE8CCQmaxjr5Fs-wmUBnIQJ1DGcNn9MMjbkprk,1538
  cloudfiles_cli/__init__.py,sha256=Wftt3R3F21QsHtWqx49ODuqT9zcSr0em7wk48kcH0WM,29
- cloudfiles_cli/cloudfiles_cli.py,sha256=L9xr_z9GCf5PcpmA21Iandim7uMviaLxTZ4Pf1B9Ui0,34865
- cloud_files-5.5.0.dist-info/AUTHORS,sha256=BFVmobgAhaVFI5fqbuqAY5XmBQxe09ZZAsAOTy87hKQ,318
- cloud_files-5.5.0.dist-info/LICENSE,sha256=Jna4xYE8CCQmaxjr5Fs-wmUBnIQJ1DGcNn9MMjbkprk,1538
- cloud_files-5.5.0.dist-info/METADATA,sha256=m3Rlk29z4XnqeLhpKX613g7KMiT-5vGVNsWpjXNfXmU,26970
- cloud_files-5.5.0.dist-info/WHEEL,sha256=GJ7t_kWBFywbagK5eo9IoUwLW6oyOeTKmQ-9iHFVNxQ,92
- cloud_files-5.5.0.dist-info/entry_points.txt,sha256=xlirb1FVhn1mbcv4IoyMEGumDqKOA4VMVd3drsRQxIg,51
- cloud_files-5.5.0.dist-info/pbr.json,sha256=k1LwgnAaBImOMXKnlbOx2sBAWZ_Jm5AdFAhAugTuvrY,46
- cloud_files-5.5.0.dist-info/top_level.txt,sha256=xPyrST3okJbsmdCF5IC2gYAVxg_aD5AYVTnNo8UuoZU,26
- cloud_files-5.5.0.dist-info/RECORD,,
+ cloudfiles_cli/cloudfiles_cli.py,sha256=IT6CHH7rI2B79ecV526uFa6ruAC8Yg25OO36Uwfxenc,37464
+ cloud_files-5.6.1.dist-info/AUTHORS,sha256=BFVmobgAhaVFI5fqbuqAY5XmBQxe09ZZAsAOTy87hKQ,318
+ cloud_files-5.6.1.dist-info/LICENSE,sha256=Jna4xYE8CCQmaxjr5Fs-wmUBnIQJ1DGcNn9MMjbkprk,1538
+ cloud_files-5.6.1.dist-info/METADATA,sha256=AtO_CMUdfIoENAA2dteG4C0R0SBobuLaxjGslNLS1hI,30530
+ cloud_files-5.6.1.dist-info/WHEEL,sha256=GJ7t_kWBFywbagK5eo9IoUwLW6oyOeTKmQ-9iHFVNxQ,92
+ cloud_files-5.6.1.dist-info/entry_points.txt,sha256=xlirb1FVhn1mbcv4IoyMEGumDqKOA4VMVd3drsRQxIg,51
+ cloud_files-5.6.1.dist-info/pbr.json,sha256=9w298syusRXCWa2l5yRkPwkdr43ov_GG0cxCaTBLnmg,46
+ cloud_files-5.6.1.dist-info/top_level.txt,sha256=xPyrST3okJbsmdCF5IC2gYAVxg_aD5AYVTnNo8UuoZU,26
+ cloud_files-5.6.1.dist-info/RECORD,,
@@ -0,0 +1 @@
+ {"git_version": "5554b92", "is_release": true}