lfss-0.9.2-py3-none-any.whl → lfss-0.9.5-py3-none-any.whl

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Readme.md CHANGED
@@ -1,4 +1,4 @@
1
- # Lightweight File Storage Service (LFSS)
1
+ # Lite File Storage Service (LFSS)
2
2
  [![PyPI](https://img.shields.io/pypi/v/lfss)](https://pypi.org/project/lfss/)
3
3
 
4
4
  My experiment on a lightweight and high-performance file/object storage service...
@@ -32,8 +32,8 @@ Or, you can start a web server at `/frontend` and open `index.html` in your brow
32
32
 
33
33
  The API usage is simple: just `GET`, `PUT`, or `DELETE` to the `/<username>/file/url` path.
34
34
  The authentication can be achieved through one of the following methods:
35
- 1. `Authorization` header with the value `Bearer sha256(<username><password>)`.
36
- 2. `token` query parameter with the value `sha256(<username><password>)`.
35
+ 1. `Authorization` header with the value `Bearer sha256(<username>:<password>)`.
36
+ 2. `token` query parameter with the value `sha256(<username>:<password>)`.
37
37
  3. HTTP Basic Authentication with the username and password (If WebDAV is enabled).
38
38
 
39
39
  You can refer to `frontend` as an application example, `lfss/api/connector.py` for more APIs.
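
As of 0.9.5 the token is the SHA-256 of `<username>:<password>` (colon-separated), matching `hash_credential` in `lfss/eng/utils.py`. Below is a minimal sketch of computing and using it with `requests`; the endpoint and credentials are placeholders:

```python
import hashlib
import requests

endpoint = "http://localhost:8000"        # placeholder endpoint
username, password = "alice", "secret"    # placeholder credentials

# 0.9.5 token format: sha256("<username>:<password>")
token = hashlib.sha256(f"{username}:{password}".encode()).hexdigest()

# Method 1: Authorization header
r1 = requests.get(f"{endpoint}/{username}/", headers={"Authorization": f"Bearer {token}"})

# Method 2: token query parameter
r2 = requests.get(f"{endpoint}/{username}/", params={"token": token})
```

The same value can be exported as `LFSS_TOKEN` for the client tools (see `docs/Enviroment_variables.md`).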
docs/Changelog.md ADDED
@@ -0,0 +1,31 @@
1
+
2
+ ## 0.9
3
+
4
+ ### 0.9.5
5
+ - Stream bundle path as zip file.
6
+ - Update authentication token hash format (need to reset password).
7
+
8
+ ### 0.9.4
9
+ - Decode WebDAV file name.
10
+ - Allow root-listing for WebDAV.
11
+ - Always return 207 status code for propfind.
12
+ - Refactor debounce utility.
13
+
14
+ ### 0.9.3
15
+ - Fix retrieval of empty files.
16
+ - HTTP `PUT/POST` default to overwrite the file.
17
+ - Use shared implementations for `PUT`, `GET`, `DELETE` methods.
18
+ - Inherit permission on overwriting `unset` permission files.
19
+
20
+ ### 0.9.2
21
+ - Native copy function.
22
+ - Only enable basic authentication if WebDAV is enabled.
23
+ - `WWW-Authenticate` header is now added to the response when authentication fails.
24
+
25
+ ### 0.9.1
26
+ - Add WebDAV support.
27
+ - Code refactor, use `lfss.eng` and `lfss.svc`.
28
+
29
+ ### 0.9.0
30
+ - User peer access control: users can now share their paths with other users.
31
+ - Fix database locking under high concurrency when reading files.
docs/Enviroment_variables.md CHANGED
@@ -9,4 +9,4 @@
9
9
 
10
10
  **Client**
11
11
  - `LFSS_ENDPOINT`: The fallback server endpoint. Default is `http://localhost:8000`.
12
- - `LFSS_TOKEN`: The fallback token to authenticate. Should be `sha256(<username><password>)`.
12
+ - `LFSS_TOKEN`: The fallback token to authenticate. Should be `sha256(<username>:<password>)`.
docs/Webdav.md CHANGED
@@ -15,8 +15,8 @@ Please note:
15
15
  2. LFSS does not allow creating files in the root directory, but some clients, such as [Finder](https://sabre.io/dav/clients/finder/), will try to do so. Thus, it is safer to mount the user directory only, e.g. `http://localhost:8000/<username>/`.
16
16
  3. LFSS does not allow explicit directory creation; instead, a directory is created implicitly when a file is uploaded to a non-existent directory.
17
17
  i.e. `PUT http://localhost:8000/<username>/dir/file.txt` will create the `dir` directory if it does not exist.
18
- However, WebDAV `MKCOL` requires the directory to be created explicitly, so the `MKCOL` method instead creates a decoy file on the path (`.lfss-keep`) and hides it from the file listing returned by `PROPFIND`.
18
+ However, WebDAV `MKCOL` requires the directory to be created explicitly, so the `MKCOL` method instead creates a decoy file on the path (`.lfss_keep`) and hides it from the file listing returned by `PROPFIND`.
19
19
  This leads to:
20
- 1) You may see a `.lfss-keep` file in the directory with native file listing (e.g. `/_api/list-files`), but it is hidden in WebDAV clients.
21
- 2) The directory may be deleted if there is no file in it and the `.lfss-keep` file is not created by WebDAV client.
20
+ 1) You may see a `.lfss_keep` file in the directory with native file listing (e.g. `/_api/list-files`), but it is hidden in WebDAV clients.
21
+ 2) The directory may be deleted if there is no file in it and the `.lfss_keep` file is not created by WebDAV client.
22
22
 
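
A hedged sketch of the behaviour described above, using `requests` with HTTP Basic authentication (available when WebDAV is enabled); the server URL, user, and directory are placeholders:

```python
import requests

base = "http://localhost:8000"   # placeholder server
auth = ("alice", "secret")       # placeholder credentials (HTTP Basic, WebDAV enabled)

# MKCOL creates the directory implicitly by writing the decoy file `.lfss_keep`
requests.request("MKCOL", f"{base}/alice/newdir/", auth=auth)

# PROPFIND hides the decoy file from the WebDAV listing (207 multistatus)
r = requests.request("PROPFIND", f"{base}/alice/newdir/", headers={"Depth": "1"}, auth=auth)
print(r.status_code)  # 207

# The native JSON directory listing still shows `.lfss_keep`
print(requests.get(f"{base}/alice/newdir/", auth=auth).json())
```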
lfss/api/connector.py CHANGED
@@ -1,5 +1,6 @@
1
1
  from __future__ import annotations
2
- from typing import Optional, Literal, Iterator
2
+ from typing import Optional, Literal
3
+ from collections.abc import Iterator
3
4
  import os
4
5
  import requests
5
6
  import requests.adapters
@@ -276,6 +277,14 @@ class Connector:
276
277
  self._fetch_factory('POST', '_api/copy', {'src': src, 'dst': dst})(
277
278
  headers = {'Content-Type': 'application/www-form-urlencoded'}
278
279
  )
280
+
281
+ def bundle(self, path: str) -> Iterator[bytes]:
282
+ """Bundle a path into a zip file."""
283
+ response = self._fetch_factory('GET', '_api/bundle', {'path': path})(
284
+ headers = {'Content-Type': 'application/www-form-urlencoded'},
285
+ stream = True
286
+ )
287
+ return response.iter_content(chunk_size=1024)
279
288
 
280
289
  def whoami(self) -> UserRecord:
281
290
  """Gets information about the current user."""
lfss/eng/config.py CHANGED
@@ -19,7 +19,6 @@ if __env_large_file is not None:
19
19
  else:
20
20
  LARGE_FILE_BYTES = 8 * 1024 * 1024 # 8MB
21
21
  MAX_MEM_FILE_BYTES = 128 * 1024 * 1024 # 128MB
22
- MAX_BUNDLE_BYTES = 512 * 1024 * 1024 # 512MB
23
22
  CHUNK_SIZE = 1024 * 1024 # 1MB chunks for streaming (on large files)
24
23
  DEBUG_MODE = os.environ.get('LFSS_DEBUG', '0') == '1'
25
24
 
@@ -29,7 +29,7 @@ async def get_connection(read_only: bool = False) -> aiosqlite.Connection:
29
29
 
30
30
  conn = await aiosqlite.connect(
31
31
  get_db_uri(DATA_HOME / 'index.db', read_only=read_only),
32
- timeout = 20, uri = True
32
+ timeout = 10, uri = True
33
33
  )
34
34
  async with conn.cursor() as c:
35
35
  await c.execute(
lfss/eng/database.py CHANGED
@@ -1,10 +1,11 @@
1
1
 
2
- from typing import Optional, Literal, AsyncIterable, overload
2
+ from typing import Optional, Literal, overload
3
+ from collections.abc import AsyncIterable
3
4
  from contextlib import asynccontextmanager
4
5
  from abc import ABC
5
6
 
7
+ import uuid, datetime
6
8
  import urllib.parse
7
- import uuid
8
9
  import zipfile, io, asyncio
9
10
 
10
11
  import aiosqlite, aiofiles
@@ -82,9 +83,11 @@ class UserConn(DBObjectBase):
82
83
  self, username: str, password: str, is_admin: bool = False,
83
84
  max_storage: int = 1073741824, permission: FileReadPermission = FileReadPermission.UNSET
84
85
  ) -> int:
85
- assert not username.startswith('_'), "Error: reserved username"
86
- assert not ('/' in username or len(username) > 255), "Invalid username"
87
- assert urllib.parse.quote(username) == username, "Invalid username, must be URL safe"
86
+ def validate_username(username: str):
87
+ assert not username.startswith('_'), "Error: reserved username"
88
+ assert not ('/' in username or ':' in username or len(username) > 255), "Invalid username"
89
+ assert urllib.parse.quote(username) == username, "Invalid username, must be URL safe"
90
+ validate_username(username)
88
91
  self.logger.debug(f"Creating user {username}")
89
92
  credential = hash_credential(username, password)
90
93
  assert await self.get_user(username) is None, "Duplicate username"
@@ -161,7 +164,7 @@ class UserConn(DBObjectBase):
161
164
  async def list_peer_users(self, src_user: int | str, level: AccessLevel) -> list[UserRecord]:
162
165
  """
163
166
  List all users that src_user can do [AliasLevel] to, with level >= level,
164
- Note: the returned list does not include src_user and admin users
167
+ Note: the returned list does not include src_user and is not appropriate for admin (who has all permissions for all users)
165
168
  """
166
169
  assert int(level) > AccessLevel.NONE, f"Invalid level, {level}"
167
170
  match src_user:
@@ -427,8 +430,7 @@ class FileConn(DBObjectBase):
427
430
  await self._user_size_inc(user_id, old.file_size)
428
431
  self.logger.info(f"Copied file {old_url} to {new_url}")
429
432
 
430
- # not tested
431
- async def copy_path(self, old_url: str, new_url: str, conflict_handler: Literal['skip', 'overwrite'] = 'overwrite', user_id: Optional[int] = None):
433
+ async def copy_path(self, old_url: str, new_url: str, user_id: Optional[int] = None):
432
434
  assert old_url.endswith('/'), "Old path must end with /"
433
435
  assert new_url.endswith('/'), "New path must end with /"
434
436
  if user_id is None:
@@ -440,11 +442,8 @@ class FileConn(DBObjectBase):
440
442
  for r in res:
441
443
  old_record = FileRecord(*r)
442
444
  new_r = new_url + old_record.url[len(old_url):]
443
- if conflict_handler == 'overwrite':
444
- await self.cur.execute("DELETE FROM fmeta WHERE url = ?", (new_r, ))
445
- elif conflict_handler == 'skip':
446
- if (await self.cur.execute("SELECT url FROM fmeta WHERE url = ?", (new_r, ))) is not None:
447
- continue
445
+ if await (await self.cur.execute("SELECT url FROM fmeta WHERE url = ?", (new_r, ))).fetchone() is not None:
446
+ raise FileExistsError(f"File {new_r} already exists")
448
447
  new_fid = str(uuid.uuid4())
449
448
  user_id = old_record.owner_id if user_id is None else user_id
450
449
  await self.cur.execute(
@@ -456,6 +455,7 @@ class FileConn(DBObjectBase):
456
455
  else:
457
456
  await copy_file(LARGE_BLOB_DIR / old_record.file_id, LARGE_BLOB_DIR / new_fid)
458
457
  await self._user_size_inc(user_id, old_record.file_size)
458
+ self.logger.info(f"Copied path {old_url} to {new_url}")
459
459
 
460
460
  async def move_file(self, old_url: str, new_url: str):
461
461
  old = await self.get_file_record(old_url)
@@ -858,7 +858,7 @@ class Database:
858
858
 
859
859
  async with transaction() as cur:
860
860
  fconn = FileConn(cur)
861
- await fconn.copy_path(old_url, new_url, 'overwrite', op_user.id)
861
+ await fconn.copy_path(old_url, new_url, op_user.id)
862
862
 
863
863
  async def __batch_delete_file_blobs(self, fconn: FileConn, file_records: list[FileRecord], batch_size: int = 512):
864
864
  # https://github.com/langchain-ai/langchain/issues/10321
@@ -929,14 +929,45 @@ class Database:
929
929
  else:
930
930
  blob = await fconn.get_file_blob(f_id)
931
931
  yield r, blob
932
+
933
+ async def zip_path_stream(self, top_url: str, op_user: Optional[UserRecord] = None) -> AsyncIterable[bytes]:
934
+ from stat import S_IFREG
935
+ from stream_zip import async_stream_zip, ZIP_64
936
+ if top_url.startswith('/'):
937
+ top_url = top_url[1:]
938
+
939
+ if op_user:
940
+ if await check_path_permission(top_url, op_user) < AccessLevel.READ:
941
+ raise PermissionDeniedError(f"Permission denied: {op_user.username} cannot zip path {top_url}")
942
+
943
+ # https://stream-zip.docs.trade.gov.uk/async-interface/
944
+ async def data_iter():
945
+ async for (r, blob) in self.iter_path(top_url, None):
946
+ rel_path = r.url[len(top_url):]
947
+ rel_path = decode_uri_compnents(rel_path)
948
+ b_iter: AsyncIterable[bytes]
949
+ if isinstance(blob, bytes):
950
+ async def blob_iter(): yield blob
951
+ b_iter = blob_iter() # type: ignore
952
+ else:
953
+ assert isinstance(blob, AsyncIterable)
954
+ b_iter = blob
955
+ yield (
956
+ rel_path,
957
+ datetime.datetime.now(),
958
+ S_IFREG | 0o600,
959
+ ZIP_64,
960
+ b_iter
961
+ )
962
+ return async_stream_zip(data_iter())
932
963
 
933
964
  @concurrent_wrap()
934
- async def zip_path(self, top_url: str, urls: Optional[list[str]]) -> io.BytesIO:
965
+ async def zip_path(self, top_url: str, op_user: Optional[UserRecord]) -> io.BytesIO:
935
966
  if top_url.startswith('/'):
936
967
  top_url = top_url[1:]
937
968
  buffer = io.BytesIO()
938
969
  with zipfile.ZipFile(buffer, 'w') as zf:
939
- async for (r, blob) in self.iter_path(top_url, urls):
970
+ async for (r, blob) in self.iter_path(top_url, None):
940
971
  rel_path = r.url[len(top_url):]
941
972
  rel_path = decode_uri_compnents(rel_path)
942
973
  if r.external:
lfss/eng/error.py CHANGED
@@ -6,6 +6,10 @@ class FileLockedError(LFSSExceptionBase):...
6
6
 
7
7
  class InvalidOptionsError(LFSSExceptionBase, ValueError):...
8
8
 
9
+ class InvalidDataError(LFSSExceptionBase, ValueError):...
10
+
11
+ class InvalidPathError(LFSSExceptionBase, ValueError):...
12
+
9
13
  class DatabaseLockedError(LFSSExceptionBase, sqlite3.DatabaseError):...
10
14
 
11
15
  class PathNotFoundError(LFSSExceptionBase, FileNotFoundError):...
@@ -14,8 +18,6 @@ class FileDuplicateError(LFSSExceptionBase, FileExistsError):...
14
18
 
15
19
  class PermissionDeniedError(LFSSExceptionBase, PermissionError):...
16
20
 
17
- class InvalidPathError(LFSSExceptionBase, ValueError):...
18
-
19
21
  class StorageExceededError(LFSSExceptionBase):...
20
22
 
21
23
  class TooManyItemsError(LFSSExceptionBase):...
lfss/eng/thumb.py CHANGED
@@ -1,5 +1,6 @@
1
1
  from lfss.eng.config import THUMB_DB, THUMB_SIZE
2
2
  from lfss.eng.database import FileConn
3
+ from lfss.eng.error import *
3
4
  from lfss.eng.connection_pool import unique_cursor
4
5
  from typing import Optional
5
6
  from PIL import Image
@@ -32,7 +33,10 @@ async def _get_cache_thumb(c: aiosqlite.Cursor, path: str, ctime: str) -> Option
32
33
  return blob
33
34
 
34
35
  async def _save_cache_thumb(c: aiosqlite.Cursor, path: str, ctime: str, raw_bytes: bytes) -> bytes:
35
- raw_img = Image.open(BytesIO(raw_bytes))
36
+ try:
37
+ raw_img = Image.open(BytesIO(raw_bytes))
38
+ except Exception:
39
+ raise InvalidDataError('Invalid image data for thumbnail: ' + path)
36
40
  raw_img.thumbnail(THUMB_SIZE)
37
41
  img = raw_img.convert('RGB')
38
42
  bio = BytesIO()
lfss/eng/utils.py CHANGED
@@ -20,7 +20,7 @@ async def copy_file(source: str|pathlib.Path, destination: str|pathlib.Path):
20
20
  await dest.write(chunk)
21
21
 
22
22
  def hash_credential(username: str, password: str):
23
- return hashlib.sha256((username + password).encode()).hexdigest()
23
+ return hashlib.sha256(f"{username}:{password}".encode()).hexdigest()
24
24
 
25
25
  def encode_uri_compnents(path: str):
26
26
  path_sp = path.split("/")
@@ -36,17 +36,41 @@ def ensure_uri_compnents(path: str):
36
36
  """ Ensure the path components are safe to use """
37
37
  return encode_uri_compnents(decode_uri_compnents(path))
38
38
 
39
- g_debounce_tasks: OrderedDict[str, asyncio.Task] = OrderedDict()
40
- lock_debounce_task_queue = Lock()
39
+ class TaskManager:
40
+ def __init__(self):
41
+ self._tasks: OrderedDict[str, asyncio.Task] = OrderedDict()
42
+
43
+ def push(self, task: asyncio.Task) -> str:
44
+ tid = uuid4().hex
45
+ if tid in self._tasks:
46
+ raise ValueError("Task ID collision")
47
+ self._tasks[tid] = task
48
+ return tid
49
+
50
+ def cancel(self, task_id: str):
51
+ task = self._tasks.pop(task_id, None)
52
+ if task is not None:
53
+ task.cancel()
54
+
55
+ def truncate(self):
56
+ new_tasks = OrderedDict()
57
+ for tid, task in self._tasks.items():
58
+ if not task.done():
59
+ new_tasks[tid] = task
60
+ self._tasks = new_tasks
61
+
62
+ async def wait_all(self):
63
+ async def stop_task(task: asyncio.Task):
64
+ if not task.done():
65
+ await task
66
+ await asyncio.gather(*map(stop_task, self._tasks.values()))
67
+ self._tasks.clear()
68
+
69
+ def __len__(self): return len(self._tasks)
70
+
71
+ g_debounce_tasks: TaskManager = TaskManager()
41
72
  async def wait_for_debounce_tasks():
42
- async def stop_task(task: asyncio.Task):
43
- task.cancel()
44
- try:
45
- await task
46
- except asyncio.CancelledError:
47
- pass
48
- await asyncio.gather(*map(stop_task, g_debounce_tasks.values()))
49
- g_debounce_tasks.clear()
73
+ await g_debounce_tasks.wait_all()
50
74
 
51
75
  def debounce_async(delay: float = 0.1, max_wait: float = 1.):
52
76
  """
@@ -54,7 +78,8 @@ def debounce_async(delay: float = 0.1, max_wait: float = 1.):
54
78
  ensuring execution at least once every `max_wait` seconds.
55
79
  """
56
80
  def debounce_wrap(func):
57
- task_record: tuple[str, asyncio.Task] | None = None
81
+ # task_record: tuple[str, asyncio.Task] | None = None
82
+ prev_task_id = None
58
83
  fn_execution_lock = Lock()
59
84
  last_execution_time = 0
60
85
 
@@ -67,12 +92,11 @@ def debounce_async(delay: float = 0.1, max_wait: float = 1.):
67
92
 
68
93
  @functools.wraps(func)
69
94
  async def wrapper(*args, **kwargs):
70
- nonlocal task_record, last_execution_time
95
+ nonlocal prev_task_id, last_execution_time
71
96
 
72
- async with lock_debounce_task_queue:
73
- if task_record is not None:
74
- task_record[1].cancel()
75
- g_debounce_tasks.pop(task_record[0], None)
97
+ if prev_task_id is not None:
98
+ g_debounce_tasks.cancel(prev_task_id)
99
+ prev_task_id = None
76
100
 
77
101
  async with fn_execution_lock:
78
102
  if time.monotonic() - last_execution_time > max_wait:
@@ -81,14 +105,12 @@ def debounce_async(delay: float = 0.1, max_wait: float = 1.):
81
105
  return
82
106
 
83
107
  task = asyncio.create_task(delayed_func(*args, **kwargs))
84
- task_uid = uuid4().hex
85
- task_record = (task_uid, task)
86
- async with lock_debounce_task_queue:
87
- g_debounce_tasks[task_uid] = task
88
- if len(g_debounce_tasks) > 2048:
89
- # finished tasks are not removed from the dict
90
- # so we need to clear it periodically
91
- await wait_for_debounce_tasks()
108
+ prev_task_id = g_debounce_tasks.push(task)
109
+ if len(g_debounce_tasks) > 1024:
110
+ # finished tasks are not removed from the dict
111
+ # so we need to clear it periodically
112
+ g_debounce_tasks.truncate()
113
+
92
114
  return wrapper
93
115
  return debounce_wrap
94
116
 
@@ -133,7 +155,7 @@ def fmt_storage_size(size: int) -> str:
133
155
  return f"{size/1024**4:.2f}T"
134
156
 
135
157
  _FnReturnT = TypeVar('_FnReturnT')
136
- _AsyncReturnT = Awaitable[_FnReturnT]
158
+ _AsyncReturnT = TypeVar('_AsyncReturnT', bound=Awaitable)
137
159
  _g_executor = None
138
160
  def get_global_executor():
139
161
  global _g_executor
@@ -157,7 +179,7 @@ def concurrent_wrap(executor=None):
157
179
  def sync_fn(*args, **kwargs):
158
180
  loop = asyncio.new_event_loop()
159
181
  return loop.run_until_complete(func(*args, **kwargs))
160
- return sync_fn
182
+ return sync_fn # type: ignore
161
183
  return _concurrent_wrap
162
184
 
163
185
  # https://stackoverflow.com/a/279586/6775765
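
The debounce refactor above keeps the same decorator interface; earlier pending runs are now cancelled through the `TaskManager` instead of a module-level dict. A small usage sketch (the decorated coroutine and call pattern are illustrative only):

```python
import asyncio
from lfss.eng.utils import debounce_async, wait_for_debounce_tasks

@debounce_async(delay=0.1, max_wait=1.0)
async def flush_index():
    # illustrative side effect; replace with the real delayed bookkeeping work
    print("flushed")

async def main():
    # rapid calls are coalesced: each call cancels the previously scheduled run,
    # but max_wait guarantees execution at least once per second
    for _ in range(10):
        await flush_index()
        await asyncio.sleep(0.01)
    # drain pending debounced tasks before shutdown
    await wait_for_debounce_tasks()

asyncio.run(main())
```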
lfss/svc/app_base.py CHANGED
@@ -27,7 +27,7 @@ req_conn = RequestDB()
27
27
  async def lifespan(app: FastAPI):
28
28
  global db
29
29
  try:
30
- await global_connection_init(n_read = 2)
30
+ await global_connection_init(n_read = 8 if not DEBUG_MODE else 1)
31
31
  await asyncio.gather(db.init(), req_conn.init())
32
32
  yield
33
33
  await req_conn.commit()
@@ -47,13 +47,14 @@ def handle_exception(fn):
47
47
  if isinstance(e, StorageExceededError): raise HTTPException(status_code=413, detail=str(e))
48
48
  if isinstance(e, PermissionError): raise HTTPException(status_code=403, detail=str(e))
49
49
  if isinstance(e, InvalidPathError): raise HTTPException(status_code=400, detail=str(e))
50
+ if isinstance(e, InvalidOptionsError): raise HTTPException(status_code=400, detail=str(e))
51
+ if isinstance(e, InvalidDataError): raise HTTPException(status_code=400, detail=str(e))
50
52
  if isinstance(e, FileNotFoundError): raise HTTPException(status_code=404, detail=str(e))
51
53
  if isinstance(e, FileDuplicateError): raise HTTPException(status_code=409, detail=str(e))
52
54
  if isinstance(e, FileExistsError): raise HTTPException(status_code=409, detail=str(e))
53
55
  if isinstance(e, TooManyItemsError): raise HTTPException(status_code=400, detail=str(e))
54
56
  if isinstance(e, DatabaseLockedError): raise HTTPException(status_code=503, detail=str(e))
55
57
  if isinstance(e, FileLockedError): raise HTTPException(status_code=423, detail=str(e))
56
- if isinstance(e, InvalidOptionsError): raise HTTPException(status_code=400, detail=str(e))
57
58
  logger.error(f"Uncaptured error in {fn.__name__}: {e}")
58
59
  raise
59
60
  return wrapper
lfss/svc/app_dav.py CHANGED
@@ -9,11 +9,11 @@ import xml.etree.ElementTree as ET
9
9
  from ..eng.connection_pool import unique_cursor
10
10
  from ..eng.error import *
11
11
  from ..eng.config import DATA_HOME, DEBUG_MODE
12
- from ..eng.datatype import UserRecord, FileRecord, DirectoryRecord
13
- from ..eng.database import FileConn
12
+ from ..eng.datatype import UserRecord, FileRecord, DirectoryRecord, AccessLevel
13
+ from ..eng.database import FileConn, UserConn, check_path_permission
14
14
  from ..eng.utils import ensure_uri_compnents, decode_uri_compnents, format_last_modified, static_vars
15
15
  from .app_base import *
16
- from .common_impl import get_file_impl, put_file_impl, delete_impl, copy_impl
16
+ from .common_impl import copy_impl
17
17
 
18
18
  LOCK_DB_PATH = DATA_HOME / "lock.db"
19
19
  MKDIR_PLACEHOLDER = ".lfss_keep"
@@ -53,11 +53,27 @@ async def eval_path(path: str) -> tuple[ptype, str, Optional[FileRecord | Direct
53
53
  # path now is url-safe and without leading slash
54
54
  if path.endswith("/"):
55
55
  lfss_path = path
56
- async with unique_cursor() as c:
57
- fconn = FileConn(c)
58
- if await fconn.count_path_files(path, flat=True) == 0:
59
- return None, lfss_path, None
60
- return "dir", lfss_path, await fconn.get_path_record(path)
56
+ dir_path_sp = path.split("/")
57
+ if len(dir_path_sp) > 2:
58
+ async with unique_cursor() as c:
59
+ fconn = FileConn(c)
60
+ if await fconn.count_path_files(path, flat=True) == 0:
61
+ return None, lfss_path, None
62
+ return "dir", lfss_path, await fconn.get_path_record(path)
63
+ else:
64
+ # test if its a user's root directory
65
+ assert len(dir_path_sp) == 2
66
+ username = path.split("/")[0]
67
+ async with unique_cursor() as c:
68
+ uconn = UserConn(c)
69
+ u = await uconn.get_user(username)
70
+ if u is None:
71
+ return None, lfss_path, None
72
+ return "dir", lfss_path, DirectoryRecord(lfss_path)
73
+
74
+ # may be root directory
75
+ if path == "":
76
+ return "dir", "", DirectoryRecord("")
61
77
 
62
78
  # not end with /, check if it is a file
63
79
  async with unique_cursor() as c:
@@ -66,11 +82,10 @@ async def eval_path(path: str) -> tuple[ptype, str, Optional[FileRecord | Direct
66
82
  lfss_path = path
67
83
  return "file", lfss_path, res
68
84
 
69
- if path == "": return "dir", "", DirectoryRecord("")
70
85
  async with unique_cursor() as c:
86
+ lfss_path = path + "/"
71
87
  fconn = FileConn(c)
72
- if await fconn.count_path_files(path + "/") > 0:
73
- lfss_path = path + "/"
88
+ if await fconn.count_path_files(lfss_path) > 0:
74
89
  return "dir", lfss_path, await fconn.get_path_record(lfss_path)
75
90
 
76
91
  return None, path, None
@@ -110,7 +125,7 @@ async def unlock_path(user: UserRecord, p: str, token: str):
110
125
  raise FileLockedError(f"Failed to unlock file [{p}] with token {token}")
111
126
  await cur.execute("DELETE FROM locks WHERE path=?", (p,))
112
127
  await conn.commit()
113
- async def query_lock_el(p: str, top_el_name: str = f"{{{DAV_NS}}}lockinfo") -> Optional[ET.Element]:
128
+ async def query_lock_element(p: str, top_el_name: str = f"{{{DAV_NS}}}lockinfo") -> Optional[ET.Element]:
114
129
  async with aiosqlite.connect(LOCK_DB_PATH) as conn:
115
130
  await conn.execute("BEGIN EXCLUSIVE")
116
131
  await conn.execute(lock_table_create_sql)
@@ -145,15 +160,16 @@ async def create_file_xml_element(frecord: FileRecord) -> ET.Element:
145
160
  href.text = f"/{frecord.url}"
146
161
  propstat = ET.SubElement(file_el, f"{{{DAV_NS}}}propstat")
147
162
  prop = ET.SubElement(propstat, f"{{{DAV_NS}}}prop")
148
- ET.SubElement(prop, f"{{{DAV_NS}}}displayname").text = frecord.url.split("/")[-1]
163
+ ET.SubElement(prop, f"{{{DAV_NS}}}displayname").text = decode_uri_compnents(frecord.url.split("/")[-1])
149
164
  ET.SubElement(prop, f"{{{DAV_NS}}}resourcetype")
150
165
  ET.SubElement(prop, f"{{{DAV_NS}}}getcontentlength").text = str(frecord.file_size)
151
166
  ET.SubElement(prop, f"{{{DAV_NS}}}getlastmodified").text = format_last_modified(frecord.create_time)
152
167
  ET.SubElement(prop, f"{{{DAV_NS}}}getcontenttype").text = frecord.mime_type
153
- lock_el = await query_lock_el(frecord.url, top_el_name=f"{{{DAV_NS}}}activelock")
168
+ lock_el = await query_lock_element(frecord.url, top_el_name=f"{{{DAV_NS}}}activelock")
154
169
  if lock_el is not None:
155
170
  lock_discovery = ET.SubElement(prop, f"{{{DAV_NS}}}lockdiscovery")
156
171
  lock_discovery.append(lock_el)
172
+ ET.SubElement(propstat, f"{{{DAV_NS}}}status").text = "HTTP/1.1 200 OK"
157
173
  return file_el
158
174
 
159
175
  async def create_dir_xml_element(drecord: DirectoryRecord) -> ET.Element:
@@ -162,15 +178,16 @@ async def create_dir_xml_element(drecord: DirectoryRecord) -> ET.Element:
162
178
  href.text = f"/{drecord.url}"
163
179
  propstat = ET.SubElement(dir_el, f"{{{DAV_NS}}}propstat")
164
180
  prop = ET.SubElement(propstat, f"{{{DAV_NS}}}prop")
165
- ET.SubElement(prop, f"{{{DAV_NS}}}displayname").text = drecord.url.split("/")[-2]
181
+ ET.SubElement(prop, f"{{{DAV_NS}}}displayname").text = decode_uri_compnents(drecord.url.split("/")[-2])
166
182
  ET.SubElement(prop, f"{{{DAV_NS}}}resourcetype").append(ET.Element(f"{{{DAV_NS}}}collection"))
167
183
  if drecord.size >= 0:
168
184
  ET.SubElement(prop, f"{{{DAV_NS}}}getlastmodified").text = format_last_modified(drecord.create_time)
169
185
  ET.SubElement(prop, f"{{{DAV_NS}}}getcontentlength").text = str(drecord.size)
170
- lock_el = await query_lock_el(drecord.url, top_el_name=f"{{{DAV_NS}}}activelock")
186
+ lock_el = await query_lock_element(drecord.url, top_el_name=f"{{{DAV_NS}}}activelock")
171
187
  if lock_el is not None:
172
188
  lock_discovery = ET.SubElement(prop, f"{{{DAV_NS}}}lockdiscovery")
173
189
  lock_discovery.append(lock_el)
190
+ ET.SubElement(propstat, f"{{{DAV_NS}}}status").text = "HTTP/1.1 200 OK"
174
191
  return dir_el
175
192
 
176
193
  async def xml_request_body(request: Request) -> Optional[ET.Element]:
@@ -186,64 +203,59 @@ async def dav_options(request: Request, path: str):
186
203
  return Response(headers={
187
204
  "DAV": "1,2",
188
205
  "MS-Author-Via": "DAV",
189
- "Allow": "OPTIONS, GET, HEAD, POST, DELETE, TRACE, PROPFIND, PROPPATCH, COPY, MOVE, LOCK, UNLOCK",
206
+ "Allow": "OPTIONS, GET, HEAD, POST, DELETE, TRACE, PROPFIND, PROPPATCH, COPY, MOVE, LOCK, UNLOCK, MKCOL",
190
207
  "Content-Length": "0"
191
208
  })
192
209
 
193
- @router_dav.get("/{path:path}")
194
- @handle_exception
195
- async def dav_get(request: Request, path: str, user: UserRecord = Depends(get_current_user)):
196
- ptype, path, _ = await eval_path(path)
197
- if ptype is None: raise PathNotFoundError(path)
198
- # elif ptype == "dir": raise InvalidOptionsError("Directory should not be fetched")
199
- else: return await get_file_impl(request, user=user, path=path)
200
-
201
- @router_dav.head("/{path:path}")
202
- @handle_exception
203
- async def dav_head(request: Request, path: str, user: UserRecord = Depends(registered_user)):
204
- ptype, path, _ = await eval_path(path)
205
- # some clients may send HEAD request to check if the file exists
206
- if ptype is None: raise PathNotFoundError(path)
207
- elif ptype == "dir": return Response(status_code=200)
208
- else: return await get_file_impl(request, user=user, path=path, is_head=True)
209
-
210
- @router_dav.put("/{path:path}")
211
- @handle_exception
212
- async def dav_put(request: Request, path: str, user: UserRecord = Depends(registered_user)):
213
- _, path, _ = await eval_path(path)
214
- return await put_file_impl(request, user=user, path=path, conflict='overwrite')
215
-
216
- @router_dav.delete("/{path:path}")
217
- @handle_exception
218
- async def dav_delete(path: str, user: UserRecord = Depends(registered_user)):
219
- _, path, _ = await eval_path(path)
220
- return await delete_impl(user=user, path=path)
221
-
222
210
  @router_dav.api_route("/{path:path}", methods=["PROPFIND"])
223
211
  @handle_exception
224
- async def dav_propfind(request: Request, path: str, user: UserRecord = Depends(registered_user)):
212
+ async def dav_propfind(request: Request, path: str, user: UserRecord = Depends(registered_user), body: Optional[ET.Element] = Depends(xml_request_body)):
225
213
  if path.startswith("/"): path = path[1:]
226
214
  path = ensure_uri_compnents(path)
227
215
 
228
- depth = request.headers.get("Depth", "1")
216
+ if body and DEBUG_MODE:
217
+ print("Propfind-body:", ET.tostring(body, encoding="utf-8", method="xml"))
218
+
219
+ depth = request.headers.get("Depth", "0")
229
220
  # Generate XML response
230
221
  multistatus = ET.Element(f"{{{DAV_NS}}}multistatus")
231
222
  path_type, lfss_path, record = await eval_path(path)
232
- logger.info(f"PROPFIND {lfss_path} (depth: {depth})")
233
- return_status = 200
223
+ logger.info(f"PROPFIND {lfss_path} (depth: {depth}), type: {path_type}, record: {record}")
224
+
225
+ if lfss_path and await check_path_permission(lfss_path, user) < AccessLevel.READ:
226
+ raise PermissionDeniedError(lfss_path)
227
+
234
228
  if path_type == "dir" and depth == "0":
235
229
  # query the directory itself
236
- return_status = 200
237
230
  assert isinstance(record, DirectoryRecord)
238
231
  dir_el = await create_dir_xml_element(record)
239
232
  multistatus.append(dir_el)
240
233
 
234
+ elif path_type == "dir" and lfss_path == "":
235
+ # query root directory content
236
+ async def user_path_record(user_name: str, cur) -> DirectoryRecord:
237
+ try:
238
+ return await FileConn(cur).get_path_record(user_name + "/")
239
+ except PathNotFoundError:
240
+ return DirectoryRecord(user_name + "/", size=0, n_files=0, create_time="1970-01-01 00:00:00", update_time="1970-01-01 00:00:00", access_time="1970-01-01 00:00:00")
241
+
242
+ async with unique_cursor() as c:
243
+ uconn = UserConn(c)
244
+ if not user.is_admin:
245
+ for u in [user] + await uconn.list_peer_users(user.id, AccessLevel.READ):
246
+ dir_el = await create_dir_xml_element(await user_path_record(u.username, c))
247
+ multistatus.append(dir_el)
248
+ else:
249
+ async for u in uconn.all():
250
+ dir_el = await create_dir_xml_element(await user_path_record(u.username, c))
251
+ multistatus.append(dir_el)
252
+
241
253
  elif path_type == "dir":
242
- return_status = 207
254
+ # query directory content
243
255
  async with unique_cursor() as c:
244
256
  flist = await FileConn(c).list_path_files(lfss_path, flat = True if depth == "infinity" else False)
245
257
  for frecord in flist:
246
- if frecord.url.split("/")[-1] == MKDIR_PLACEHOLDER: continue
258
+ if frecord.url.endswith(f"/{MKDIR_PLACEHOLDER}"): continue
247
259
  file_el = await create_file_xml_element(frecord)
248
260
  multistatus.append(file_el)
249
261
 
@@ -254,6 +266,7 @@ async def dav_propfind(request: Request, path: str, user: UserRecord = Depends(r
254
266
  multistatus.append(dir_el)
255
267
 
256
268
  elif path_type == "file":
269
+ # query file
257
270
  assert isinstance(record, FileRecord)
258
271
  file_el = await create_file_xml_element(record)
259
272
  multistatus.append(file_el)
@@ -262,7 +275,7 @@ async def dav_propfind(request: Request, path: str, user: UserRecord = Depends(r
262
275
  raise PathNotFoundError(path)
263
276
 
264
277
  xml_response = ET.tostring(multistatus, encoding="utf-8", method="xml")
265
- return Response(content=xml_response, media_type="application/xml", status_code=return_status)
278
+ return Response(content=xml_response, media_type="application/xml", status_code=207)
266
279
 
267
280
  @router_dav.api_route("/{path:path}", methods=["MKCOL"])
268
281
  @handle_exception
@@ -289,7 +302,7 @@ async def dav_move(request: Request, path: str, user: UserRecord = Depends(regis
289
302
  ptype, lfss_path, _ = await eval_path(path)
290
303
  if ptype is None:
291
304
  raise PathNotFoundError(path)
292
- dptype, dlfss_path, ddav_path = await eval_path(destination)
305
+ dptype, dlfss_path, _ = await eval_path(destination)
293
306
  if dptype is not None:
294
307
  raise HTTPException(status_code=409, detail="Conflict")
295
308
 
@@ -339,13 +352,13 @@ async def dav_lock(request: Request, path: str, user: UserRecord = Depends(regis
339
352
  # lock_token = f"opaquelocktoken:{uuid.uuid4().hex}"
340
353
  lock_token = f"urn:uuid:{uuid.uuid4()}"
341
354
  logger.info(f"LOCK {path} (timeout: {timeout}), token: {lock_token}, depth: {lock_depth}")
342
- if DEBUG_MODE:
355
+ if DEBUG_MODE and body:
343
356
  print("Lock-body:", ET.tostring(body, encoding="utf-8", method="xml"))
344
357
  async with dav_lock.lock:
345
358
  await lock_path(user, path, lock_token, lock_depth, timeout=timeout)
346
359
  response_elem = ET.Element(f"{{{DAV_NS}}}prop")
347
360
  lockdiscovery = ET.SubElement(response_elem, f"{{{DAV_NS}}}lockdiscovery")
348
- activelock = await query_lock_el(path, top_el_name=f"{{{DAV_NS}}}activelock")
361
+ activelock = await query_lock_element(path, top_el_name=f"{{{DAV_NS}}}activelock")
349
362
  assert activelock is not None
350
363
  lockdiscovery.append(activelock)
351
364
  lock_response = ET.tostring(response_elem, encoding="utf-8", method="xml")
@@ -362,7 +375,7 @@ async def dav_unlock(request: Request, path: str, user: UserRecord = Depends(reg
362
375
  if lock_token.startswith("<") and lock_token.endswith(">"):
363
376
  lock_token = lock_token[1:-1]
364
377
  logger.info(f"UNLOCK {path}, token: {lock_token}")
365
- if DEBUG_MODE:
378
+ if DEBUG_MODE and body:
366
379
  print("Unlock-body:", ET.tostring(body, encoding="utf-8", method="xml"))
367
380
  _, path, _ = await eval_path(path)
368
381
  await unlock_path(user, path, lock_token)
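
With the changes above, `PROPFIND` always answers 207, and a request against the server root now lists the user directories the authenticated user can read (the user's own directory plus readable peers, or all users for an admin). A hedged sketch; the URL and credentials are placeholders and WebDAV must be enabled:

```python
import requests

r = requests.request(
    "PROPFIND",
    "http://localhost:8000/",      # server root; placeholder URL
    headers={"Depth": "1"},        # Depth 0 would describe the root itself
    auth=("alice", "secret"),      # HTTP Basic, available when WebDAV is enabled
)
print(r.status_code)               # 207 Multi-Status
print(r.text)                      # multistatus XML, one <response> per accessible user directory
```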
lfss/svc/app_native.py CHANGED
@@ -1,19 +1,19 @@
1
1
  from typing import Optional, Literal
2
2
 
3
3
  from fastapi import Depends, Request, Response, UploadFile
4
+ from fastapi.responses import StreamingResponse
4
5
  from fastapi.exceptions import HTTPException
5
6
 
6
- from ..eng.config import MAX_BUNDLE_BYTES
7
7
  from ..eng.utils import ensure_uri_compnents
8
8
  from ..eng.connection_pool import unique_cursor
9
9
  from ..eng.database import check_file_read_permission, check_path_permission, UserConn, FileConn
10
10
  from ..eng.datatype import (
11
- FileReadPermission, FileRecord, UserRecord, AccessLevel,
11
+ FileReadPermission, UserRecord, AccessLevel,
12
12
  FileSortKey, DirSortKey
13
13
  )
14
14
 
15
15
  from .app_base import *
16
- from .common_impl import get_file_impl, put_file_impl, post_file_impl, delete_impl, copy_impl
16
+ from .common_impl import get_impl, put_file_impl, post_file_impl, delete_impl, copy_impl
17
17
 
18
18
  @router_fs.get("/{path:path}")
19
19
  @handle_exception
@@ -23,7 +23,7 @@ async def get_file(
23
23
  download: bool = False, thumb: bool = False,
24
24
  user: UserRecord = Depends(get_current_user)
25
25
  ):
26
- return await get_file_impl(
26
+ return await get_impl(
27
27
  request = request,
28
28
  user = user, path = path, download = download, thumb = thumb
29
29
  )
@@ -38,9 +38,7 @@ async def head_file(
38
38
  ):
39
39
  if path.startswith("_api/"):
40
40
  raise HTTPException(status_code=405, detail="HEAD not supported for API")
41
- if path.endswith("/"):
42
- raise HTTPException(status_code=405, detail="HEAD not supported for directory")
43
- return await get_file_impl(
41
+ return await get_impl(
44
42
  request = request,
45
43
  user = user, path = path, download = download, thumb = thumb, is_head = True
46
44
  )
@@ -50,7 +48,7 @@ async def head_file(
50
48
  async def put_file(
51
49
  request: Request,
52
50
  path: str,
53
- conflict: Literal["overwrite", "skip", "abort"] = "abort",
51
+ conflict: Literal["overwrite", "skip", "abort"] = "overwrite",
54
52
  permission: int = 0,
55
53
  user: UserRecord = Depends(registered_user)
56
54
  ):
@@ -64,7 +62,7 @@ async def put_file(
64
62
  async def post_file(
65
63
  path: str,
66
64
  file: UploadFile,
67
- conflict: Literal["overwrite", "skip", "abort"] = "abort",
65
+ conflict: Literal["overwrite", "skip", "abort"] = "overwrite",
68
66
  permission: int = 0,
69
67
  user: UserRecord = Depends(registered_user)
70
68
  ):
@@ -83,46 +81,23 @@ async def delete_file(path: str, user: UserRecord = Depends(registered_user)):
83
81
  async def bundle_files(path: str, user: UserRecord = Depends(registered_user)):
84
82
  logger.info(f"GET bundle({path}), user: {user.username}")
85
83
  path = ensure_uri_compnents(path)
86
- assert path.endswith("/") or path == ""
87
-
88
- if not path == "" and path[0] == "/": # adapt to both /path and path
84
+ if not path.endswith("/"):
85
+ raise HTTPException(status_code=400, detail="Path must end with /")
86
+ if path[0] == "/": # adapt to both /path and path
89
87
  path = path[1:]
88
+ if path == "":
89
+ raise HTTPException(status_code=400, detail="Cannot bundle root")
90
90
 
91
- # TODO: may check peer users here
92
- owner_records_cache: dict[int, UserRecord] = {} # cache owner records, ID -> UserRecord
93
- async def is_access_granted(file_record: FileRecord):
94
- owner_id = file_record.owner_id
95
- owner = owner_records_cache.get(owner_id, None)
96
- if owner is None:
97
- async with unique_cursor() as conn:
98
- uconn = UserConn(conn)
99
- owner = await uconn.get_user_by_id(owner_id, throw=True)
100
- owner_records_cache[owner_id] = owner
101
-
102
- allow_access, _ = check_file_read_permission(user, owner, file_record)
103
- return allow_access
104
-
105
- async with unique_cursor() as conn:
106
- fconn = FileConn(conn)
107
- files = await fconn.list_path_files(
108
- url = path, flat = True,
109
- limit=(await fconn.count_path_files(url = path, flat = True))
110
- )
111
- files = [f for f in files if await is_access_granted(f)]
112
- if len(files) == 0:
113
- raise HTTPException(status_code=404, detail="No files found")
114
-
115
- # return bundle of files
116
- total_size = sum([f.file_size for f in files])
117
- if total_size > MAX_BUNDLE_BYTES:
118
- raise HTTPException(status_code=400, detail="Too large to zip")
119
-
120
- file_paths = [f.url for f in files]
121
- zip_buffer = await db.zip_path(path, file_paths)
122
- return Response(
123
- content=zip_buffer.getvalue(), media_type="application/zip", headers={
124
- "Content-Disposition": f"attachment; filename=bundle.zip",
125
- "Content-Length": str(zip_buffer.getbuffer().nbytes)
91
+ async with unique_cursor() as cur:
92
+ dir_record = await FileConn(cur).get_path_record(path)
93
+
94
+ pathname = f"{path.split('/')[-2]}"
95
+ return StreamingResponse(
96
+ content = await db.zip_path_stream(path, op_user=user),
97
+ media_type = "application/zip",
98
+ headers = {
99
+ f"Content-Disposition": f"attachment; filename=bundle-{pathname}.zip",
100
+ "X-Content-Bytes": str(dir_record.size),
126
101
  }
127
102
  )
128
103
 
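
The bundle endpoint now streams the archive via `StreamingResponse` instead of buffering it in memory (the old `MAX_BUNDLE_BYTES` cap is gone). A minimal client-side sketch using the new `Connector.bundle` from `lfss/api/connector.py`; the path is a placeholder and the connector is assumed to pick up `LFSS_ENDPOINT`/`LFSS_TOKEN` from the environment:

```python
from lfss.api.connector import Connector

# Assumes LFSS_ENDPOINT and LFSS_TOKEN are set (see docs/Enviroment_variables.md).
conn = Connector()

# Under the hood this is a streaming GET to _api/bundle?path=...; the path must
# end with "/" and cannot be empty (the server root cannot be bundled).
with open("bundle-photos.zip", "wb") as f:
    for chunk in conn.bundle("alice/photos/"):
        f.write(chunk)
```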
lfss/svc/common_impl.py CHANGED
@@ -7,7 +7,7 @@ from ..eng.datatype import UserRecord, FileRecord, PathContents, AccessLevel, Fi
7
7
  from ..eng.database import FileConn, UserConn, delayed_log_access, check_file_read_permission, check_path_permission
8
8
  from ..eng.thumb import get_thumb
9
9
  from ..eng.utils import format_last_modified, ensure_uri_compnents
10
- from ..eng.config import CHUNK_SIZE
10
+ from ..eng.config import CHUNK_SIZE, DEBUG_MODE
11
11
 
12
12
  from .app_base import skip_request_log, db, logger
13
13
 
@@ -60,10 +60,15 @@ async def emit_file(
60
60
  else:
61
61
  arng_e = range_end
62
62
 
63
- if arng_s >= file_record.file_size or arng_e >= file_record.file_size:
64
- raise HTTPException(status_code=416, detail="Range not satisfiable")
65
- if arng_s > arng_e:
66
- raise HTTPException(status_code=416, detail="Invalid range")
63
+ if file_record.file_size > 0:
64
+ if arng_s >= file_record.file_size or arng_e >= file_record.file_size:
65
+ if DEBUG_MODE: print(f"[Invalid range] Actual range: {arng_s}-{arng_e} (size: {file_record.file_size})")
66
+ raise HTTPException(status_code=416, detail="Range not satisfiable")
67
+ if arng_s > arng_e:
68
+ raise HTTPException(status_code=416, detail="Invalid range")
69
+ else:
70
+ if not (arng_s == 0 and arng_e == -1):
71
+ raise HTTPException(status_code=416, detail="Invalid range (file size is 0)")
67
72
 
68
73
  headers = {
69
74
  "Content-Disposition": f"{disposition}; filename={fname}",
@@ -87,7 +92,7 @@ async def emit_file(
87
92
  status_code=206 if range_start != -1 or range_end != -1 else 200
88
93
  )
89
94
 
90
- async def get_file_impl(
95
+ async def get_impl(
91
96
  request: Request,
92
97
  user: UserRecord,
93
98
  path: str,
@@ -96,30 +101,12 @@ async def get_file_impl(
96
101
  is_head = False,
97
102
  ):
98
103
  path = ensure_uri_compnents(path)
104
+ if path.startswith("/"): path = path[1:]
99
105
 
100
106
  # handle directory query
101
107
  if path == "": path = "/"
102
108
  if path.endswith("/"):
103
- # return file under the path as json
104
- async with unique_cursor() as cur:
105
- fconn = FileConn(cur)
106
- if user.id == 0:
107
- raise HTTPException(status_code=401, detail="Permission denied, credential required")
108
- if thumb:
109
- return await emit_thumbnail(path, download, create_time=None)
110
-
111
- if path == "/":
112
- peer_users = await UserConn(cur).list_peer_users(user.id, AccessLevel.READ)
113
- return PathContents(
114
- dirs = await fconn.list_root_dirs(user.username, *[x.username for x in peer_users], skim=True) \
115
- if not user.is_admin else await fconn.list_root_dirs(skim=True),
116
- files = []
117
- )
118
-
119
- if not await check_path_permission(path, user, cursor=cur) >= AccessLevel.READ:
120
- raise HTTPException(status_code=403, detail="Permission denied")
121
-
122
- return await fconn.list_path(path)
109
+ return await _get_dir_impl(user=user, path=path, download=download, thumb=thumb, is_head=is_head)
123
110
 
124
111
  # handle file query
125
112
  async with unique_cursor() as cur:
@@ -147,6 +134,9 @@ async def get_file_impl(
147
134
  else:
148
135
  range_start, range_end = -1, -1
149
136
 
137
+ if DEBUG_MODE:
138
+ print(f"Get range: {range_start}-{range_end}")
139
+
150
140
  if thumb:
151
141
  if (range_start != -1 or range_end != -1): logger.warning("Range request for thumbnail")
152
142
  return await emit_thumbnail(path, download, create_time=file_record.create_time, is_head=is_head)
@@ -156,11 +146,55 @@ async def get_file_impl(
156
146
  else:
157
147
  return await emit_file(file_record, None, "inline", is_head = is_head, range_start=range_start, range_end=range_end)
158
148
 
149
+ async def _get_dir_impl(
150
+ user: UserRecord,
151
+ path: str,
152
+ download: bool = False,
153
+ thumb: bool = False,
154
+ is_head = False,
155
+ ):
156
+ """ handle directory query, return file under the path as json """
157
+ assert path.endswith("/")
158
+ async with unique_cursor() as cur:
159
+ fconn = FileConn(cur)
160
+ if user.id == 0:
161
+ raise HTTPException(status_code=401, detail="Permission denied, credential required")
162
+ if thumb:
163
+ return await emit_thumbnail(path, download, create_time=None)
164
+
165
+ if path == "/":
166
+ if is_head: return Response(status_code=200)
167
+ peer_users = await UserConn(cur).list_peer_users(user.id, AccessLevel.READ)
168
+ return PathContents(
169
+ dirs = await fconn.list_root_dirs(user.username, *[x.username for x in peer_users], skim=True) \
170
+ if not user.is_admin else await fconn.list_root_dirs(skim=True),
171
+ files = []
172
+ )
173
+
174
+ if not await check_path_permission(path, user, cursor=cur) >= AccessLevel.READ:
175
+ raise HTTPException(status_code=403, detail="Permission denied")
176
+
177
+ path_sp = path.split("/")
178
+ if is_head:
179
+ if len(path_sp) == 2:
180
+ assert path_sp[1] == ""
181
+ if await UserConn(cur).get_user(path_sp[0]):
182
+ return Response(status_code=200)
183
+ else:
184
+ raise HTTPException(status_code=404, detail="User not found")
185
+ else:
186
+ if await FileConn(cur).count_path_files(path, flat=True) > 0:
187
+ return Response(status_code=200)
188
+ else:
189
+ raise HTTPException(status_code=404, detail="Path not found")
190
+
191
+ return await fconn.list_path(path)
192
+
159
193
  async def put_file_impl(
160
194
  request: Request,
161
195
  user: UserRecord,
162
196
  path: str,
163
- conflict: Literal["overwrite", "skip", "abort"] = "abort",
197
+ conflict: Literal["overwrite", "skip", "abort"] = "overwrite",
164
198
  permission: int = 0,
165
199
  ):
166
200
  path = ensure_uri_compnents(path)
@@ -187,7 +221,9 @@ async def put_file_impl(
187
221
  exists_flag = True
188
222
  if await check_path_permission(path, user) < AccessLevel.WRITE:
189
223
  raise HTTPException(status_code=403, detail="Permission denied, cannot overwrite other's file")
190
- await db.delete_file(path)
224
+ old_record = await db.delete_file(path)
225
+ if old_record and permission == FileReadPermission.UNSET.value:
226
+ permission = old_record.permission.value # inherit permission
191
227
 
192
228
  # check content-type
193
229
  content_type = request.headers.get("Content-Type", "application/octet-stream")
@@ -213,7 +249,7 @@ async def post_file_impl(
213
249
  path: str,
214
250
  user: UserRecord,
215
251
  file: UploadFile,
216
- conflict: Literal["overwrite", "skip", "abort"] = "abort",
252
+ conflict: Literal["overwrite", "skip", "abort"] = "overwrite",
217
253
  permission: int = 0,
218
254
  ):
219
255
  path = ensure_uri_compnents(path)
@@ -240,7 +276,9 @@ async def post_file_impl(
240
276
  exists_flag = True
241
277
  if await check_path_permission(path, user) < AccessLevel.WRITE:
242
278
  raise HTTPException(status_code=403, detail="Permission denied, cannot overwrite other's file")
243
- await db.delete_file(path)
279
+ old_record = await db.delete_file(path)
280
+ if old_record and permission == FileReadPermission.UNSET.value:
281
+ permission = old_record.permission.value # inherit permission
244
282
 
245
283
  async def blob_reader():
246
284
  nonlocal file
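
The range handling in `emit_file` above now tolerates zero-byte files (only an unspecified range is accepted for them) and answers 416 when a range falls outside the file. A hedged sketch of a partial download; the URL and token are placeholders:

```python
import requests

# Placeholder file URL and token (sha256 of "<username>:<password>").
r = requests.get(
    "http://localhost:8000/alice/data/big.bin",
    params={"token": "<token>"},
    headers={"Range": "bytes=0-1023"},   # first 1 KiB
)
print(r.status_code)    # 206 Partial Content for a satisfiable range, 416 otherwise
print(len(r.content))   # 1024
```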
lfss-0.9.2.dist-info/METADATA → lfss-0.9.5.dist-info/METADATA RENAMED
@@ -1,6 +1,6 @@
1
1
  Metadata-Version: 2.1
2
2
  Name: lfss
3
- Version: 0.9.2
3
+ Version: 0.9.5
4
4
  Summary: Lightweight file storage service
5
5
  Home-page: https://github.com/MenxLi/lfss
6
6
  Author: li_mengxun
@@ -17,11 +17,12 @@ Requires-Dist: mimesniff (==1.*)
17
17
  Requires-Dist: pillow
18
18
  Requires-Dist: python-multipart
19
19
  Requires-Dist: requests (==2.*)
20
+ Requires-Dist: stream-zip (==0.*)
20
21
  Requires-Dist: uvicorn (==0.*)
21
22
  Project-URL: Repository, https://github.com/MenxLi/lfss
22
23
  Description-Content-Type: text/markdown
23
24
 
24
- # Lightweight File Storage Service (LFSS)
25
+ # Lite File Storage Service (LFSS)
25
26
  [![PyPI](https://img.shields.io/pypi/v/lfss)](https://pypi.org/project/lfss/)
26
27
 
27
28
  My experiment on a lightweight and high-performance file/object storage service...
@@ -55,8 +56,8 @@ Or, you can start a web server at `/frontend` and open `index.html` in your brow
55
56
 
56
57
  The API usage is simple: just `GET`, `PUT`, or `DELETE` to the `/<username>/file/url` path.
57
58
  The authentication can be achieved through one of the following methods:
58
- 1. `Authorization` header with the value `Bearer sha256(<username><password>)`.
59
- 2. `token` query parameter with the value `sha256(<username><password>)`.
59
+ 1. `Authorization` header with the value `Bearer sha256(<username>:<password>)`.
60
+ 2. `token` query parameter with the value `sha256(<username>:<password>)`.
60
61
  3. HTTP Basic Authentication with the username and password (If WebDAV is enabled).
61
62
 
62
63
  You can refer to `frontend` as an application example, `lfss/api/connector.py` for more APIs.
lfss-0.9.2.dist-info/RECORD → lfss-0.9.5.dist-info/RECORD RENAMED
@@ -1,8 +1,9 @@
1
- Readme.md,sha256=JVe9T6N1Rz4hTiiCVoDYe2VB0dAi60VcBgb2twQdfZc,1834
2
- docs/Enviroment_variables.md,sha256=LUZF1o70emp-5UPsvXPjcxapP940OqEZzSyyUUT9bEQ,569
1
+ Readme.md,sha256=ST2E12DJVlKTReiJjRBc7KZyAr8KyqlcK2BoTc_Peaw,1829
2
+ docs/Changelog.md,sha256=QYej_hmGnv9t8wjFHXBvmrBOvY7aACZ82oa5SVkIyzM,882
3
+ docs/Enviroment_variables.md,sha256=xaL8qBwT8B2Qe11FaOU3xWrRCh1mJ1VyTFCeFbkd0rs,570
3
4
  docs/Known_issues.md,sha256=ZqETcWP8lzTOel9b2mxEgCnADFF8IxOrEtiVO1NoMAk,251
4
5
  docs/Permission.md,sha256=mvK8gVBBgoIFJqikcaReU_bUo-mTq_ECqJaDDJoQF7Q,3126
5
- docs/Webdav.md,sha256=9Q41ROEJodVVAnlo1Tf0jqsyrbuHhv_ElSsXbIPXYIg,1547
6
+ docs/Webdav.md,sha256=-Ja-BTWSY1BEMAyZycvEMNnkNTPZ49gSPzmf3Lbib70,1547
6
7
  frontend/api.js,sha256=GlQsNoZFEcy7QUUsLbXv7aP-KxRnIxM37FQHTaakGiQ,19387
7
8
  frontend/index.html,sha256=-k0bJ5FRqdl_H-O441D_H9E-iejgRCaL_z5UeYaS2qc,3384
8
9
  frontend/info.css,sha256=Ny0N3GywQ3a9q1_Qph_QFEKB4fEnTe_2DJ1Y5OsLLmQ,595
@@ -18,7 +19,7 @@ frontend/thumb.css,sha256=rNsx766amYS2DajSQNabhpQ92gdTpNoQKmV69OKvtpI,295
18
19
  frontend/thumb.js,sha256=46ViD2TlTTWy0fx6wjoAs_5CQ4ajYB90vVzM7UO2IHw,6182
19
20
  frontend/utils.js,sha256=IYUZl77ugiXKcLxSNOWC4NSS0CdD5yRgUsDb665j0xM,2556
20
21
  lfss/api/__init__.py,sha256=8IJqrpWK1doIyVVbntvVic82A57ncwl5b0BRHX4Ri6A,6660
21
- lfss/api/connector.py,sha256=hHSEEWecKQGZH6oxAmYoG3q7lFfacCbOKVZiUIXT2y8,11819
22
+ lfss/api/connector.py,sha256=0iopAvqUiUJDjbAtAjr9ynmURnmB-Ejg3INL-877s_E,12192
22
23
  lfss/cli/__init__.py,sha256=lPwPmqpa7EXQ4zlU7E7LOe6X2kw_xATGdwoHphUEirA,827
23
24
  lfss/cli/balance.py,sha256=fUbKKAUyaDn74f7mmxMfBL4Q4voyBLHu6Lg_g8GfMOQ,4121
24
25
  lfss/cli/cli.py,sha256=aYjB8d4k6JUd9efxZK-XOj-mlG4JeOr_0lnj2qqCiK0,8066
@@ -28,23 +29,23 @@ lfss/cli/user.py,sha256=1mTroQbaKxHjFCPHT67xwd08v-zxH0RZ_OnVc-4MzL0,5364
28
29
  lfss/cli/vacuum.py,sha256=GOG72d3NYe9bYCNc3y8JecEmM-DrKlGq3JQcisv_xBg,3702
29
30
  lfss/eng/__init__.py,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0
30
31
  lfss/eng/bounded_pool.py,sha256=BI1dU-MBf82TMwJBYbjhEty7w1jIUKc5Bn9SnZ_-hoY,1288
31
- lfss/eng/config.py,sha256=DmnUYMeLOL-45OstysyMpSBPmLofgzvcSrsWjHvssYs,915
32
- lfss/eng/connection_pool.py,sha256=-tePasJxiZZ73ymgWf_kFnaKouc4Rrr4K6EXwjb7Mm4,6141
33
- lfss/eng/database.py,sha256=cfMq7Hgj8cHFtynDzpRiqb0XYNb6OKWMYc8PcWl8eVw,47285
32
+ lfss/eng/config.py,sha256=Vni6h52Ce0njVrHZLAWFL8g34YDBdlmGrmRhpxElxQ8,868
33
+ lfss/eng/connection_pool.py,sha256=4xOF1kXXGqCWeLX5ZVFALKjdY8N1VVAVSSTRfCzbj94,6141
34
+ lfss/eng/database.py,sha256=9KMV-VeD8Dgd6M9RfNfAKwF5gWm763eJpJ2g5zyn_uY,48691
34
35
  lfss/eng/datatype.py,sha256=27UB7-l9SICy5lAvKjdzpTL_GohZjzstQcr9PtAq7nM,2709
35
- lfss/eng/error.py,sha256=sDbXo2R3APJAV0KtoYGCHx2qVZso7svtDzq-WjnzhAw,595
36
+ lfss/eng/error.py,sha256=dAlQHXOnQcSkA2vTugJFSxcyDqoFlPucBoFpTZ7GI6w,654
36
37
  lfss/eng/log.py,sha256=u6WRZZsE7iOx6_CV2NHh1ugea26p408FI4WstZh896A,5139
37
- lfss/eng/thumb.py,sha256=YO1yTI8WzW7pBpQN9x5PtPayxhftb32IJl1zPSS9mks,3243
38
- lfss/eng/utils.py,sha256=zZ7r9BsNV8XJJVNOxfIqRCO1bxNzh7bc9vEJiCkgbKI,6208
38
+ lfss/eng/thumb.py,sha256=x9jIHHU1tskmp-TavPPcxGpbmEjCp9gbH6ZlsEfqUxY,3383
39
+ lfss/eng/utils.py,sha256=WYoXFFi5308UWtFC8VP792gpzrVbHZZHhP3PaFjxIEY,6770
39
40
  lfss/sql/init.sql,sha256=8LjHx0TBCkBD62xFfssSeHDqKYVQQJkZAg4rSm046f4,1496
40
41
  lfss/sql/pragma.sql,sha256=uENx7xXjARmro-A3XAK8OM8v5AxDMdCCRj47f86UuXg,206
41
42
  lfss/svc/app.py,sha256=ftWCpepBx-gTSG7i-TB-IdinPPstAYYQjCgnTfeMZeI,219
42
- lfss/svc/app_base.py,sha256=nc02DP4iMKP41fRl8M-iAhbHwyb4QJJTKKSJwtdCox4,6617
43
- lfss/svc/app_dav.py,sha256=nPMdPsYNcgxqHOt5bDaaA0Wy8AdRDJajEda_-KxOoHA,17466
44
- lfss/svc/app_native.py,sha256=xwMCOWp4ne3rmtiiYhfxETi__V-zPEfHw-c4iWNtXWc,9471
45
- lfss/svc/common_impl.py,sha256=_biK0F_AAw4PnMNWR0WuHJSRyIp1iTSOOIPBauZCJ9M,12143
43
+ lfss/svc/app_base.py,sha256=PgL5MNU3QTwgIJP8CflDi9YBZ-uo4hVf74ADyWTo9yg,6742
44
+ lfss/svc/app_dav.py,sha256=D0KSgjtTktPjIhyIKG5eRmBdh5X8HYFYH151E6gzlbc,18245
45
+ lfss/svc/app_native.py,sha256=N2cidZ5sIS0p9HOkPqWvpGkqe4bIA7CgGEp4CsvgZ6Q,8347
46
+ lfss/svc/common_impl.py,sha256=0fjbqHWgqDhLfBEu6aC0Z5qgNt67C7z0Qroj7aV3Iq4,13830
46
47
  lfss/svc/request_log.py,sha256=v8yXEIzPjaksu76Oh5vgdbUEUrw8Kt4etLAXBWSGie8,3207
47
- lfss-0.9.2.dist-info/METADATA,sha256=0Q5klZ2iwBF1ZUQ5iximW02mMmoAM5ib08s0IsdyuLE,2594
48
- lfss-0.9.2.dist-info/WHEEL,sha256=sP946D7jFCHeNz5Iq4fL4Lu-PrWrFsgfLXbbkciIZwg,88
49
- lfss-0.9.2.dist-info/entry_points.txt,sha256=VJ8svMz7RLtMCgNk99CElx7zo7M-N-z7BWDVw2HA92E,205
50
- lfss-0.9.2.dist-info/RECORD,,
48
+ lfss-0.9.5.dist-info/METADATA,sha256=fToo-wrY0_XJPbk3cte8hb_4DClyPFv3ZTzJB77G-5w,2623
49
+ lfss-0.9.5.dist-info/WHEEL,sha256=sP946D7jFCHeNz5Iq4fL4Lu-PrWrFsgfLXbbkciIZwg,88
50
+ lfss-0.9.5.dist-info/entry_points.txt,sha256=VJ8svMz7RLtMCgNk99CElx7zo7M-N-z7BWDVw2HA92E,205
51
+ lfss-0.9.5.dist-info/RECORD,,
File without changes