lfss 0.9.0__py3-none-any.whl → 0.9.2__py3-none-any.whl

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Readme.md CHANGED
@@ -9,6 +9,7 @@ My experiment on a lightweight and high-performance file/object storage service.
  - Pagination and sorted file listing for vast number of files.
  - High performance: high concurrency, near-native speed on stress tests.
  - Support range requests, so you can stream large files / resume download.
+ - WebDAV compatible ([NOTE](./docs/Webdav.md)).
 
  It stores small files and metadata in sqlite, large files in the filesystem.
  Tested on 2 million files, and it is still fast.
@@ -30,9 +31,15 @@ lfss-panel --open
  Or, you can start a web server at `/frontend` and open `index.html` in your browser.
 
  The API usage is simple, just `GET`, `PUT`, `DELETE` to the `/<username>/file/url` path.
- Authentication via `Authorization` header with the value `Bearer <token>`, or through the `token` query parameter.
+ Authentication can be achieved through one of the following methods:
+ 1. `Authorization` header with the value `Bearer sha256(<username><password>)`.
+ 2. `token` query parameter with the value `sha256(<username><password>)`.
+ 3. HTTP Basic Authentication with the username and password (if WebDAV is enabled).
+
  You can refer to `frontend` as an application example, `lfss/api/connector.py` for more APIs.
 
  By default, the service exposes all files to the public for `GET` requests,
  but file-listing is restricted to the user's own files.
- Please refer to [docs/Permission.md](./docs/Permission.md) for more details on the permission system.
+ Please refer to [docs/Permission.md](./docs/Permission.md) for more details on the permission system.
+
+ More can be found in the [docs](./docs) directory.
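The token scheme in methods 1 and 2 above can be sketched in Python: the token is the SHA-256 hex digest of the username concatenated with the password (a sketch; the endpoint and credentials are placeholder values):

```python
import hashlib

def make_token(username: str, password: str) -> str:
    # sha256(<username><password>), hex-encoded, as the README describes
    return hashlib.sha256((username + password).encode()).hexdigest()

token = make_token("alice", "secret")
# Use as:  Authorization: Bearer <token>   or   ?token=<token>
headers = {"Authorization": f"Bearer {token}"}
```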
@@ -0,0 +1,12 @@
+
+ # Environment variables
+
+ **Server**
+ - `LFSS_DATA`: The directory in which to store the data. Default is `.storage_data`.
+ - `LFSS_WEBDAV`: Enable WebDAV support. Default is `0`; set to `1` to enable.
+ - `LFSS_LARGE_FILE`: The size limit for files stored in the database; larger files go to the filesystem. Default is `8m`.
+ - `LFSS_DEBUG`: Enable debug mode for more verbose logging. Default is `0`; set to `1` to enable.
+
+ **Client**
+ - `LFSS_ENDPOINT`: The fallback server endpoint. Default is `http://localhost:8000`.
+ - `LFSS_TOKEN`: The fallback token for authentication. Should be `sha256(<username><password>)`.
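The server-side defaults above might be read along these lines (a hedged sketch; `parse_size` is a hypothetical helper standing in for the package's own size parsing, not its actual API):

```python
import os

def parse_size(s: str) -> int:
    # Hypothetical helper: accepts suffixes like "8m" / "512k" / plain bytes.
    units = {"k": 1024, "m": 1024**2, "g": 1024**3}
    s = s.strip().lower()
    if s and s[-1] in units:
        return int(float(s[:-1]) * units[s[-1]])
    return int(s)

DATA_HOME = os.environ.get("LFSS_DATA", ".storage_data")
ENABLE_WEBDAV = os.environ.get("LFSS_WEBDAV", "0") == "1"
LARGE_FILE_BYTES = parse_size(os.environ.get("LFSS_LARGE_FILE", "8m"))
DEBUG_MODE = os.environ.get("LFSS_DEBUG", "0") == "1"
```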
docs/Known_issues.md CHANGED
@@ -1 +1,3 @@
- [Safari Chinese IME Enter-key capture](https://github.com/anse-app/anse/issues/127)
+ [Safari Chinese IME Enter-key capture](https://github.com/anse-app/anse/issues/127)
+
+ [Word temporary files](https://answers.microsoft.com/en-us/msoffice/forum/all/mac-os-word-temp-sb-folders-created-on-smb-share/40fda56c-c77c-4365-8fa3-eb87ac814207?page=1)
docs/Webdav.md ADDED
@@ -0,0 +1,22 @@
+ # WebDAV
+
+ It is convenient to make LFSS WebDAV-compatible, because both use HTTP `GET`, `PUT`, and `DELETE` methods to interact with files.
+
+ However, WebDAV uses additional HTTP methods,
+ which are disabled by default in LFSS because they may not be supported by some middlewares or clients.
+
+ WebDAV support can be enabled by setting the `LFSS_WEBDAV` environment variable to `1`, e.g.
+ ```sh
+ LFSS_WEBDAV=1 lfss-serve
+ ```
+ Please note:
+ 1. **WebDAV support is experimental and currently not well tested.**
+ 2. LFSS does not allow creating files in the root directory; however, some clients, such as [Finder](https://sabre.io/dav/clients/finder/), will try to do so. It is therefore safer to mount only a user directory, e.g. `http://localhost:8000/<username>/`.
+ 3. LFSS does not allow explicit directory creation; instead, it creates directories implicitly when a file is uploaded to a non-existent directory.
+ i.e. `PUT http://localhost:8000/<username>/dir/file.txt` will create the `dir` directory if it does not exist.
+ However, the WebDAV `MKCOL` method requires directories to be created explicitly, so `MKCOL` instead creates a placeholder file (`.lfss-keep`) on the path and hides it from `PROPFIND` file listings.
+ This leads to:
+ 1) You may see a `.lfss-keep` file in the directory in native file listings (e.g. `/_api/list-files`), but it is hidden from WebDAV clients.
+ 2) The directory may be deleted if it contains no files and its `.lfss-keep` file was not created by a WebDAV client.
+
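The `.lfss-keep` hiding described in note 3 amounts to filtering the placeholder out of a directory listing before building the `PROPFIND` response. A minimal sketch (the function and the sample listing are illustrative, not LFSS's actual WebDAV code):

```python
DAV_PLACEHOLDER = ".lfss-keep"

def visible_entries(entries: list[str]) -> list[str]:
    # Hide the MKCOL placeholder from WebDAV clients; native listings
    # (e.g. /_api/list-files) would still show it.
    return [e for e in entries if e.rsplit("/", 1)[-1] != DAV_PLACEHOLDER]

listing = ["u/dir/.lfss-keep", "u/dir/a.txt", "u/dir/b.txt"]
```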
frontend/api.js CHANGED
@@ -384,6 +384,27 @@ export default class Connector {
  }
  }
 
+ /**
+ * @param {string} srcPath - file path(url)
+ * @param {string} dstPath - new file path(url)
+ */
+ async copy(srcPath, dstPath){
+ if (srcPath.startsWith('/')){ srcPath = srcPath.slice(1); }
+ if (dstPath.startsWith('/')){ dstPath = dstPath.slice(1); }
+ const dst = new URL(this.config.endpoint + '/_api/copy');
+ dst.searchParams.append('src', srcPath);
+ dst.searchParams.append('dst', dstPath);
+ const res = await fetch(dst.toString(), {
+ method: 'POST',
+ headers: {
+ 'Authorization': 'Bearer ' + this.config.token,
+ 'Content-Type': 'application/x-www-form-urlencoded'
+ },
+ });
+ if (!(res.status == 200 || res.status == 201)){
+ throw new Error(`Failed to copy file, status code: ${res.status}, message: ${await fmtFailedResponse(res)}`);
+ }
+ }
  }
 
  /**
frontend/scripts.js CHANGED
@@ -347,7 +347,7 @@ async function refreshFileList(){
  }
  );
  }, {
- text: 'Enter the destination path: ',
+ text: 'Enter the destination path (Move): ',
  placeholder: 'Destination path',
  value: decodePathURI(dirurl),
  select: "last-pathname"
@@ -355,6 +355,30 @@ async function refreshFileList(){
  });
  actContainer.appendChild(moveButton);
 
+ const copyButton = document.createElement('a');
+ copyButton.textContent = 'Copy';
+ copyButton.style.cursor = 'pointer';
+ copyButton.addEventListener('click', () => {
+ showFloatingWindowLineInput((dstPath) => {
+ dstPath = encodePathURI(dstPath);
+ console.log("Copying", dirurl, "to", dstPath);
+ conn.copy(dirurl, dstPath)
+ .then(() => {
+ refreshFileList();
+ },
+ (err) => {
+ showPopup('Failed to copy path: ' + err, {level: 'error'});
+ }
+ );
+ }, {
+ text: 'Enter the destination path (Copy): ',
+ placeholder: 'Destination path',
+ value: decodePathURI(dirurl),
+ select: "last-pathname"
+ });
+ });
+ actContainer.appendChild(copyButton);
+
  const downloadButton = document.createElement('a');
  downloadButton.textContent = 'Download';
  downloadButton.href = conn.config.endpoint + '/_api/bundle?' +
@@ -478,7 +502,7 @@ async function refreshFileList(){
  }
  );
  }, {
- text: 'Enter the destination path: ',
+ text: 'Enter the destination path (Move): ',
  placeholder: 'Destination path',
  value: decodePathURI(file.url),
  select: "last-filename"
@@ -486,6 +510,29 @@ async function refreshFileList(){
  });
  actContainer.appendChild(moveButton);
 
+ const copyButton = document.createElement('a');
+ copyButton.textContent = 'Copy';
+ copyButton.style.cursor = 'pointer';
+ copyButton.addEventListener('click', () => {
+ showFloatingWindowLineInput((dstPath) => {
+ dstPath = encodePathURI(dstPath);
+ conn.copy(file.url, dstPath)
+ .then(() => {
+ refreshFileList();
+ },
+ (err) => {
+ showPopup('Failed to copy file: ' + err, {level: 'error'});
+ }
+ );
+ }, {
+ text: 'Enter the destination path (Copy): ',
+ placeholder: 'Destination path',
+ value: decodePathURI(file.url),
+ select: "last-filename"
+ });
+ });
+ actContainer.appendChild(copyButton);
+
  const downloadBtn = document.createElement('a');
  downloadBtn.textContent = 'Download';
  downloadBtn.href = conn.config.endpoint + '/' + file.url + '?download=true&token=' + conn.config.token;
lfss/api/__init__.py CHANGED
@@ -1,9 +1,9 @@
  import os, time, pathlib
  from threading import Lock
  from .connector import Connector
- from ..src.datatype import FileRecord
- from ..src.utils import decode_uri_compnents
- from ..src.bounded_pool import BoundedThreadPoolExecutor
+ from ..eng.datatype import FileRecord
+ from ..eng.utils import decode_uri_compnents
+ from ..eng.bounded_pool import BoundedThreadPoolExecutor
 
  def upload_file(
  connector: Connector,
lfss/api/connector.py CHANGED
@@ -5,12 +5,12 @@ import requests
  import requests.adapters
  import urllib.parse
  from tempfile import SpooledTemporaryFile
- from lfss.src.error import PathNotFoundError
- from lfss.src.datatype import (
+ from lfss.eng.error import PathNotFoundError
+ from lfss.eng.datatype import (
  FileReadPermission, FileRecord, DirectoryRecord, UserRecord, PathContents,
  FileSortKey, DirSortKey
  )
- from lfss.src.utils import ensure_uri_compnents
+ from lfss.eng.utils import ensure_uri_compnents
 
  _default_endpoint = os.environ.get('LFSS_ENDPOINT', 'http://localhost:8000')
  _default_token = os.environ.get('LFSS_TOKEN', '')
@@ -270,6 +270,12 @@ class Connector:
  self._fetch_factory('POST', '_api/meta', {'path': path, 'new_path': new_path})(
  headers = {'Content-Type': 'application/www-form-urlencoded'}
  )
+
+ def copy(self, src: str, dst: str):
+ """Copy file from src to dst."""
+ self._fetch_factory('POST', '_api/copy', {'src': src, 'dst': dst})(
+ headers = {'Content-Type': 'application/www-form-urlencoded'}
+ )
 
  def whoami(self) -> UserRecord:
  """Gets information about the current user."""
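The new `copy` endpoint is driven purely by query parameters, as in the connector above. A rough stdlib-only sketch of the URL such a request would target (endpoint and paths are placeholder values, and this does not perform any network call):

```python
import urllib.parse

def copy_request_url(endpoint: str, src: str, dst: str) -> str:
    # Mirrors Connector.copy: POST /_api/copy?src=...&dst=...
    query = urllib.parse.urlencode({"src": src, "dst": dst})
    return f"{endpoint}/_api/copy?{query}"

url = copy_request_url("http://localhost:8000", "alice/a.txt", "alice/b.txt")
```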
lfss/cli/balance.py CHANGED
@@ -2,14 +2,14 @@
  Balance the storage by ensuring that large file thresholds are met.
  """
 
- from lfss.src.config import LARGE_BLOB_DIR, LARGE_FILE_BYTES
+ from lfss.eng.config import LARGE_BLOB_DIR, LARGE_FILE_BYTES
  import argparse, time, itertools
  from functools import wraps
  from asyncio import Semaphore
  import aiofiles, asyncio
  import aiofiles.os
- from lfss.src.database import transaction, unique_cursor
- from lfss.src.connection_pool import global_entrance
+ from lfss.eng.database import transaction, unique_cursor
+ from lfss.eng.connection_pool import global_entrance
 
  sem: Semaphore
 
lfss/cli/cli.py CHANGED
@@ -1,8 +1,8 @@
  from pathlib import Path
  import argparse, typing
  from lfss.api import Connector, upload_directory, upload_file, download_file, download_directory
- from lfss.src.datatype import FileReadPermission, FileSortKey, DirSortKey
- from lfss.src.utils import decode_uri_compnents
+ from lfss.eng.datatype import FileReadPermission, FileSortKey, DirSortKey
+ from lfss.eng.utils import decode_uri_compnents
  from . import catch_request_error, line_sep
 
  def parse_permission(s: str) -> FileReadPermission:
lfss/cli/panel.py CHANGED
@@ -2,6 +2,7 @@
  import uvicorn
  from fastapi import FastAPI
  from fastapi.staticfiles import StaticFiles
+ from fastapi.middleware.cors import CORSMiddleware
 
  import argparse
  from contextlib import asynccontextmanager
@@ -27,6 +28,13 @@ assert (__frontend_dir / "index.html").exists(), "Frontend panel not found"
 
  app = FastAPI(lifespan=app_lifespan)
  app.mount("/", StaticFiles(directory=__frontend_dir, html=True), name="static")
+ app.add_middleware(
+ CORSMiddleware,
+ allow_origins=["*"],
+ allow_credentials=True,
+ allow_methods=["*"],
+ allow_headers=["*"],
+ )
 
  def main():
  parser = argparse.ArgumentParser(description="Serve frontend panel")
lfss/cli/serve.py CHANGED
@@ -1,7 +1,9 @@
  import argparse
  from uvicorn import Config, Server
  from uvicorn.config import LOGGING_CONFIG
- from ..src.server import *
+ from ..eng.config import DEBUG_MODE
+ from ..svc.app_base import logger
+ from ..svc.app import app
 
  def main():
  parser = argparse.ArgumentParser()
@@ -19,7 +21,7 @@ def main():
  app=app,
  host=args.host,
  port=args.port,
- access_log=False,
+ access_log=True if DEBUG_MODE else False,
  workers=args.workers,
  log_config=default_logging_config
  )
lfss/cli/user.py CHANGED
@@ -1,10 +1,10 @@
  import argparse, asyncio, os
  from contextlib import asynccontextmanager
  from .cli import parse_permission, FileReadPermission
- from ..src.utils import parse_storage_size, fmt_storage_size
- from ..src.datatype import AccessLevel
- from ..src.database import Database, FileReadPermission, transaction, UserConn, unique_cursor, FileConn
- from ..src.connection_pool import global_entrance
+ from ..eng.utils import parse_storage_size, fmt_storage_size
+ from ..eng.datatype import AccessLevel
+ from ..eng.database import Database, FileReadPermission, transaction, UserConn, unique_cursor, FileConn
+ from ..eng.connection_pool import global_entrance
 
  def parse_access_level(s: str) -> AccessLevel:
  for p in AccessLevel:
lfss/cli/vacuum.py CHANGED
@@ -2,17 +2,17 @@
  Vacuum the database and external storage to ensure that the storage is consistent and minimal.
  """
 
- from lfss.src.config import LARGE_BLOB_DIR
+ from lfss.eng.config import LARGE_BLOB_DIR
  import argparse, time
  from functools import wraps
  from asyncio import Semaphore
  import aiofiles, asyncio
  import aiofiles.os
  from contextlib import contextmanager
- from lfss.src.database import transaction, unique_cursor
- from lfss.src.stat import RequestDB
- from lfss.src.utils import now_stamp
- from lfss.src.connection_pool import global_entrance
+ from lfss.eng.database import transaction, unique_cursor
+ from lfss.svc.request_log import RequestDB
+ from lfss.eng.utils import now_stamp
+ from lfss.eng.connection_pool import global_entrance
 
  sem: Semaphore
 
@@ -21,6 +21,7 @@ else:
  MAX_MEM_FILE_BYTES = 128 * 1024 * 1024 # 128MB
  MAX_BUNDLE_BYTES = 512 * 1024 * 1024 # 512MB
  CHUNK_SIZE = 1024 * 1024 # 1MB chunks for streaming (on large files)
+ DEBUG_MODE = os.environ.get('LFSS_DEBUG', '0') == '1'
 
  THUMB_DB = DATA_HOME / 'thumbs.db'
  THUMB_SIZE = (48, 48)
@@ -19,7 +19,7 @@ from .datatype import (
  )
  from .config import LARGE_BLOB_DIR, CHUNK_SIZE, LARGE_FILE_BYTES, MAX_MEM_FILE_BYTES
  from .log import get_logger
- from .utils import decode_uri_compnents, hash_credential, concurrent_wrap, debounce_async
+ from .utils import decode_uri_compnents, hash_credential, concurrent_wrap, debounce_async, copy_file
  from .error import *
 
  class DBObjectBase(ABC):
@@ -405,18 +405,30 @@ class FileConn(DBObjectBase):
  )
  await self._user_size_inc(owner_id, file_size)
  self.logger.info(f"File {url} created")
-
- async def move_file(self, old_url: str, new_url: str):
+
+ # not tested
+ async def copy_file(self, old_url: str, new_url: str, user_id: Optional[int] = None):
  old = await self.get_file_record(old_url)
  if old is None:
  raise FileNotFoundError(f"File {old_url} not found")
  new_exists = await self.get_file_record(new_url)
  if new_exists is not None:
  raise FileExistsError(f"File {new_url} already exists")
- await self.cur.execute("UPDATE fmeta SET url = ?, create_time = CURRENT_TIMESTAMP WHERE url = ?", (new_url, old_url))
- self.logger.info(f"Moved file {old_url} to {new_url}")
+ new_fid = str(uuid.uuid4())
+ user_id = old.owner_id if user_id is None else user_id
+ await self.cur.execute(
+ "INSERT INTO fmeta (url, owner_id, file_id, file_size, permission, external, mime_type) VALUES (?, ?, ?, ?, ?, ?, ?)",
+ (new_url, user_id, new_fid, old.file_size, old.permission, old.external, old.mime_type)
+ )
+ if not old.external:
+ await self.set_file_blob(new_fid, await self.get_file_blob(old.file_id))
+ else:
+ await copy_file(LARGE_BLOB_DIR / old.file_id, LARGE_BLOB_DIR / new_fid)
+ await self._user_size_inc(user_id, old.file_size)
+ self.logger.info(f"Copied file {old_url} to {new_url}")
 
- async def move_path(self, old_url: str, new_url: str, conflict_handler: Literal['skip', 'overwrite'] = 'overwrite', user_id: Optional[int] = None):
+ # not tested
+ async def copy_path(self, old_url: str, new_url: str, conflict_handler: Literal['skip', 'overwrite'] = 'overwrite', user_id: Optional[int] = None):
  assert old_url.endswith('/'), "Old path must end with /"
  assert new_url.endswith('/'), "New path must end with /"
  if user_id is None:
@@ -426,12 +438,49 @@ class FileConn(DBObjectBase):
  cursor = await self.cur.execute("SELECT * FROM fmeta WHERE url LIKE ? AND owner_id = ?", (old_url + '%', user_id))
  res = await cursor.fetchall()
  for r in res:
- new_r = new_url + r[0][len(old_url):]
+ old_record = FileRecord(*r)
+ new_r = new_url + old_record.url[len(old_url):]
  if conflict_handler == 'overwrite':
  await self.cur.execute("DELETE FROM fmeta WHERE url = ?", (new_r, ))
  elif conflict_handler == 'skip':
  if (await self.cur.execute("SELECT url FROM fmeta WHERE url = ?", (new_r, ))) is not None:
  continue
+ new_fid = str(uuid.uuid4())
+ user_id = old_record.owner_id if user_id is None else user_id
+ await self.cur.execute(
+ "INSERT INTO fmeta (url, owner_id, file_id, file_size, permission, external, mime_type) VALUES (?, ?, ?, ?, ?, ?, ?)",
+ (new_r, user_id, new_fid, old_record.file_size, old_record.permission, old_record.external, old_record.mime_type)
+ )
+ if not old_record.external:
+ await self.set_file_blob(new_fid, await self.get_file_blob(old_record.file_id))
+ else:
+ await copy_file(LARGE_BLOB_DIR / old_record.file_id, LARGE_BLOB_DIR / new_fid)
+ await self._user_size_inc(user_id, old_record.file_size)
+
+ async def move_file(self, old_url: str, new_url: str):
+ old = await self.get_file_record(old_url)
+ if old is None:
+ raise FileNotFoundError(f"File {old_url} not found")
+ new_exists = await self.get_file_record(new_url)
+ if new_exists is not None:
+ raise FileExistsError(f"File {new_url} already exists")
+ await self.cur.execute("UPDATE fmeta SET url = ?, create_time = CURRENT_TIMESTAMP WHERE url = ?", (new_url, old_url))
+ self.logger.info(f"Moved file {old_url} to {new_url}")
+
+ async def move_path(self, old_url: str, new_url: str, user_id: Optional[int] = None):
+ assert old_url.endswith('/'), "Old path must end with /"
+ assert new_url.endswith('/'), "New path must end with /"
+ if user_id is None:
+ cursor = await self.cur.execute("SELECT * FROM fmeta WHERE url LIKE ?", (old_url + '%', ))
+ res = await cursor.fetchall()
+ else:
+ cursor = await self.cur.execute("SELECT * FROM fmeta WHERE url LIKE ? AND owner_id = ?", (old_url + '%', user_id))
+ res = await cursor.fetchall()
+ for r in res:
+ new_r = new_url + r[0][len(old_url):]
+ if await (await self.cur.execute("SELECT url FROM fmeta WHERE url = ?", (new_r, ))).fetchone():
+ self.logger.error(f"File {new_r} already exists on move path: {old_url} -> {new_url}")
+ raise FileDuplicateError(f"File {new_r} already exists")
  await self.cur.execute("UPDATE fmeta SET url = ?, create_time = CURRENT_TIMESTAMP WHERE url = ?", (new_r, r[0]))
 
  async def log_access(self, url: str):
@@ -633,6 +682,9 @@ class Database:
  async with unique_cursor() as cur:
  user = await get_user(cur, u)
  assert user is not None, f"User {u} not found"
+
+ if await check_path_permission(url, user, cursor=cur) < AccessLevel.WRITE:
+ raise PermissionDeniedError(f"Permission denied: {user.username} cannot write to {url}")
 
  fconn_r = FileConn(cur)
  user_size_used = await fconn_r.user_size(user.id)
@@ -734,14 +786,33 @@ class Database:
  if r is None:
  raise FileNotFoundError(f"File {old_url} not found")
  if op_user is not None:
- if await check_path_permission(old_url, op_user) < AccessLevel.WRITE:
+ if await check_path_permission(old_url, op_user, cursor=cur) < AccessLevel.WRITE:
  raise PermissionDeniedError(f"Permission denied: {op_user.username} cannot move file {old_url}")
+ if await check_path_permission(new_url, op_user, cursor=cur) < AccessLevel.WRITE:
+ raise PermissionDeniedError(f"Permission denied: {op_user.username} cannot move file to {new_url}")
  await fconn.move_file(old_url, new_url)
 
  new_mime, _ = mimetypes.guess_type(new_url)
  if not new_mime is None:
  await fconn.update_file_record(new_url, mime_type=new_mime)
 
+ # not tested
+ async def copy_file(self, old_url: str, new_url: str, op_user: Optional[UserRecord] = None):
+ validate_url(old_url)
+ validate_url(new_url)
+
+ async with transaction() as cur:
+ fconn = FileConn(cur)
+ r = await fconn.get_file_record(old_url)
+ if r is None:
+ raise FileNotFoundError(f"File {old_url} not found")
+ if op_user is not None:
+ if await check_path_permission(old_url, op_user, cursor=cur) < AccessLevel.READ:
+ raise PermissionDeniedError(f"Permission denied: {op_user.username} cannot copy file {old_url}")
+ if await check_path_permission(new_url, op_user, cursor=cur) < AccessLevel.WRITE:
+ raise PermissionDeniedError(f"Permission denied: {op_user.username} cannot copy file to {new_url}")
+ await fconn.copy_file(old_url, new_url, user_id=op_user.id if op_user is not None else None)
+
  async def move_path(self, old_url: str, new_url: str, op_user: UserRecord):
  validate_url(old_url, is_file=False)
  validate_url(new_url, is_file=False)
@@ -756,14 +827,38 @@ class Database:
 
  async with unique_cursor() as cur:
  if not (
- await check_path_permission(old_url, op_user) >= AccessLevel.WRITE and
- await check_path_permission(new_url, op_user) >= AccessLevel.WRITE
+ await check_path_permission(old_url, op_user, cursor=cur) >= AccessLevel.WRITE and
+ await check_path_permission(new_url, op_user, cursor=cur) >= AccessLevel.WRITE
  ):
  raise PermissionDeniedError(f"Permission denied: {op_user.username} cannot move path {old_url} to {new_url}")
 
  async with transaction() as cur:
  fconn = FileConn(cur)
- await fconn.move_path(old_url, new_url, 'overwrite', op_user.id)
+ await fconn.move_path(old_url, new_url, op_user.id)
+
+ # not tested
+ async def copy_path(self, old_url: str, new_url: str, op_user: UserRecord):
+ validate_url(old_url, is_file=False)
+ validate_url(new_url, is_file=False)
+
+ if new_url.startswith('/'):
+ new_url = new_url[1:]
+ if old_url.startswith('/'):
+ old_url = old_url[1:]
+ assert old_url != new_url, "Old and new path must be different"
+ assert old_url.endswith('/'), "Old path must end with /"
+ assert new_url.endswith('/'), "New path must end with /"
+
+ async with unique_cursor() as cur:
+ if not (
+ await check_path_permission(old_url, op_user, cursor=cur) >= AccessLevel.READ and
+ await check_path_permission(new_url, op_user, cursor=cur) >= AccessLevel.WRITE
+ ):
+ raise PermissionDeniedError(f"Permission denied: {op_user.username} cannot copy path {old_url} to {new_url}")
+
+ async with transaction() as cur:
+ fconn = FileConn(cur)
+ await fconn.copy_path(old_url, new_url, 'overwrite', op_user.id)
 
  async def __batch_delete_file_blobs(self, fconn: FileConn, file_records: list[FileRecord], batch_size: int = 512):
  # https://github.com/langchain-ai/langchain/issues/10321
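The path copy/move logic above rewrites each file URL by swapping the directory prefix (`new_url + url[len(old_url):]`). In isolation, assuming URLs are plain strings with trailing-slash directory prefixes:

```python
def rewrite_prefix(url: str, old_prefix: str, new_prefix: str) -> str:
    # Same rewrite as FileConn.copy_path / move_path: keep the tail,
    # swap the directory prefix. Both prefixes must end with '/'.
    assert old_prefix.endswith("/") and new_prefix.endswith("/")
    assert url.startswith(old_prefix)
    return new_prefix + url[len(old_prefix):]

moved = rewrite_prefix("alice/docs/a/b.txt", "alice/docs/", "alice/archive/")
```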
@@ -2,10 +2,16 @@ import sqlite3
 
  class LFSSExceptionBase(Exception):...
 
+ class FileLockedError(LFSSExceptionBase):...
+
+ class InvalidOptionsError(LFSSExceptionBase, ValueError):...
+
  class DatabaseLockedError(LFSSExceptionBase, sqlite3.DatabaseError):...
 
  class PathNotFoundError(LFSSExceptionBase, FileNotFoundError):...
 
+ class FileDuplicateError(LFSSExceptionBase, FileExistsError):...
+
  class PermissionDeniedError(LFSSExceptionBase, PermissionError):...
 
  class InvalidPathError(LFSSExceptionBase, ValueError):...
@@ -1,6 +1,6 @@
- from lfss.src.config import THUMB_DB, THUMB_SIZE
- from lfss.src.database import FileConn
- from lfss.src.connection_pool import unique_cursor
+ from lfss.eng.config import THUMB_DB, THUMB_SIZE
+ from lfss.eng.database import FileConn
+ from lfss.eng.connection_pool import unique_cursor
  from typing import Optional
  from PIL import Image
  from io import BytesIO
@@ -69,12 +69,13 @@ async def get_thumb(path: str) -> Optional[tuple[bytes, str]]:
  async with unique_cursor() as main_c:
  fconn = FileConn(main_c)
  r = await fconn.get_file_record(path)
- if r is None:
- async with cache_cursor() as cur:
- await _delete_cache_thumb(cur, path)
- raise FileNotFoundError(f'File not found: {path}')
- if not r.mime_type.startswith('image/'):
- return None
+
+ if r is None:
+ async with cache_cursor() as cur:
+ await _delete_cache_thumb(cur, path)
+ raise FileNotFoundError(f'File not found: {path}')
+ if not r.mime_type.startswith('image/'):
+ return None
 
  async with cache_cursor() as cur:
  c_time = r.create_time
@@ -1,8 +1,10 @@
  import datetime, time
  import urllib.parse
- import asyncio
+ import pathlib
  import functools
  import hashlib
+ import aiofiles
+ import asyncio
  from asyncio import Lock
  from collections import OrderedDict
  from concurrent.futures import ThreadPoolExecutor
@@ -11,6 +13,12 @@ from functools import wraps, partial
  from uuid import uuid4
  import os
 
+ async def copy_file(source: str|pathlib.Path, destination: str|pathlib.Path):
+ async with aiofiles.open(source, mode='rb') as src:
+ async with aiofiles.open(destination, mode='wb') as dest:
+ while chunk := await src.read(1024):
+ await dest.write(chunk)
+
  def hash_credential(username: str, password: str):
  return hashlib.sha256((username + password).encode()).hexdigest()
 
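The `copy_file` helper above streams the blob in fixed-size chunks rather than loading it into memory. A synchronous stdlib-only equivalent for illustration (the 1 KiB chunk size mirrors the diff, though a larger chunk such as 64 KiB would usually be chosen):

```python
import pathlib, tempfile

def copy_file_sync(source, destination, chunk_size: int = 1024):
    # Stream source to destination chunk by chunk, never holding
    # the whole file in memory.
    with open(source, "rb") as src, open(destination, "wb") as dest:
        while chunk := src.read(chunk_size):
            dest.write(chunk)

# Usage sketch with temporary files:
tmp = pathlib.Path(tempfile.mkdtemp())
(tmp / "a.bin").write_bytes(b"x" * 4096)
copy_file_sync(tmp / "a.bin", tmp / "b.bin")
```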
lfss/svc/app.py ADDED
@@ -0,0 +1,9 @@
+ from .app_base import ENABLE_WEBDAV
+ from .app_native import *
+
+ # order matters
+ app.include_router(router_api)
+ if ENABLE_WEBDAV:
+ from .app_dav import *
+ app.include_router(router_dav)
+ app.include_router(router_fs)