miniflux_sanity 0.2.0
- checksums.yaml +7 -0
- data/.editorconfig +4 -0
- data/.gitignore +2 -0
- data/.ruby-version +1 -0
- data/CODE_OF_CONDUCT.md +74 -0
- data/Gemfile +17 -0
- data/Gemfile.lock +75 -0
- data/LICENSE +661 -0
- data/README.md +64 -0
- data/Rakefile +1 -0
- data/assets/miniflux-sanity_after.png +0 -0
- data/assets/miniflux-sanity_before.png +0 -0
- data/assets/miniflux-sanity_cli.png +0 -0
- data/bin/miniflux_sanity +53 -0
- data/lib/config.rb +23 -0
- data/lib/hash.rb +8 -0
- data/lib/miniflux_api.rb +67 -0
- data/lib/miniflux_sanity.rb +101 -0
- data/lib/utils/cache.rb +67 -0
- data/miniflux_sanity.gemspec +24 -0
- metadata +65 -0
data/README.md
ADDED
@@ -0,0 +1,64 @@
# miniflux-sanity

A Ruby command-line utility to mark older entries as read on your Miniflux app. Defaults to items older than 30 days. Switch to ~1 day to wake up to a fresh feed every day.

![A screenshot from my Terminal showcasing the utility in action](./assets/miniflux-sanity_cli.png)

__Before:__

![A screenshot from my Miniflux app showing 5031 unread items](./assets/miniflux-sanity_before.png)

__After:__

![A screenshot from my Miniflux app showing 516 unread items](./assets/miniflux-sanity_after.png)

## Motivation

If I haven't read something in the preceding month, it's unlikely I ever will. Miniflux doesn't offer an _archive_ option, so we mark entries as read instead. All this really does is offer a saner overview of "unread" items at the top.

As is usually the case for me, I wanted to build something meaningful as I picked up Ruby again. This was a small use case that made for a good first challenge to tackle.

The code is admittedly not perfect. I welcome any constructive criticism or feedback, all the more if you are a Ruby enthusiast.

## Feature-set

- Uses token authentication
- Supports cloud and self-hosted Miniflux apps
- Configurable number of days before which to mark items as read
- Resumes marking as read if interrupted

## To-do

- [ ] Publish as a RubyGem for easier usage (see the [feature/rubygem](https://github.com/hirusi/miniflux-sanity/tree/feature/rubygem) branch)
- [ ] Unit testing
- [ ] Resume fetching if the command crashes midway
- [ ] If an item is starred _and_ unread, don't mark it as read.
  - This could lend itself to a nice workflow where my "to-read" items can be starred while scanning through the rest.

## Goals

- Get comfortable with Ruby's syntax
- Work with `Class`, `Module`, `dotenv`, etc.
- Work with `JSON`
- Work with Ruby's `File` API
- Interact with an API using an HTTP library

## Usage

You must have Ruby available on your system/shell.

Install by running `gem install miniflux_sanity`. All command-line options can be viewed by running `miniflux_sanity --help`.
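A typical session might look like the following. The flags are the ones the executable accepts (`--token`, `--host`, `--days`); the token and host values here are placeholders, so substitute your own:

```shell
# Install the utility
gem install miniflux_sanity

# Mark unread items older than 7 days as read on a self-hosted instance
miniflux_sanity --token YOUR_API_TOKEN --host https://miniflux.example.com --days 7

# Against the hosted service, only --token is required (host defaults
# to https://reader.miniflux.app/ and days to 30)
miniflux_sanity --token YOUR_API_TOKEN
```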
## Development

The Ruby version is specified in `.ruby-version`. `rbenv` is able to read it and set the correct local version in your shell.

- `git clone git@github.com:hirusi/miniflux-sanity.git`
- `cd miniflux-sanity`
- `cp .env.template .env`
- Update the `.env` file as required.
  - You'll need a token from your Miniflux app, generated under `Settings > API Keys > Create a new API key`.
- Install the dependencies: `bundle`
- Run the utility: `bundle exec ruby -Ilib bin/miniflux_sanity`

If you have a Docker setup to contribute using Alpine OS as its base, I'd be very happy to merge your PR.
data/Rakefile
ADDED
@@ -0,0 +1 @@
require "bundler/gem_tasks"
data/assets/miniflux-sanity_after.png
ADDED (binary file)
data/assets/miniflux-sanity_before.png
ADDED (binary file)
data/assets/miniflux-sanity_cli.png
ADDED (binary file)
data/bin/miniflux_sanity
ADDED
@@ -0,0 +1,53 @@
#!/usr/bin/env ruby

require 'miniflux_sanity'
require 'rationalist'

argv = Rationalist.parse(ARGV, **{
  :string => [
    'host',
    'token'
  ],
  :default => {
    :host => 'https://reader.miniflux.app/',
    :days => 30
  }
})

if argv[:help]
  puts "miniflux_sanity is a command line utility to mark items older than a specified time as read in Miniflux."
  puts ""
  puts "You may pass in the following arguments:"
  puts ""
  puts "--version   show the currently installed version"
  puts ""
  puts "--help      show this help message"
  puts ""
  puts "--host      your Miniflux host (optional)
            default: https://reader.miniflux.app/"
  puts ""
  puts "--token     your Miniflux API token (required)
            generate from Settings > API Keys > Create a new API key"
  puts ""
  puts "--days      number of days before which to mark items as read (optional)
            default: 30
            example: 7"
  exit true
end

if argv[:version]
  puts "miniflux_sanity v0.2.0"
  exit true
end

if argv[:token].nil?
  puts "You must at least specify the API token!"
  puts ""
  puts "--token (required) your Miniflux API token
        generate from Settings > API Keys > Create a new API key"
  exit false
end

miniflux_sanity = MinifluxSanity.new token: argv[:token], host: argv[:host], days: argv[:days]
miniflux_sanity.fetch_entries
miniflux_sanity.mark_entries_as_read
data/lib/config.rb
ADDED
@@ -0,0 +1,23 @@
class Config
  attr_reader :cutoff_date, :cutoff_timestamp, :auth

  def initialize(host:, token:, days:)
    require 'date'
    @cutoff_date = Date.today - days.to_i
    @cutoff_timestamp = @cutoff_date.to_time.to_i
    @auth = {
      :host => host,
      :token => token
    }
  end

  def load_env
    require 'dotenv'
    begin
      Dotenv.load
    rescue
      p 'Could not load environment variables.'
      exit(false)
    end
  end
end
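`Config` derives the cutoff from today's date and exposes it both as a `Date` and as epoch seconds. The same arithmetic can be checked standalone:

```ruby
require 'date'

# Mirror Config#initialize: a cutoff N days in the past, as a date and
# as a Unix timestamp (local midnight of that day).
days = 30
cutoff_date = Date.today - days
cutoff_timestamp = cutoff_date.to_time.to_i

# Entries published before this timestamp are candidates to mark as read.
puts cutoff_date
puts cutoff_timestamp
```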
data/lib/hash.rb
ADDED
@@ -0,0 +1,8 @@
class ::Hash
  # https://stackoverflow.com/a/25990044/2464435
  def deep_merge(second)
    merger = proc do |_, v1, v2|
      if Hash === v1 && Hash === v2
        v1.merge(v2, &merger)
      elsif Array === v1 && Array === v2
        v1 | v2
      elsif [:undefined, nil, :nil].include?(v2)
        v1
      else
        v2
      end
    end
    merge(second.to_h, &merger)
  end
end
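The patch merges nested hashes recursively rather than clobbering whole subtrees. A minimal sketch (the patch is re-declared here so the snippet runs on its own; the sample hashes are made up):

```ruby
# Same Hash#deep_merge monkey patch as in lib/hash.rb.
class ::Hash
  def deep_merge(second)
    merger = proc do |_, v1, v2|
      if Hash === v1 && Hash === v2
        v1.merge(v2, &merger)
      elsif Array === v1 && Array === v2
        v1 | v2
      elsif [:undefined, nil, :nil].include?(v2)
        v1
      else
        v2
      end
    end
    merge(second.to_h, &merger)
  end
end

# Nested keys merge instead of being replaced wholesale -- this is how
# per-request :query options layer on top of shared :headers.
base    = { :headers => { "Accept" => "application/json" } }
request = { :headers => { "Content-Type" => "application/json" }, :query => { :limit => 250 } }

merged = base.deep_merge(request)
puts merged.inspect
```

A plain `Hash#merge` would have replaced the entire `:headers` hash with the request's, dropping the `Accept` header.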
data/lib/miniflux_api.rb
ADDED
@@ -0,0 +1,67 @@
require "httparty"
require "date"
require_relative "hash"

class MinifluxApi
  include HTTParty
  maintain_method_across_redirects true

  def initialize(host:, token:)
    # Trim any trailing slash so the base URI doesn't end up with "//v1".
    self.class.base_uri "#{host.chomp('/')}/v1"

    @options = {
      :headers => {
        "X-Auth-Token": token,
        "Accept": "application/json"
      }
    }
  end

  def get_entries(before:, offset:, limit: 100, status: 'unread', direction: 'asc')
    custom_options = @options.deep_merge({
      :query => {
        :status => status,
        :direction => direction,
        :before => before,
        :offset => offset,
        :limit => limit
      }
    })
    response = self.class.get("/entries", custom_options)

    if response.code.to_i >= 400
      raise response.parsed_response.to_s
    else
      response.parsed_response["entries"]
    end
  rescue => error
    puts "Could not get entries from your Miniflux server. More details to follow.", error
    exit false
  end

  # Pass in an array of IDs
  def mark_entries_read(ids:)
    new_options = @options.deep_merge({
      :headers => {
        "Content-Type": "application/json"
      },
      :body => {
        :entry_ids => ids,
        :status => "read"
      }.to_json
    })

    response = self.class.put("/entries", new_options)

    if response.code.to_i == 204
      puts "Marked entries with ID #{ids.join ", "} as read."
    else
      puts "Could not mark entries with ID #{ids.join ", "} as read."
      exit(false)
    end
  end
end
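The body `mark_entries_read` sends with `PUT /entries` is plain JSON, so its shape can be checked without a server (the entry IDs below are made up):

```ruby
require 'json'

# Build the same payload mark_entries_read serializes into the request body.
ids = [101, 102, 103]
body = {
  :entry_ids => ids,
  :status => "read"
}.to_json

puts body

# The server sees a JSON object with an integer array and a status string.
parsed = JSON.parse(body)
puts parsed["entry_ids"].inspect
```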
data/lib/miniflux_sanity.rb
ADDED
@@ -0,0 +1,101 @@
require "date"
require_relative "config"
require_relative "miniflux_api"
require_relative "utils/cache"

class MinifluxSanity
  def initialize(token:, host:, days:)
    # Configuration object
    # TODO Is there a way to pass this cleanly? We're passing everything we receive, with the exact same argument names as well
    @@config = Config.new token: token, host: host, days: days

    # Set up miniflux and cache clients
    @@miniflux_client = MinifluxApi.new host: host, token: @@config.auth[:token]
    @@cache_client = Cache.new path: "cache.json"
  end

  def last_fetched_today?
    if @@cache_client.last_fetched.nil?
      false
    else
      Date.parse(@@cache_client.last_fetched) == Date.today
    end
  end

  def is_older_than_cutoff?(published_at:)
    Date.parse(published_at).to_time.to_i <= @@config.cutoff_timestamp
  end

  def fetch_entries
    if last_fetched_today?
      puts "Last run was today, skipping fetch."
      return
    end

    puts "Now collecting all unread entries before the specified date."

    # We fetch in blocks of 250; a block shorter than the limit is the last one.
    size = 0
    limit = 250
    count = limit
    until count < limit do
      entries = @@miniflux_client.get_entries before: @@config.cutoff_timestamp, offset: size, limit: limit

      if entries.length < 1
        puts "No more new entries"
        exit true
      end

      count = entries.count
      size += count
      puts "Fetched #{size} entries."

      # For some extra resilience, double-check the published_at date before
      # caching an entry. This helps if the Miniflux API itself has a bug with
      # its before filter, for example.
      entries = entries.filter do |entry|
        is_older_than_cutoff? published_at: entry["published_at"]
      end

      # The cache client updates its size and last_fetched fields here.
      @@cache_client.add_entries_to_file data: entries

      unless count < limit
        puts "Fetching more..."
      end
    end
  end

  def mark_entries_as_read
    start = 0
    interval = 10
    cached_data = @@cache_client.read_from_file

    while @@cache_client.size != 0 do
      stop = start + interval

      # For every 10 entries, mark as read.
      # Reduce size and remove entries accordingly in our file.
      filtered_data = cached_data["data"][start...stop]

      ids_to_mark_read = filtered_data.map { |entry| entry["id"] }

      @@miniflux_client.mark_entries_read ids: ids_to_mark_read

      # remove_entries_from_file recounts the cache, so size stays accurate
      # even when the final batch is smaller than the interval.
      @@cache_client.remove_entries_from_file ids: ids_to_mark_read

      start += interval

      puts "#{@@cache_client.size} entries left to be marked as read."
    end
  end
end
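`mark_entries_as_read` walks the cached array in fixed-size slices using plain `Array#[]` ranges. The batching itself can be sketched standalone (the entry IDs are made up):

```ruby
# Walk an array in batches of `interval`, the way mark_entries_as_read
# slices cached entries into requests of 10 IDs each.
entries = (1..23).map { |id| { "id" => id } }

interval = 10
batches = []
start = 0
while start < entries.length
  # arr[start...stop] clamps at the array end, so the last batch is short.
  slice = entries[start...(start + interval)]
  batches << slice.map { |entry| entry["id"] }
  start += interval
end

puts batches.inspect
```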
data/lib/utils/cache.rb
ADDED
@@ -0,0 +1,67 @@
require "json"
require "date"

class Cache
  attr_accessor :size, :last_fetched

  def initialize(path:)
    @config = {
      :path => path
    }

    cached = File.readable?(@config[:path]) ? JSON.parse(File.read(@config[:path])) : nil
    @size = cached ? cached["data"].count : 0
    @last_fetched = cached ? cached["last_fetched"] : nil
  end

  def read_from_file
    if File.readable? @config[:path]
      JSON.parse(File.read(@config[:path]))
    else
      {
        "size" => 0,
        "last_fetched" => nil,
        "data" => Array.new
      }
    end
  end

  def remove_entries_from_file(ids:)
    cache = self.read_from_file

    # Drop every cached entry whose ID is in the list.
    # TODO We could implement a search algo like binary search for performance here
    ids_to_remove = ids.map(&:to_i)
    cache["data"].reject! { |cached_entry| ids_to_remove.include? cached_entry["id"].to_i }

    self.size = cache["data"].count
    self.write_to_file data: cache
  end

  def add_entries_to_file(data:)
    cache = self.read_from_file
    new_entries_count = 0

    # If an entry doesn't exist in the cache, we add it.
    data.each do |new_entry|
      unless cache["data"].find_index { |cache_entry| cache_entry["id"] == new_entry["id"] }
        cache["data"].push(new_entry.filter { |key| key != "content" })
        new_entries_count += 1
      end
    end

    self.size = cache["data"].count
    self.last_fetched = Date.today.to_s
    self.write_to_file data: cache

    puts "#{new_entries_count} new entries were written to cache."
  end

  def write_to_file(data:)
    data["size"] = @size
    data["last_fetched"] = @last_fetched
    File.write(@config[:path], data.to_json)
  end
end
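The cache is a single JSON document on disk. A minimal sketch of the same read/write round-trip `Cache` performs, using a temp file and made-up entry data:

```ruby
require 'json'
require 'tempfile'

# The on-disk shape write_to_file produces: a size, a last-fetched date
# string, and the entries themselves (with their "content" field stripped).
cache = {
  "size" => 2,
  "last_fetched" => "2020-09-11",
  "data" => [
    { "id" => 1, "published_at" => "2020-08-01T10:00:00Z" },
    { "id" => 2, "published_at" => "2020-08-02T10:00:00Z" }
  ]
}

file = Tempfile.new(["cache", ".json"])
File.write(file.path, cache.to_json)

# Reading it back mirrors read_from_file.
restored = JSON.parse(File.read(file.path))
puts restored["data"].count
file.close!
```

Keeping `size` and `last_fetched` alongside the data is what lets an interrupted run resume: on the next start, the constructor recovers both from the file.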
data/miniflux_sanity.gemspec
ADDED
@@ -0,0 +1,24 @@
Gem::Specification.new do |s|
  s.name = 'miniflux_sanity'
  s.version = '0.2.0'
  s.required_ruby_version = Gem::Requirement.new(">= 2.7.1")
  s.date = '2020-09-11'
  s.summary = "Mark items older than specified time as read in Miniflux."
  s.description = "Command line utility to mark items older than specified time as read in Miniflux."
  s.authors = ["hirusi"]
  s.email = 'hello@rusingh.com'
  s.license = 'AGPL-3.0'

  s.homepage = 'https://github.com/hirusi/miniflux-sanity'
  s.metadata["homepage_uri"] = s.homepage
  s.metadata["source_code_uri"] = s.homepage

  # Specify which files should be added to the gem when it is released.
  # The `git ls-files -z` loads the files in the RubyGem that have been added into git.
  s.files = Dir.chdir(File.expand_path('..', __FILE__)) do
    `git ls-files -z`.split("\x0").reject { |f| f.match(%r{^(test|spec|features)/}) }
  end
  s.bindir = "bin"
  s.executables = s.files.grep(%r{^bin/}) { |f| File.basename(f) }
  s.require_paths = ["lib"]
end
metadata
ADDED
@@ -0,0 +1,65 @@
--- !ruby/object:Gem::Specification
name: miniflux_sanity
version: !ruby/object:Gem::Version
  version: 0.2.0
platform: ruby
authors:
- hirusi
autorequire:
bindir: bin
cert_chain: []
date: 2020-09-11 00:00:00.000000000 Z
dependencies: []
description: Command line utility to mark items older than specified time as read
  in Miniflux.
email: hello@rusingh.com
executables:
- miniflux_sanity
extensions: []
extra_rdoc_files: []
files:
- ".editorconfig"
- ".gitignore"
- ".ruby-version"
- CODE_OF_CONDUCT.md
- Gemfile
- Gemfile.lock
- LICENSE
- README.md
- Rakefile
- assets/miniflux-sanity_after.png
- assets/miniflux-sanity_before.png
- assets/miniflux-sanity_cli.png
- bin/miniflux_sanity
- lib/config.rb
- lib/hash.rb
- lib/miniflux_api.rb
- lib/miniflux_sanity.rb
- lib/utils/cache.rb
- miniflux_sanity.gemspec
homepage: https://github.com/hirusi/miniflux-sanity
licenses:
- AGPL-3.0
metadata:
  homepage_uri: https://github.com/hirusi/miniflux-sanity
  source_code_uri: https://github.com/hirusi/miniflux-sanity
post_install_message:
rdoc_options: []
require_paths:
- lib
required_ruby_version: !ruby/object:Gem::Requirement
  requirements:
  - - ">="
    - !ruby/object:Gem::Version
      version: 2.7.1
required_rubygems_version: !ruby/object:Gem::Requirement
  requirements:
  - - ">="
    - !ruby/object:Gem::Version
      version: '0'
requirements: []
rubygems_version: 3.1.2
signing_key:
specification_version: 4
summary: Mark items older than specified time as read in Miniflux.
test_files: []