flok 0.0.38 → 0.0.39
- checksums.yaml +4 -4
- data/app/drivers/chrome/src/dispatch.js +41 -6
- data/app/drivers/chrome/src/persist.js +1 -10
- data/app/kern/dispatch.js +17 -23
- data/app/kern/gen_id.js +8 -0
- data/app/kern/macro.rb +20 -18
- data/app/kern/pagers/pg_spec0.js +20 -0
- data/app/kern/services/vm.rb +176 -30
- data/docs/client_api.md +3 -1
- data/docs/compilation.md +1 -1
- data/docs/dispatch.md +91 -0
- data/docs/kernel_api.md +3 -2
- data/docs/messaging.md +6 -1
- data/docs/mod/persist.md +4 -3
- data/docs/project_layout.md +2 -2
- data/docs/services/vm.md +116 -41
- data/docs/services/vm/pagers.md +38 -46
- data/lib/flok.rb +1 -0
- data/lib/flok/build.rb +3 -4
- data/lib/flok/macro.rb +27 -0
- data/lib/flok/services_compiler.rb +12 -8
- data/lib/flok/user_compiler.rb +131 -4
- data/lib/flok/version.rb +1 -1
- data/spec/env/kern.rb +71 -0
- data/spec/etc/macro_spec.rb +3 -8
- data/spec/etc/service_compiler/service3.rb +27 -0
- data/spec/etc/services_compiler_spec.rb +35 -27
- data/spec/iface/driver/dispatch_spec.rb +20 -0
- data/spec/iface/driver/persist_spec.rb +9 -24
- data/spec/iface/kern/ping_spec.rb +3 -24
- data/spec/kern/assets/vm/config4.rb +12 -0
- data/spec/kern/assets/vm/controller10.rb +26 -0
- data/spec/kern/assets/vm/controller11.rb +33 -0
- data/spec/kern/assets/vm/controller12.rb +45 -0
- data/spec/kern/assets/vm/controller13.rb +40 -0
- data/spec/kern/assets/vm/controller14.rb +14 -0
- data/spec/kern/assets/vm/controller15.rb +15 -0
- data/spec/kern/assets/vm/controller16.rb +29 -0
- data/spec/kern/assets/vm/controller17.rb +30 -0
- data/spec/kern/assets/vm/controller18.rb +28 -0
- data/spec/kern/assets/vm/controller19.rb +14 -0
- data/spec/kern/assets/vm/controller19b.rb +15 -0
- data/spec/kern/assets/vm/controller20.rb +19 -0
- data/spec/kern/assets/vm/controller21.rb +40 -0
- data/spec/kern/assets/vm/controller7.rb +18 -0
- data/spec/kern/assets/vm/controller8.rb +38 -0
- data/spec/kern/assets/vm/controller8b.rb +18 -0
- data/spec/kern/assets/vm/controller9.rb +20 -0
- data/spec/kern/assets/vm/controller_exc_2watch.rb +15 -0
- data/spec/kern/assets/vm/controller_exc_ewatch.rb +14 -0
- data/spec/kern/assets/vm/macros/copy_page_c.rb +23 -0
- data/spec/kern/assets/vm/macros/entry_del_c.rb +18 -0
- data/spec/kern/assets/vm/macros/entry_insert_c.rb +21 -0
- data/spec/kern/assets/vm/macros/entry_mutable_c.rb +33 -0
- data/spec/kern/assets/vm/macros/new_page_c.rb +7 -0
- data/spec/kern/assets/vm/macros/new_page_c2.rb +7 -0
- data/spec/kern/assets/vm/macros/set_page_head_c.rb +18 -0
- data/spec/kern/assets/vm/macros/set_page_next_c.rb +18 -0
- data/spec/kern/controller_macro_spec.rb +186 -0
- data/spec/kern/dispatch_spec.rb +125 -0
- data/spec/kern/functions_spec.rb +15 -0
- data/spec/kern/vm_service_spec.rb +874 -173
- metadata +70 -5
- data/docs/scheduling.md +0 -46
- data/spec/kern/rest_service_spec.rb +0 -45
data/docs/client_api.md CHANGED
@@ -8,6 +8,8 @@ Client API covers controller action event handlers.
 * Send(event_name, info) - Send a custom event on the main queue.
 * Raise(event_name, info) - Will send an event to the parent view controller (and it will bubble up, following `event_gw` which is set in `Embed` as the parent controller
 * Lower(spot_name, event_name, info) - Send an event to a particular spot
+* Helpers
+  * Page Modification - See [User Page Modification Helpers](./vm.md#user_page_modification_helpers) for a list of functions available.
 
 ### Controller Event Handlers
 * Variables
@@ -18,4 +20,4 @@ Client API covers controller action event handlers.
 ### Controller on_entry
 * `context` - The information for the controllers context
 * `__base__` - The address of the controller
-* `__info__` - Holds the `context`, current action, etc. See [Datatypes](./datatypes.md)
+* `__info__` - Holds the `context`, current action, etc. See [Datatypes](./datatypes.md)
data/docs/compilation.md CHANGED
@@ -14,7 +14,7 @@ as necessary.*
 2. All js files in `./app/kern/config/*.js` are globbed together and sent to `./products/$PLATFORM/glob/1kern_config.js`
 3. All js files in `./app/kern/*.js` are globbed together and sent to `./products/$PLATFORM/glob/2kern.pre_macro.js`
 4. All js files in `./app/kern/pagers/*.js` are globbed together and sent to `./products/$PLATFORM/glob/3kern.pre_macro.js`
-5. All js files in `./products/$PLATFORM/glob/{2,3}kern.pre_macro.js` are run through `./
+5. All js files in `./products/$PLATFORM/glob/{2,3}kern.pre_macro.js` are run through `./lib/flok/macro.rb's macro_process` and then sent to `./products/$PLATFORM/glob/{2,3}kern.js`
 6. All js files are globbed from `./products/$PLATFORM/glob` and combined into `./products/$PLATFORM/glob/application.js.erb`
 7. Auto-generated code is placed at the end (like PLATFORM global)
 8. The module specific code in `./kern/mod/.*js` are added when the name of the file (without the js part) is mentioned in the `./app/drivers/$PLATFORM/config.yml` `mods` section and appended to `glob/application.js.erb`
data/docs/dispatch.md ADDED

#Dispatching of messages
Most javascript implementations provide a sandbox where messages between the javascript core and the client pass through an access-controlled xpc system. These xpc systems generally serialize
the data to be transferred and then join the requesting process's run queue to complete the request so that the process is charged with the xpc transfer. The longer this XPC transfer takes,
the more likely the process is to get pre-empted in the middle of the transfer and have to wait to continue the transfer until the process is rescheduled. It is in our best interest
to avoid this as it adds large amounts of latency to the application; many small transfers are preferable to large transfers unless the request is synchronous. For synchronous requests,
we will be forced to block anyway, so it makes sense to allow large transfers (but caution against them) in synchronous requests.

In order to relieve this problem, *flok* restricts the number of pipelined messages **per queue** to 5 with the exception of the `main` queue (the only synchronous queue). That means you
can have a total of `(N*5)` messages assuming there are `N` queue types (at the time of this writing, there are 5 not including the `main` queue). It is unlikely that all queues will be used,
as most requests on the flok client will not use multiple resources in one pipelined stage. The client is responsible for requesting more data until no more data is available.

##Confusion about synchronous and asynchronous
There are various stages of message processing, so it can be confusing as to what exactly is synchronous and asynchronous. Flok assumes a few things:
1. The dispatch mechanism, `int_dispatch`, is always called by the client synchronously, and the javascript core will always respond synchronously to `if_dispatch`.
2. The client `if_dispatch` handler will then process the main queue on its same synchronous thread and then dispatch, asynchronously, the remaining queues; the queues may each dispatch messages either asynchronously or synchronously w.r.t. the original queue. (Out-of-order and parallel dispatch are supported.)

Additionally, it is always ok, but not suggested, to downgrade an asynchronous request to a synchronous request. But you can **never** downgrade a synchronous request to an asynchronous request. Synchronous requests must be done in order and on a single thread; additionally, they can be UI requests, which are typically handled on the main thread.

For example, if we dispatch a disk read request on the `main` queue, flok would expect that the disk read blocks the javascript core and returns execution as soon as the disk read completes. Flok would also presume that the disk read was done at the fastest
and highest priority of IO and CPU.

Flok would expect that the same disk request, dispatched on an asynchronous queue like `disk`, would not execute on the same thread of execution and could execute out of order.

##The standard Flok queues (resources) are defined with the labels:
0. `main` - User-interface displaying, etc.
1. `net` - Downloading, uploading, get requests, etc.
2. `disk` - Transferring things to/from disk
3. `cpu` - Tasks that tax the cpu
4. `gpu` - Tasks that tax the gpu

##Messages from the server
Messages sent via `if_dispatch` to the server have a special format that looks like this:
```javascript
msg = [
  [0, 0, "ping", 1, "ping2", "hello"],
  [1, 1, "download_image", "http://testimage.com/test.png"],
  [4, 1, "blur_button", 23]
]
```

The message is broken up into *3* distinct queues. The first queue, queue 0, is the **main** queue. Each queue should be interpreted in order. That
means the *main* queue will always be synchronously executed before the rest of the queues are asynchronously dispatched. The `download_image` is
a part of the `net` queue, and `blur_button` is part of queue 4, the *gpu* queue. Look above at the queue labels to see what each queue is.

##Example of a session where the flok server does not respond with all messages right away to a client
Imagine that a flok server has the following available in its queues for transfer in int_dispatch:
```javascript
main_q = [[0, "ping"], [0, "ping"], [0, "ping"], [0, "ping"], [0, "ping"], [0, "ping"]]
net_q = [[1, "download", "..."], [1, "download", "..."], [1, "download", "..."], [1, "download", "..."], [1, "download", "..."], [1, "download", "..."]]
gpu_q = [[1, "blur_button", 23]]
```
The `main_q` contains over 5 messages. However, because the `main_q` is dispatched synchronously, we will send those all at once. The `net_q` has
6 messages, so we will only send 5 of those at once. The `gpu_q` only contains 1 message, so we will send that at once.

The client then calls `int_dispatch`:
```javascript
res = int_dispatch(...)
```

And it receives this in `res`:
```javascript
'i',
[0, 0, "ping", 0, "ping", 0, "ping", 0, "ping", 0, "ping", 0, "ping"],
[1, 1, "download", "...", 1, "download", "...", 1, "download", "...", 1, "download", "...", 1, "download", "..."],
[4, 1, "blur_button", 23]
```

Notice how it's the same as the int_dispatch from the server except that queue 1 (`net_q`) is missing 1 message (`[1, "download", "..."]`). The 'i' at the start
indicates that the request is 'incomplete' and the client should follow up with a blank request array after it finishes dequeuing all these events.
So the flok server still has the following in its queues. The remaining `net_q` message will be transferred after the next client request, which will take place
after the `int_dispatch` call, as the client should always call `int_dispatch` repeatedly until it gets back a blank queue.

Note that:
while at first you might think we need to test whether an int_dispatch called intra-respond of our if_event still sends
out a blank [] to int_dispatch, this is not the case. In the real world, flok is supposed to also make any necessary if_dispatch calls during all
int_dispatch calls. We would always receive back if_dispatch, and thus it would follow the same rules as laid out here.

```javascript
main_q = [0]
net_q = [1, 1, "download", ...]
gpu_q = [4]
```

##Spec helpers

###Kernel
The kernel has the following function in `@debug`:
* `spec_dispatch_q(queue, count)` - Internally queues the message [0, "spec"] to the queue given in `queue`, `count` times

###Driver
`dispatch_spec` to assist with testing of the 'i' re-request behavior.
data/docs/kernel_api.md CHANGED
@@ -12,6 +12,9 @@ instead.
 ##CRC32
 * `crc32(seed, str)` - Will calculate a CRC32 based on a seed and a string
 
+##Random string
+* `gen_id()` - Will return a random unique id (8 character string).
+
 ##Events
 * `reg_evt(ep, f)` - Register a function to be called when an event is processed by `int_event`. The function will receive `(ep, event_name, info)`.
 
@@ -36,5 +39,3 @@ variables in here. If you need to pass a hash literal, array literal, etc, plea
 var payload = {from: null, to: action};
 SEND("main", "if_event", base, "action", payload);
 ```
-
-
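One way the `gen_id()` contract above (an 8-character random unique id) might be implemented; this is a sketch for illustration, not flok's actual kernel code:

```javascript
//Sketch of a gen_id() that satisfies the documented contract: returns a
//random 8-character string. The real kernel implementation may differ.
function gen_id() {
  var chars = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789";
  var out = "";
  for (var i = 0; i < 8; ++i)
    out += chars.charAt(Math.floor(Math.random() * chars.length));
  return out;
}
```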
data/docs/messaging.md CHANGED
@@ -36,12 +36,13 @@ live in `./app/kern/mod/` and have the convention of being called `int_*`.
 
 On the client, the driver decides on how messages are handled. At a minimum, the client must support the `if_dispatch` function
 call. The driver is given a queue suggestion based on the first number for each message queue in the `if_dispatch` call. See
-[
+[Dispatching](./dispatch.md) for more information.
 
 ### Ping
 Both the client and server are responsible for being able to reply to a few test messages.
 
 #####For the client
+  - Given `[[0, 0, "ping_nothing"]]`, do nothing. Used for `dispatch_spec`
   - Given `[[0, 0, "ping"]]` respond with `[0, pong]`
   - Given `[[0, 1, "ping1", arg]]` respond with `[1, pong1, arg]`
   - Given `[[0, 2, "ping2", arg1, arg2]]` respond with `[1, "pong2", arg1]` and `[2, "pong2", arg1, arg2]`
@@ -73,6 +74,10 @@ Both the client and server are responsible for being able to reply to a few test
   - Given `[0, "ping4_int"]` respond with `[[queue_index, 0, "pong4"]]`
   - *If the queue_index is 0 (main), it should queue all 6*
 
+### Dispatch Spec
+
+- Given `['i', *]` for a queue will force the client to request another queue after it is done processing.
+
 ### Protocols
 Protocols are informal conventions used in Flok when sending certain messages.
 
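The client-side ping replies listed above can be sketched as a small handler. This is a hypothetical function (`handle_main_message` is not part of flok); a real driver would wire this into its `if_dispatch` processing:

```javascript
//Sketch of the client ping replies described above (hypothetical
//handler name). Each message is [arg_count, name, ...args]; replies are
//delivered through a callback the caller supplies.
function handle_main_message(msg, reply) {
  var argc = msg[0];          //number of trailing arguments
  var name = msg[1];
  var args = msg.slice(2);

  if (name === "ping_nothing") return;           //do nothing; used by dispatch_spec
  if (name === "ping")  reply([0, "pong"]);
  if (name === "ping1") reply([1, "pong1", args[0]]);
  if (name === "ping2") {
    //ping2 produces two replies per the spec above
    reply([1, "pong2", args[0]]);
    reply([2, "pong2", args[0], args[1]]);
  }
}
```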
data/docs/mod/persist.md CHANGED
@@ -3,7 +3,7 @@ Persistance management. Loosely based on redis.
 
 ###Driver messages
 `if_per_set(ns, key, value)` - Set a key and value
-`if_per_get(s, ns, key)` - Get a key's value, a message `int_get_res` will be sent back
+`if_per_get(s, ns, key)` - Get a key's value, a message `int_get_res` will be sent back, `s` is the session key that will also be sent back
 `if_per_del(ns, key)` - Delete a particular key
 `if_per_del_ns(ns)` - Delete an entire namespace
 
@@ -15,5 +15,6 @@ It is expected that the kernel should manage the write-back cache and that the d
 it is convenient to do so.
 
 ###Kernel interrupts
-`int_per_get_res(s, res)` - A response retrieved from `if_per_get` that contains the session key and result dictionary.
-does not
+`int_per_get_res(s, ns, res)` - A response retrieved from `if_per_get` that contains the session key and result dictionary. Currently,
+the service `vm` owns this function; so session does not have an effect on the outcome; but the string `"vm"` should be used for now for any
+session keys involving persist.
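A minimal in-memory sketch of the driver side of this interface, assuming the signatures above. The backing `store` and the direct call into `int_per_get_res` are illustrative stand-ins; a real driver would persist to localStorage, a file, etc., and route the response back through the kernel:

```javascript
//Hypothetical in-memory persist driver sketch (illustration only).
var store = {};   //stand-in backing store: store[ns][key] = value
var sent = [];    //stub recorder: a real driver sends this into the kernel

function int_per_get_res(s, ns, res) { sent.push([s, ns, res]); }

function if_per_set(ns, key, value) {
  (store[ns] = store[ns] || {})[key] = value;
}

function if_per_get(s, ns, key) {
  //Look up the value (null if missing) and echo back the session key `s`;
  //per the docs above, persist sessions should use the string "vm" for now
  var res = store[ns] ? (store[ns][key] !== undefined ? store[ns][key] : null) : null;
  int_per_get_res(s, ns, res);
}

function if_per_del(ns, key) {
  if (store[ns]) delete store[ns][key];
}

function if_per_del_ns(ns) {
  delete store[ns];
}
```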
data/docs/project_layout.md CHANGED
@@ -5,5 +5,5 @@
 * `app/drivers/$PLATFORM/` - Platform specific way to implement the interface. See [platform drivers](./platform_drivers.md) for information.
 * `app/kern` - The remaining part, your app, the kernel, etc. all live under here.
 * `app/kern/mod` - Interrupt handlers for drivers and associated code.
-
-
+* `/lib/kern/macro.rb` - Contains code that is called by `./lib/flok/build.rb` to run all kernel *js* code through as well as the `services_compiler`
+  * This macro file provides various macros used in the kernel and services like `SEND`.
|
data/docs/services/vm.md
CHANGED
@@ -11,31 +11,34 @@ Fun aside; Because of the hashing schemantics; this paging system solves the age
|
|
11
11
|
Each page is a dictionary containing a list of entries.
|
12
12
|
```ruby
|
13
13
|
page_example = {
|
14
|
-
_head: <<uuid STR>>,
|
15
|
-
_next: <<uuid STR>,
|
14
|
+
_head: <<uuid STR or NULL>>,
|
15
|
+
_next: <<uuid STR or NULL>,
|
16
16
|
_id: <<uuid STR>,
|
17
17
|
entries: [
|
18
|
-
{_id: <<uuid STR>>,
|
18
|
+
{_id: <<uuid STR>>, _sig: <<random_signature for inserts and modifies STR>>},
|
19
19
|
...
|
20
20
|
],
|
21
21
|
_hash: <<CRC32 >
|
22
22
|
}
|
23
23
|
```
|
24
24
|
|
25
|
-
* `_head (
|
26
|
-
* `_next (
|
27
|
-
* `_id` - The name of this page. Even if every key changed, the `_id` will not change. This is supposed to indicate, semantically, that this page still *means* the same thing. For example, imagine a page. If all entries were to be **removed** from this page and new entries were **inserted** on this page, then it would be semantically sound to say that the entries were **changed**.
|
28
|
-
* `entries` - An array of dictionaries. Each element contains a `_id` that is analogous to the page `_id`. (These are not the same, but carry the same semantics). Entries also have a `
|
29
|
-
* `_hash` - All entry `_id's`, `_next`, the page `_id`, and `head` are hashed togeather. Any changes to this page will cause this `_hash` to change which makes it a useful way to check if a page is modified and needs to be updated. The hash function is an ordered CRC32 function run in the following order. See [Calculating Page Hash](#calculating_page_hash).
|
25
|
+
* `_head (string or null)` - An optional pointer that indicates a *head* page. The head pages are special pages that contain 0 elements in the entries array, no `_head` key, and `_next` points to the *head* of the list. A head page might be used to pull down the latest news where the head will tell you whether or not there is anything left for you to receive.
|
26
|
+
* `_next (string or null)` - The next element on this list. If `_next` is non-existant, then this page is the endpoint of the list.
|
27
|
+
* `_id (string)` - The name of this page. Even if every key changed, the `_id` will not change. This is supposed to indicate, semantically, that this page still *means* the same thing. For example, imagine a page. If all entries were to be **removed** from this page and new entries were **inserted** on this page, then it would be semantically sound to say that the entries were **changed**.
|
28
|
+
* `entries (array of hashes)` - An array of dictionaries. Each element contains a `_id` that is analogous to the page `_id`. (These are not the same, but carry the same semantics). Entries also have a `_sig` based on their creation or edit time from the unix epoch milliseconds.
|
29
|
+
* `_hash (string)` - All entry `_id's`, `_next`, the page `_id`, and `head` are hashed togeather. Any changes to this page will cause this `_hash` to change which makes it a useful way to check if a page is modified and needs to be updated. The hash function is an ordered CRC32 function run in the following order. See [Calculating Page Hash](#calculating_page_hash).
|
30
30
|
|
31
31
|
------
|
32
32
|
|
33
33
|
## <a name='calculating_page_hash'></a>Calculating Page Hash
|
34
34
|
The `_hash` value of a page is calculated in the following way:
|
35
|
-
|
36
|
-
|
35
|
+
0. `z = 0`
|
36
|
+
1. `z = crc32(z, _head) if _head`
|
37
|
+
2. `z = crc32(z, _next) if _next`
|
37
38
|
3. `z = crc32(z, _id)`
|
38
|
-
4. `z = crc32(z, entriesN.
|
39
|
+
4. `z = crc32(z, entriesN._sig)` where N goes through all entries in order.
|
40
|
+
|
41
|
+
If a key is null, then the crc step is skipped for that key. e.g. if `_head` was null, then `z = crc32(0, _head)` would be skipped
|
39
42
|
|
40
43
|
Assuming a crc function of `crc32(seed, string)`
|
41
44
|
|
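The ordered hash steps above can be sketched directly. The `crc32` below is a minimal stand-in for the kernel's `crc32(seed, str)` (the exact seed-chaining of flok's implementation is an assumption here); what matters is the ordering logic in `page_hash`:

```javascript
//Minimal stand-in for the kernel's crc32(seed, str); illustration only.
function crc32(seed, str) {
  var crc = seed ^ 0xFFFFFFFF;
  for (var i = 0; i < str.length; ++i) {
    crc ^= str.charCodeAt(i);
    for (var b = 0; b < 8; ++b)
      crc = (crc >>> 1) ^ (0xEDB88320 & -(crc & 1));
  }
  return (crc ^ 0xFFFFFFFF) >>> 0;
}

//Sketch of the documented hash order: _head (if present), _next (if
//present), _id, then every entry's _sig in order. Null keys are skipped.
function page_hash(page) {
  var z = 0;
  if (page._head) z = crc32(z, page._head);
  if (page._next) z = crc32(z, page._next);
  z = crc32(z, page._id);
  for (var i = 0; i < page.entries.length; ++i)
    z = crc32(z, page.entries[i]._sig);
  return z;
}
```

Because every `_sig` participates in order, any insert, delete, reorder, or edit of an entry changes the resulting `_hash`.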
@@ -69,60 +72,132 @@ you will want to copy your pager into a seperate piece of code and rename it so
|
|
69
72
|
##Requests
|
70
73
|
|
71
74
|
###`watch`
|
72
|
-
This is how you
|
75
|
+
This is how you **read a page** and request notifications for any updates to a page. The following happens when you watch a page:
|
76
|
+
```js
|
77
|
+
if (page is resident in memory from previous cache write)
|
78
|
+
send the caller a read_res event *now*
|
79
|
+
|
80
|
+
increment_page_ref()
|
81
|
+
|
82
|
+
//Synchronously request disk load from cache; this will block
|
83
|
+
//Even if we have a request in progress; the synchronous
|
84
|
+
//may pre-empt that event because the disk queue might be loaded;
|
85
|
+
//so we need to send this anyway
|
86
|
+
if (page is not redisent in memory and synchronous) {
|
87
|
+
try_sync_load_from_disk_and_update_cache()
|
88
|
+
}
|
89
|
+
|
90
|
+
//Only notify if this is the first reference, other controllers who attempt a watch will not signal the pager because the pager already knows
|
91
|
+
//about this page
|
92
|
+
if first_reference {
|
93
|
+
pager_watch()
|
94
|
+
}
|
73
95
|
|
74
|
-
|
96
|
+
//Again, only attempt this if the page is not requested by anyone else and is not synchronous (because we would have already tried). The pager will be notified in the meantime, if the disk
|
97
|
+
//comes after the pager notification; then the disk will not do anything.
|
98
|
+
if (page is not resident in memory && not_synchronous) {
|
99
|
+
//This is an asynchronous request
|
100
|
+
try_load_from_disk_and_update_cache()
|
101
|
+
}
|
102
|
+
```
|
75
103
|
* Parameters
|
76
104
|
* `ns` - The namespace of the page, e.g. 'user'
|
77
105
|
* `id` - Watching the page that contains this in the `_id` field
|
106
|
+
* `sync (optional)` - If set to `true` then the disk read will be performed synchronously.
|
78
107
|
* Event Responses
|
79
108
|
* `read_res` - Whenever a change occurs to a page or the first read.
|
80
109
|
* `ns` - Namespace of the fault
|
81
110
|
* `first` - A boolean that indicates whether this page was ever received on `page_update` before. i.e. is it a change after we were already given this page previously in a `page_update` for this receiver?
|
82
111
|
* `page` - A dictionary object that is a reference to the page. This should be treated as immutable as it is a shared resource.
|
112
|
+
* Debug mode
|
113
|
+
* When `@debug`, an exception will be thrown if you attempt to watch the same key from one controller multiple times.
|
114
|
+
|
115
|
+
###`unwatch`
|
116
|
+
This is how you **unwatch** a page. For view controllers that are destroyed, it is not necessary to manually `unwatch` as the `vm` service will be notified on it's disconnection and automatically remove any watched pages for it's base pointer. This should be used for thingcs like scroll lists where the view controller is no longer interested in part of a page-list.
|
83
117
|
|
84
|
-
###`read_sync`
|
85
|
-
Request a page of memory synchronously. This will only trigger one `read_res`. If a page does not exist, that should be considered an error. You would normally use this with a blank pager that relies on the cache system to recover data that is either resident in RAM or load it from disk. For example, maybe you would like to display the user's name when they first login without waiting.
|
86
118
|
* Parameters
|
87
119
|
* `ns` - The namespace of the page, e.g. 'user'
|
88
|
-
* `id` -
|
89
|
-
* Event Responses
|
90
|
-
* `read_res` - Whenever a change occurs to a page or the first read.
|
91
|
-
* `ns` - Namespace of the fault
|
92
|
-
* `first` - A boolean that indicates whether this page was ever received on `page_update` before. i.e. is it a change after we were already given this page previously in a `page_update` for this receiver?
|
93
|
-
* `page` - A dictionary object that is a reference to the page. This should be treated as immutable as it is a shared resource.
|
94
|
-
* Debug quirks
|
95
|
-
* Sets `vm_read_sync_called` to true when called
|
120
|
+
* `id` - Unwatch the page that contains this in the `_id` field
|
96
121
|
|
97
|
-
###`
|
98
|
-
Creates a new page or overrides an existing one.
|
99
|
-
|
122
|
+
###`write`
|
123
|
+
Creates a new page or overrides an existing one. If you are modifying an existing page, it is imperative that you do not modify the page yourself and
|
124
|
+
use the modification helpers. These modification helpers implement copy on write (COW) as well as adjust sigs on specific entries and create ids for new entries. The proper way to do it is (a) edit the page with the modification helpers mentioned in [User page modification helpers](#user_page_modification_helpers) and (b) perform a write request. This request updates the `_hash` field. Additionally, if you are creating a page, it is suggested that you still use the modification helpers; just use the `NewPage` macro insead of `CopyPage`. Additionally, modifiying a page after making a write request is prohibited as the `vm` service may alter your page.
|
125
|
+
* Parameters
|
100
126
|
* `ns` - The namespace of the page, e.g. 'user'
|
101
|
-
* `
|
102
|
-
|
103
|
-
* `
|
104
|
-
|
127
|
+
* `page` - The page to write (create or update)
|
128
|
+
* Spec helpers
|
129
|
+
* If in `@debug` mode, the variable `vm_write_list` contains an array dictionary of the last page passed to the pager (tail is latest).
|
130
|
+
|
131
|
+
##Cache
|
132
|
+
See below with `vm_cache_write` for how to write to the cache. Each pager can choose whether or not to cache; some pagers may cache only reads while others will cache writes. Failure to write to the cache at all will cause `watch` to never trigger. Some pagers may use a trick where writes are allowed, and go directly to the cache but nowhere else. This is to allow things like *pending* transactions where you can locally fake data until a server response is received which will both wipe the fake write and insert the new one. Cache writes will trigger `watch`; if you write to cache with `vm_cache_write` with a page that has the same `_hash` as a page that already exists in cache, no `watch` events will be triggered. Additionally, calling `vm_cache_write` with a non-modified page will result in no performance penalty.
|
133
|
+
|
134
|
+
###Pageout & Cache Synchronization
|
135
|
+
Cache will periodically be synchronized to disk via the `pageout` service. When flok reloads itself, and the `vm` service gets a `watch` or `watch_sync` request, the `vm` service will attempt to read from the `vm_cache` first and then read the page from disk (write that disk read to cache). The only difference between `watch_sync` and `watch` is that `watch_sync` will synchronously pull from disk and panic if there is no cache available for the page). (Both `watch` and `watch_sync` will always call the pager's after the cache read as well.)
|
136
|
+
|
137
|
+
Pageout is embodied in the function named `vm_pageout()`. This will asynchronously write `vm_dirty` to disk and clear `vm_dirty` once the write has been commited. `vm_pageout()` is called every minute by the interval timer in this service.
|
138
|
+
|
139
|
+
###Datatypes & Structures (Opaque, do not directly modify)
|
140
|
+
* `vm_cache` - The main area for storing the cache. Stored in `vm_cache[ns][key]`
|
141
|
+
* `vm_dirty` - Pages recently written to cache go on the dirty list so that they may be written when the pageout handler runs. Dictionary contains map for `vm_dirty[ns][page._id] => page` for all dirty pages. Pages are removed from the dictionary when they are written in the pageout.
|
142
|
+
* `vm_notify_map` - The dictionary used to lookup what controllers need to be notified about changes. Stored in `vm_notify_map[ns][id]` which yields an array of controller base pointers.
|
143
|
+
* `vm_bp_to_nmap` - A dictionary that maps a `bp` key (usually from a controller) to a dictionary. This dictionary contains a mapping of `bp => ns => id` to an array that contains `[node, index]` where `node` is a reference to `vm_notify_map[ns][id]`. This inverted map must (a) provide a way for `unwatch` to quickly remove entries from itself and (b) provide a way for all entries in `vm_notify_map` to be removed when something (usually a controller) disconrnects.
|
144
|
+
must support `unwatch` removal which we only receive the `bp`, `ns`, and `key`.
|
105
145
|
|
106
146
|
##Helper Methods
|
107
147
|
###Pager specific
|
108
|
-
* `vm_cache_write(ns,
|
148
|
+
* `vm_cache_write(ns, page)` - Save a page to cache memory. This will not recalculate the page hash. The page will be stored in `vm_cache[ns][id]` by.
|
109
149
|
|
110
150
|
###Page modification
|
111
|
-
* `vm_rehash_page(page)` - Calculates the hash for a page and modifies that page with the new `_hash` field.
|
151
|
+
* `vm_rehash_page(page)` - Calculates the hash for a page and modifies that page with the new `_hash` field. If the `_hash` field does not exist, it
|
152
|
+
will create it
|
112
153
|
|
113
|
-
### <a name='user_page_modification_helpers'></a>User page modification helpers
|
154
|
+
### <a name='user_page_modification_helpers'></a>User page modification helpers (Controller Macros)
|
114
155
|
You should never directly edit a page in user land; if you do; the pager has no way of knowing that you made modifications. Additionally, if you have multiple controllers watching a page, and it is modified in one controller, those other controllers
|
115
|
-
will not receive the notifications of the page modifications.
|
156
|
+
will not receive the notifications of the page modifications. Once using these modifications, you must make a request for `write`. You should not use the information you updated to update your controller right away; you should wait for a `read_res` back because you `watched` the page you just updated. This will normally be performed right away if it's something like the memory pager.
|
157
|
+
|
158
|
+
Aside, modifying a page goes against the semantics of the vm system; you're thinking of it wrong if you think that's ok. The VM system lets the pager decide what the semantics of a `write` actually means. That may mean it does not directly modify the page; maybe it sends the write request to a server which then validates the request, and then the response on the watched page that was modified will then update your controller.
|
159
|
+
|
160
|
+
If you're creating a new page, please use these macros as well; just switch out `CopyPage` for `NewPage`.
|
116
161
|
|
117
|
-
**These are only for existing pages; that is, pages that have been received through `read_res`. If you need to create a new page, do so through `create`**
|
118
162
|
####Per entry
|
119
|
-
* `
|
120
|
-
|
121
|
-
* `
|
163
|
+
* `NewPage(id)` - Returns a new blank page; internally creates a page that has a null `_next`, `_head`, and `entries` array with 0 elements.
|
164
|
+
`_id` is generated if it is not passed.
|
165
|
+
* `CopyPage(page)` - Copies a page and returns the new page. Internally this copies the entire page with the exception of the
|
166
|
+
`_hash` field.
|
167
|
+
* `EntryDel(page, eindex)` - Remove a single entry from a page. (Internally this deletes the array entry)
|
168
|
+
* `EntryInsert(page, eindex, entry)` - Insert an entry, entry should be a dictionary value. (Internally this inserts the entry with a unique `_sig` and creates a unique `_id`)
|
169
|
+
* `EntryMutable(page, eindex)` - Returns a mutable entry at a specific index which you can then modify.
|
170
|
+
* `SetPageNext(page, id)` - Sets the `_next` id for the page
|
171
|
+
* `SetPageHead(page, id)` - Sets the `_head` id for the page
|
172
|
+
|
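The macros above can be pictured as plain-JavaScript stand-ins. This is a sketch only: the real macros are compiled inline, and `gen_id` here is a hypothetical placeholder for the kernel's unique-id generator.

```javascript
// Plain-JS stand-ins for the macro semantics described above (illustration
// only; not the actual compiled macro output). gen_id() is a hypothetical
// placeholder for the kernel's unique-id generator.
function gen_id() { return Math.random().toString(16).slice(2); }

// NewPage: blank page with a null _next, a null _head, and 0 entries
function new_page(id) {
  return { _id: id || gen_id(), _next: null, _head: null, entries: [] };
}

// EntryDel: delete the array entry at eindex
function entry_del(page, eindex) {
  page.entries.splice(eindex, 1);
}

// EntryInsert: insert a dictionary value with a fresh _id and unique _sig
function entry_insert(page, eindex, entry) {
  entry._id = gen_id();
  entry._sig = gen_id();
  page.entries.splice(eindex, 0, entry);
}
```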
173
|
+
Here is an example of a page being modified inside a controller after a `read_res`
|
174
|
+
```js
|
175
|
+
on "read_res", %{
|
176
|
+
//Copy page and modify it
|
177
|
+
var page = CopyPage(params.page);
|
178
|
+
|
179
|
+
//Remove first entry
|
180
|
+
EntryDel(page, 0);
|
181
|
+
|
182
|
+
//Insert an entry
|
183
|
+
var my_entry = {
|
184
|
+
z: 4
|
185
|
+
};
|
186
|
+
EntryInsert(page, 0, my_entry);
|
187
|
+
|
188
|
+
//Change an entry
|
189
|
+
var e = EntryMutable(page, 1);
|
190
|
+
e.k = 4;
|
191
|
+
e.z = 5;
|
192
|
+
|
193
|
+
//Write back page
|
194
|
+
var info = {page: page, ns: "user"};
|
195
|
+
Request("vm", "write", info);
|
196
|
+
}
|
197
|
+
```
|
122
198
|
|
123
|
-
|
124
|
-
|
125
|
-
* `set_page_head(page, hash)` - Sets the `_head` hash for the page
|
199
|
+
##Pagers
|
200
|
+
See [Pagers](./vm/pagers.md) for information on pager responsibilities and how to implement them.
|
126
201
|
|
127
202
|
##Spec helpers
|
128
203
|
The variable `vm_did_wakeup` is set to true in the wakeup part of the vm service.
|
data/docs/services/vm/pagers.md
CHANGED
@@ -1,46 +1,38 @@
39
|
-
* `init(options)` - Will set the `spec0_init_options` to be what ever options it got.
|
40
|
-
* `read` - Will set the `spec0_read_sync_called` to be true.
|
41
|
-
* `read_sync` - Will set the `spec0_read_sync_called` to be true.
|
42
|
-
###`spec1`
|
43
|
-
This pager is designed to test the read-sync-notify notification system. When this function is first called,
|
44
|
-
it will return 'a' for any value. The second call to read will return `b`.
|
45
|
-
* Supported operations
|
46
|
-
* `init(options)`
|
1
|
+
#Virtual Memory Pagers
|
2
|
+
If you haven't already, read [VM Service](../vm.md) for context on pagers.
|
3
|
+
|
4
|
+
------
|
5
|
+
##Functions required for a pager
|
6
|
+
* `$NAME_init(ns, options)` - Initialize your pager with a namespace (`ns`) and the set of options passed for this pager in the `service :vm` options (see [VM Service](../vm.md) for an example options hash).
|
7
|
+
* `$NAME_watch(id, page)` - A watch request has been placed for a page id. Multiple watch requests in the *vm service* **will not show up here**.
|
8
|
+
You will only get one watch request until you receive an unwatch request. You should attempt to update the page for that key as soon as possible
|
9
|
+
and then wait for future updates. `page` is either a cached page or `undefined`. You should never modify it directly; most pagers should use
|
10
|
+
`_hash` to check with a server whether the page needs updating at this point. Some pagers may pre-fetch more pages if there is a `_next`.
|
11
|
+
* `$NAME_unwatch(id)` - No controllers are watching the page whose `_id` field matches this id any longer
|
12
|
+
* `$NAME_write(page)` - You should write this page, e.g. to the network, and/or write it to `vm_cache_write`. Alternatively, you can write the page over the network and then let the response call `vm_cache_write` in whatever listening code you have.
|
13
|
+
* `page` - A fully constructed page with a correctly calculated `_hash` and `_sig`s on entries.
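Put together, a hypothetical pager named `pg_example` implementing the four required functions might look like this sketch. The exact `vm_cache_write(ns, page)` signature is an assumption based on the built-in pager descriptions below; adapt it to the real kernel API.

```javascript
// Hypothetical pager skeleton following the $NAME_* convention above.
// vm_cache_write(ns, page) is an assumed kernel function.
var pg_example_ns = null;

function pg_example_init(ns, options) {
  pg_example_ns = ns;  // remember our namespace for later cache writes
}

function pg_example_watch(id, page) {
  // `page` is a cached page or undefined; a typical pager would use
  // page && page._hash to ask a server whether the cached copy is stale,
  // then update the cache via vm_cache_write when new data arrives.
}

function pg_example_unwatch(id) {
  // no controllers are watching this id anymore; cancel any pending work
}

function pg_example_write(page) {
  // simplest possible behavior (like pg_mem0): forward straight to cache
  vm_cache_write(pg_example_ns, page);
}
```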
|
14
|
+
|
15
|
+
|
16
|
+
|
17
|
+
##When are pagers invoked?
|
18
|
+
Pagers handle all requests from controllers except under the following conditions:
|
19
|
+
1. There is a `watch` request placed but a previous `watch` request already exists for the requested page. The pager is already aware of the page watch request and is already waiting for a response. Cached pages would have been returned to the controller that made the `watch` request.
|
20
|
+
|
21
|
+
##Where to put pagers
|
22
|
+
A new pager class can be created by adding the pager under `./app/kern/services/pagers/*.js`. Please remember that we do not currently support multiple pager instances of each class; while there is a namespace distinction that could be used to instantiate the pager multiple times, we do not support statically generating multiple copies of the global variables needed per instance.
|
23
|
+
|
24
|
+
Please name your pagers `pg_XXXX` to help make it clear that you are writing a pager.
|
25
|
+
|
26
|
+
##Built-in Pagers
|
27
|
+
|
28
|
+
####Default memory pager | `pg_mem0`
|
29
|
+
The *default memory pager* does not do anything on `watch` or `unwatch`. It depends on the cache to reply to `watch` and `watch_sync` requests created by controllers. Controllers may write to this pager via `write`, which this pager then sends directly to `vm_cache_write`. This pager is always compiled into the kernel.
|
30
|
+
|
31
|
+
####Spec pager | `pg_spec0`
|
32
|
+
This pager does the following when calls are made to its functions; it is designed to assist with `vm` kernel specs.
|
33
|
+
* `init` - Sets `pg_spec0_init_params` to `{ns: ns, options: options}`
|
34
|
+
* `watch` - Appends `{id: id, hash: hash}` to `pg_spec0_watchlist`
|
35
|
+
* `unwatch` - Appends `id` to `pg_spec0_unwatchlist`
|
36
|
+
* `write` - Writes the given page to `vm_cache_write`
|
37
|
+
|
38
|
+
This pager only exists if the environment is in `DEBUG` mode (`@debug` is enabled).
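The bookkeeping described above can be sketched as follows; the recording globals are assumed to start empty, and recording `null` for the hash of an uncached page is an assumption for illustration.

```javascript
// Sketch of pg_spec0's recording behavior as described above (assumptions:
// globals start empty; an uncached page records a null hash).
var pg_spec0_init_params = null;
var pg_spec0_watchlist = [];
var pg_spec0_unwatchlist = [];

function pg_spec0_init(ns, options) {
  pg_spec0_init_params = { ns: ns, options: options };
}
function pg_spec0_watch(id, page) {
  pg_spec0_watchlist.push({ id: id, hash: page ? page._hash : null });
}
function pg_spec0_unwatch(id) {
  pg_spec0_unwatchlist.push(id);
}
```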
|