@latticexyz/services 2.0.0-alpha.0 → 2.0.0-alpha.1

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
@@ -1,164 +1,138 @@
1
1
  # Services
2
2
 
3
- This package contains MUD services -- complimentary software components for enhanced interactions with on-chain ECS state when building with MUD. Services are designed to work with the ECS data representations and work out-of-the-box with any project built with MUD. Every service is a stand-alone Go binary that can be run connected to a chain that a MUD application is deployed to. Refer below for more technical details and to the linked entry-points for each service for details such as required and optional command-line arguments that allow you to customize each service.
3
+ This package contains MUD services -- complementary software components for enhanced interactions with on-chain state when building with MUD. Services work out of the box with any project built with MUD.
4
4
 
5
- | Service | Description | Proto / Spec | Default Port |
6
- | ------------ | :--------------------------------------------------------------------------------------------------------------------------------------- | :----------------------------------------------- | -----------: |
7
- | ecs-snapshot | Indexer reducing ECS events into a single "current" state for fast snapshot client syncs | [ecs-snapshot.proto](./proto/ecs-snapshot.proto) | 50061 |
8
- | ecs-stream | Multiplexer for subscriptions to receive current block data, ECS events per block, and transaction origination data | [ecs-stream.proto](./proto/ecs-stream.proto) | 50051 |
9
- | ecs-relay | Generic message relayer, supporting signed messages, service-side signature verification, relay conditions for DDoS prevention, and more | [ecs-relay.proto](./proto/ecs-relay.proto) | 50071 |
10
- | faucet | Faucet supporting custom drip amounts, global limits, twitter verification, and integrations with MUD components | [faucet.proto](./proto/faucet.proto) | 50081 |
11
-
12
- ## Technical Details
5
+ ## V2 Services
13
6
 
14
- Every service is a Go stand-alone binary that can be run individually. Entry-points (`main.go` files) for each service can be found linked in each sub-section below.
7
+ ### [📄 Docs](#)
15
8
 
16
- #### General
9
+ The following services are available for use with MUD V2. For more details on each service, see the linked docs page.
17
10
 
18
- Each service exposes a gRPC server and a wrapper HTPP server (for ability to make gRPC wrapped requests from a web client, e.g. TypeScript MUD client). By default the gRPC server runs at the default `PORT` (specified above and in each `main.go` file) and the HTTP server runs at that `PORT + 1`. For example, the snapshot service has a gRPC server exposed on `50061` and a wrapper server is automatically exposed on `50062`.
11
+ | Service | Description | Proto / Spec | Default Port |
12
+ | ------- | :------------------------------------------------------------------------------------ | :------------------------------- | -----------: |
13
+ | mode    | A node for MUD. Postgres-based indexer of MUD V2 events across chains + MUD worlds.   | [mode.proto](./proto/mode.proto) | 50091 |
19
14
 
20
- Each service has specific command-line arguments. Each service requires a connection to an Ethereum node (for same network where your MUD application is deployed on) via a websocket. By default, all websocket connection URL parameters use a `localhost` instance running at port `8545`, so the full URL is `ws://localhost:8545`.
15
+ ### 🏃 Quickstart
21
16
 
22
- #### Dockerfile
17
+ #### Running the MODE service
23
18
 
24
- There are Dockerfiles for each service available at the root of this repo -- `Dockerfile.{faucet|relay|snapshot|stream}`. Note that if you want to modify the Dockerfiles, one thing to make sure of is the exposed port to matching the port that each binary is configured to listen to by default.
25
-
26
- Each service can be built and used within a Kubernetes cluster (via a resource that can pull the container image) by pushing the images to a container registry. For example, to build the snapshot server via the Dockerfile, we can build the image
19
+ 1. Install Go
20
+ 2. Install Postgres
21
+ 3. Decide which database you want to use, e.g. `mode`, and create the database
22
+ 4. Set up logical replication on the database of your choice, e.g. `mode`. Logical replication is used to enable fast MUD state change streaming using the WAL (Write-Ahead Log) of the database. For more information on logical replication, see the [Postgres documentation](https://www.postgresql.org/docs/current/logical-replication.html). To set this up, you need to:
23
+ 1. Modify the DB config to use logical replication. This is done by adding the following to the `postgresql.conf` file:
24
+ ```
25
+ wal_level = logical
26
+ max_replication_slots = 1
27
+ max_wal_senders = 1
28
+ ```
29
+ Alternatively, you can use the following SQL commands:
30
+ ```sql
31
+ ALTER SYSTEM SET wal_level = logical;
32
+ ALTER SYSTEM SET max_replication_slots = 1;
33
+ ALTER SYSTEM SET max_wal_senders = 1;
34
+ ```
35
+ 2. Restart the DB
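A quick way to confirm that the settings above took effect is to query them from a `psql` session after the restart. A minimal sketch (the database name `mode` is just the example from step (3)):

```sql
-- Create the example database from step (3), if you haven't already
CREATE DATABASE mode;

-- Confirm the logical replication settings from step (4);
-- each SHOW should report the value configured above
SHOW wal_level;
SHOW max_replication_slots;
SHOW max_wal_senders;
```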
36
+ 5. Build the source. This will build the MODE service
27
37
 
28
- ```
29
- docker build -f Dockerfile.snapshot . --tag ghcr.io/latticexyz/mud-ecs-snapshot:<YOUR_TAG>
38
+ ```bash
39
+ make mode
30
40
  ```
31
41
 
32
- and then push to the container registry
42
+ 6. Modify the `config.mode.yaml` MODE config file to match your preferences. MODE can be configured either with a config file or with equivalent command-line arguments. In particular, change the `dsn` field to match the database name you created in step (3): this is what MODE uses to connect to Postgres. Also adjust the `chains` section if you'd like MODE to index a chain other than a local node, and change the `port`s to match your preferences. Example config file:
33
43
 
44
+ ```yaml
45
+ chains:
46
+   - name: "localhost"
47
+     id: "371337"
48
+     rpc:
49
+       http: "http://localhost:8545"
50
+       ws: "ws://localhost:8545"
51
+ db:
52
+   dsn: "postgresql://localhost:5432/mode_ephemeral?sslmode=disable&replication=database"
53
+   wipe: false
54
+ sync:
55
+   enabled: true
56
+   startBlock: 0
57
+   blockBatchCount: 10000
58
+ ql:
59
+   port: 50091
60
+ metrics:
61
+   port: 6060
34
62
  ```
35
- docker push ghcr.io/latticexyz/mud-ecs-snapshot:<YOUR_TAG>
36
- ```
37
-
38
- #### Protobuf
39
-
40
- We use [Protocol Buffers](https://developers.google.com/protocol-buffers) to define the data structures and message schemas for each service. The `.proto` files are available in the `/proto` directory at the root of this repo -- `/proto/{ecs-relay|ecs-snapshot|ecs-stream|faucet}.proto`. For more details about `.proto` files and a language guide, see the [Language Guide (proto3)](https://developers.google.com/protocol-buffers/docs/proto3).
41
-
42
- #### gRPC
43
63
 
44
- We use [gRPC](https://grpc.io/docs/what-is-grpc/introduction/) along with protobuf for the complete Interface Definition Language (IDL) and message format. The `.proto` files in the `/proto` files directory contain the service definitions for each MUD service.
64
+ 7. If you're running with the default localhost / `371337` chain, make sure there is a local node running for the chain you want to connect to. For example, a hardhat node or an anvil node.
65
+ 8. Run the MODE service
45
66
 
46
- The benefit of using gRPC + protobuf is the abilitiy to generate both Golang and TypeScript stubs from the service + message definitions in `.proto` files. This way, we define what the service does and what kind of messages it can receive/send only once. We then generated the stubs for whatever language we want to use with the respective client-side or service-side codebase and we do so using the [protocol](https://grpc.io/docs/protoc-installation/) protocol buffer compiler. The generated stubs are placed in the `/protobuf` directory at the root of this repo, and are separated by subdirectories according to the language of the generated stubs. You may expect a directory structure like this
47
-
48
- ```
49
- /protobuf
50
- /go
51
- /ecs-relay
52
- /ecs-snapshot
53
- /ecs-stream
54
- /faucet
55
- /ts
56
- /ecs-relay
57
- /ecs-snapshot
58
- /ecs-stream
59
- /faucet
67
+ ```bash
68
+ ./bin/mode -config config.mode.yaml
60
69
  ```
61
70
 
62
- If you would like to make edits to the service/message definitions in the protobuf files, it's as easy as editing the relevant `.proto` files and re-running the `protoc` command (more on this in "Getting Started"), which will re-generate the stubs for the languages that have been configured (Golang and TypeScript). If you'd like to add more languages, take a look at the linked resources on gRPC + protobufs and make edits to the [`Makefile`](./Makefile).
63
-
64
- #### gRPC-web
65
-
66
- As mentioned earlier, there is an HTTP server that gets run along the gRPC server in order to receive requests from gRPC-web (which are just POST routes). To do this we wrap the gRPC server in a HTTP listener server behind a "proxy". The services use a wrapper Go library to wrap the gRPC server and expose the HTTP server which will listen for gRPC-web requests and do the proxying.
71
+ or
67
72
 
68
- #### grpcurl
69
-
70
- For quick testing or experimentation with the services, we recommend using [grpcurl](https://github.com/fullstorydev/grpcurl). For example, once you build and run the snapshot service locally you can test the endpoint which returns the latest known and computed state to the service like this
71
-
72
- ```
73
- grpcurl -plaintext -d '{"worldAddress": "<WORLD_ADDRESS>"}' localhost:50061 ecssnapshot.ECSStateSnapshotService/GetStateLatest
73
+ ```bash
74
+ make run-mode
74
75
  ```
75
76
 
76
- Note that the port is the gRPC server port and not the HTTP server port, since we are sending a raw gRPC request directly.
77
-
78
- ### [`ecs-snapshot`](./cmd/ecs-snapshot/main.go)
79
-
80
- This service's function is to compute and save the ECS state from the chain via "snapshots", such that a client can perform an initial sync to the ECS world state without having to process all ECS state changes (in the form of events).
81
-
82
- Because every update in MUD ECS is driven by events emitted on the world and triggered by individual component updates, to "catch up" to the "present time", any client needs to process and reduce the events that have been emitted on-chain. While possible to do and reasonable for applications with sparse component updates, once enough time passes (can reason about this as the chain getting "older"), it becomes infeasible and very redundant for every client to perform such a sync by manually reducing events. For example, two clients (even two browser windows on a single machine) would have to perform the same event processing steps in-browser to join a running instance of a deployed on-chain MUD application. Hence, we motivate the job of a snapshot service as a task to "catch" events as they are emitted, parse them out of every block, and reduce them into a state. In this way, the snapshot service effectively computes the "current" world state as it is updated on-chain. Put differently, it "indexes" the events into the state so that clients don't have to, hence the interchangeable use of "indexer" to call the snapshot service.
83
-
84
- The interaction from a client perspective now becomes simpler. If a client needs to sync (as it has to if a new user is attempting to interact with an instance of deployed MUD application), it simply makes a call to an API endpoint that the snapshot service exposes and receives the current state encoded according to a spec over the wire.
85
-
86
- There are multiple endpoints defined in the protobuf file and implemented in the gRPC Go server. For example, you can request the state as a single object via `/GetStateLatest`, but for larger states, there is an endpoint that can chunk the snapshot object according to a variable percentage, `/GetStateLatestStream`. This allows the client to load the state in, for instance, chunks of 1% to reduce the bandwidth load. State growth means that snapshots might get large enough that even a streamed RPC is a bit too much for a web client to handle. For this, there are a number of "pruned" state endpoints that return the snapshot state but with some specific components and their data omitted. Note that these endpoints are experimental and can be tweaked according to specific use cases when dealing with large state growth.
87
-
88
- ### [`ecs-stream`](./cmd/ecs-stream/main.go)
77
+ 9. Optionally, install `grpcurl` to interact with the MODE service API from the command line. For example, on macOS you can use `brew` to install `grpcurl`:
89
78
 
90
- This service's function is to serve as a multiplexer, subscribing to a feed of data from an EVM-based network and allowing multiple clients to selectively subscribe to subsets of the data that they care about.
91
-
92
- When building with MUD, you're likely to want to know when new blocks are produced and what transactions are included in those blocks since transactions generate state changes that are expressed as ECS events and hence are of interest to the application. One naive way to implement an app's "update" functionality is to "poll" the network at certain time intervals to get up-to-date information. For instance, the client can make an RPC call to a chain such as `eth_getBlockByNumber`. This approach is limiting because it creates unnecessary overhead where clients must initiate requests instead of reacting to state change.
93
-
94
- The stream service provides a flexible way to receive updates and is integrated with MUD to provide specific per-block data, such as all ECS events in that block. The stream service intakes block updates when connected to a network node and makes the data available for multiple consumers. This means that the service consumes data once but makes it available to as many clients as connected to the service. Additionally, the service has a flexible message subscription schema where clients can specify exactly what data they're interested in subscribing to. For example, if a client only cares about what block number it is, it's sufficient to subscribe to the block number only. Clients who also care about the timestamp or the block hash are free to request those when subscribing to the stream.
95
-
96
- The stream service contains a single RPC method called `/SubscribeToStreamLatest` that the clients connect to. We also refer to connected clients on this endpoint as "opening a cursor", since clients, by default, are kept connected and receive updates from the service as a server-side stream until they explicitly disconnect or there's a connection error.
97
-
98
- ### [`ecs-relay`](./cmd/ecs-relay/main.go)
99
-
100
- This service's function is to act as an arbitrary, configurable message relay for data that does not _have_ to go on chain but which an application built with MUD can plug in to utilize seamlessly. The relay service is configurable to support arbitrary messages, messages with signatures, signature verification, and conditions for message relay, such as "do not relay message if balance < threshold" for DDoS prevention.
101
-
102
- The relay works by exposing a system of "topics" and subscriptions/unsubscriptions that clients can opt in and opt-out of depending on interests. On top of the topic system, the relay exposes an endpoint for clients to "push" messages with topics attached to them that are then relayed. Messages are relayed to clients who subscribe to the aforementioned topic, which is done via a different endpoint akin to opening a cursor and listening for relayed events.
103
-
104
- The flow in detail may resemble something like this.
105
-
106
- 1. Client "authenticates" with the service by making RPC on `/Authenticate` endpoint. The client has to identify itself to the service by providing a signature, at which point the public key of the message signer is registered as an identity by the service (which does this by recovering the signer from the signature). If this RPC returns successfully, then the service has registered this client.
107
-
108
- 2. Client subscribes to any labels that it is interested in via the `/Subscribe` endpoint. For example, this can be a recurrent process where the client keeps subscribing / unsubscribing to chunks as the player moves around a map. We needed to "authenticate" first to associate these subscriptions with a given client. This way, the service knows who is sending what. So as part of the request, the "identity" is provided to this RPC by the client in the form of a signature. The service again recovers the signer and checks against known registrations.
109
-
110
- 3. At the same time as subscribing (in another thread, for instance, or something similar), a client opens a cursor to receive events via `/OpenCursor`, again providing a signature to identify itself. This will use any current subscriptions at a given time from step (2) and pipe any messages to a stream. There is a timeout feature designed to disconnect idle clients, so we also need to keep sending a `/Ping` RPC to keep this stream open.
111
-
112
- 4. At this point, steps (2) and (3) are active, `/Subscribe` & `/Unsubscribe` keep being called to update what the client wants to see via the opened cursor, and `/Ping`s are sent to keep the connection alive
113
-
114
- 5. Last but not least, in parallel with all of this, the client most likely needs to send a bunch of stuff to be relayed, so to do that, it uses the `/Push` or `/PushStream` RPC and sends messages with some given label that identifies a topic that others might subscribe to. These labeled "pushes" are then relayed to whoever is subscribed to the labels and has a `/OpenCursor` active, etc., etc., and so on.
115
-
116
- The `main.go` entry point for the relay service contains several command line arguments that can be tweaked to enhance and restrict the message relay flows as desired.
79
+ ```bash
80
+ brew install grpcurl
81
+ ```
117
82
 
118
- ### [`faucet`](./cmd/faucet/main.go)
83
+ 10. MODE exposes a `QueryLayer` gRPC server on port `50091` by default. You can use a gRPC client to interact with the service API. For example, to query for the current state of an indexed MUD world deployed at address `0xff738496c8cd898dC31b670D067162200C5c20A1` on a local chain with ID `371337`, you can use the `GetState` RPC endpoint:
119
84
 
120
- This service's function is to act as a configurable faucet with in-service integrations for MUD transactions. A faucet, by definition, is a service that distributes a pre-set amount of currency on a network limited by a global limit and/or a time limit. For example, a faucet might be able to "drip" 0.01 ETH on a testnet, claimable by the same address no more than once per 12 hours, with a total daily limit of 100 ETH. This service allows you to run a faucet just like this and more.
85
+ ```bash
86
+ grpcurl -plaintext -d '{"chainTables": [], "worldTables": [], "namespace": {"chainId":"371337", "worldAddress": "0xff738496c8cd898dC31b670D067162200C5c20A1"}}' localhost:50091 mode.QueryLayer/GetState
87
+ ```
121
88
 
122
- #### Twitter Verification
89
+ After the initial setup, to quickly re-build and run the MODE service, you can use
123
90
 
124
- The faucet additionally supports verification via Twitter, utilizing the Twitter API and digital signature verification. Note that this requires a Twitter API secret & key that should be obtained from the Twitter Developer portal. A Twitter verification allows you to run a faucet with an extra condition enforced on the ability of your users to claim a "drip". In addition to the time / amount limits, with Twitter verification, a user of your app will have to tweet a valid digital signature to serve as proof of ownership over the address that they are requesting the drip to. In this way, the user "links" the Twitter username with an address and after making an RPC call to verify the tweet, receive a drip. Follow-up requests for a drip from the faucet service do not require extra tweets. Drip limits, time limits, and global ETH emission limits are still enforced the same way as running without Twitter verification.
91
+ ```bash
92
+ make mode run-mode
93
+ ```
125
94
 
126
- #### MUD Transaction Support
95
+ ## V1 Services
127
96
 
128
- The faucet also supports integration with the MUD World contracts and Components and allows you to insert custom code on "drip" events to set MUD Component values. This allows for close integration with your deployed on-chain MUD application. For example, you can build an extended faucet, which accepts drip requests with Twitter verification, and after verifying the signature in the Tweet, sends an on-chain transaction to set a Component value to link the Twitter username and signer address on-chain. This then can allow the client, for instance, a web app, to display the linked Twitter username for the user by getting the state directly from the on-chain state without relying on any server, even the faucet itself.
97
+ ### [📄 Docs](./README.v1.md)
129
98
 
130
- Similarly, as for other services, check out the services `main.go` entry point file for more command-line arguments that can be configured to tweak the configuration of the faucet and turn features on or off.
99
+ The following services are available for use with MUD V1. For more details on each service, see the linked docs page.
131
100
 
132
- ## Getting started
101
+ | Service | Description | Proto / Spec | Default Port |
102
+ | ------------ | :--------------------------------------------------------------------------------------------------------------------------------------- | :----------------------------------------------- | -----------: |
103
+ | ecs-snapshot | Indexer reducing ECS events into a single "current" state for fast snapshot client syncs | [ecs-snapshot.proto](./proto/ecs-snapshot.proto) | 50061 |
104
+ | ecs-stream | Multiplexer for subscriptions to receive current block data, ECS events per block, and transaction origination data | [ecs-stream.proto](./proto/ecs-stream.proto) | 50051 |
105
+ | ecs-relay | Generic message relayer, supporting signed messages, service-side signature verification, relay conditions for DDoS prevention, and more | [ecs-relay.proto](./proto/ecs-relay.proto) | 50071 |
106
+ | faucet | Faucet supporting custom drip amounts, global limits, twitter verification, and integrations with MUD components | [faucet.proto](./proto/faucet.proto) | 50081 |
133
107
 
134
- ### Quickstart
108
+ ### 🏃 Quickstart
135
109
 
136
- The services are written in Go, so to compile and run the service locally you will need Golang installed locally. We use a [`Makefile`](./Makefile) for 'build' & 'run' tasks.
110
+ #### Running the ECS Snapshot, Stream, Relay, and Faucet services
137
111
 
138
112
  1. Install Go
139
-
140
113
  2. Build the source. This will build all the services
141
114
 
142
- ```
115
+ ```bash
143
116
  make build
144
117
  ```
145
118
 
146
- 3. Frequently you'd like to build only a specific service, for example, as you're developing it might not be needed to rebuild all services. For this case the `Makefile` exposes individual commands to build specific service binaries. For example, to build the snapshot service only
119
+ or to build only specific services
147
120
 
148
- ```
149
- make ecs-snapshot
121
+ ```bash
122
+ make ecs-snapshot ecs-stream ecs-relay faucet
150
123
  ```
151
124
 
125
+ 3. If you're running with the default chain, make sure there is a local node running for the chain you want to connect to. For example, a hardhat node or an anvil node.
152
126
  4. Run whichever binary via [`Makefile`](./Makefile). For example, to run the snapshot service
153
127
 
154
- ```
155
- make run-ecs-snapshot WS_URL=<websocket URL>
128
+ ```bash
129
+ make run-ecs-snapshot
156
130
  ```
157
131
 
158
- ### Generating protobuf files
132
+ ## Protobuf
159
133
 
160
- The package has the protobuf files checked in, but in case you want to regenerate those (based on an updated `.proto` file for instance), run
134
+ MUD services use [Protocol Buffers](https://developers.google.com/protocol-buffers) to define the data structures and message schemas. The `.proto` files are available in the `/proto` directory at the root of this repo. For more details about `.proto` files and a language guide, see the [Language Guide (proto3)](https://developers.google.com/protocol-buffers/docs/proto3). The package has the protobuf files checked in, but in case you want to regenerate those (based on an updated `.proto` file for instance), run
161
135
 
162
- ```
136
+ ```bash
163
137
  make protoc
164
138
  ```
package/README.v1.md ADDED
@@ -0,0 +1,119 @@
1
+ ## Technical Details
2
+
3
+ Every service is a Go stand-alone binary that can be run individually. Entry-points (`main.go` files) for each service can be found linked in each sub-section below.
4
+
5
+ #### General
6
+
7
+ Each service exposes a gRPC server and a wrapper HTTP server (to allow gRPC-wrapped requests from a web client, e.g. a TypeScript MUD client). By default the gRPC server runs at the default `PORT` (specified above and in each `main.go` file) and the HTTP server runs at that `PORT + 1`. For example, the snapshot service has a gRPC server exposed on `50061` and a wrapper server is automatically exposed on `50062`.
8
+
9
+ Each service has specific command-line arguments. Each service requires a websocket connection to an Ethereum node (on the same network your MUD application is deployed to). By default, all websocket connection URL parameters use a `localhost` instance running at port `8545`, so the full URL is `ws://localhost:8545`.
10
+
11
+ #### Dockerfile
12
+
13
+ There are Dockerfiles for each service available at the root of this repo -- `Dockerfile.{faucet|relay|snapshot|stream}`. Note that if you want to modify the Dockerfiles, make sure the exposed port matches the port that each binary is configured to listen on by default.
14
+
15
+ Each service can be built and used within a Kubernetes cluster (via a resource that can pull the container image) by pushing the images to a container registry. For example, to build the snapshot server via the Dockerfile, we can build the image
16
+
17
+ ```
18
+ docker build -f Dockerfile.snapshot . --tag ghcr.io/latticexyz/mud-ecs-snapshot:<YOUR_TAG>
19
+ ```
20
+
21
+ and then push to the container registry
22
+
23
+ ```
24
+ docker push ghcr.io/latticexyz/mud-ecs-snapshot:<YOUR_TAG>
25
+ ```
26
+
27
+ #### Protobuf
28
+
29
+ We use [Protocol Buffers](https://developers.google.com/protocol-buffers) to define the data structures and message schemas for each service. The `.proto` files are available in the `/proto` directory at the root of this repo -- `/proto/{ecs-relay|ecs-snapshot|ecs-stream|faucet}.proto`. For more details about `.proto` files and a language guide, see the [Language Guide (proto3)](https://developers.google.com/protocol-buffers/docs/proto3).
30
+
31
+ #### gRPC
32
+
33
+ We use [gRPC](https://grpc.io/docs/what-is-grpc/introduction/) along with protobuf for the complete Interface Definition Language (IDL) and message format. The `.proto` files in the `/proto` directory contain the service definitions for each MUD service.
34
+
35
+ The benefit of using gRPC + protobuf is the ability to generate both Golang and TypeScript stubs from the service + message definitions in `.proto` files. This way, we define what the service does and what kind of messages it can receive/send only once. We then generate the stubs for whatever language we want to use with the respective client-side or service-side codebase, using the [protoc](https://grpc.io/docs/protoc-installation/) protocol buffer compiler. The generated stubs are placed in the `/protobuf` directory at the root of this repo, and are separated by subdirectories according to the language of the generated stubs. You may expect a directory structure like this
36
+
37
+ ```
38
+ /protobuf
39
+ /go
40
+ /ecs-relay
41
+ /ecs-snapshot
42
+ /ecs-stream
43
+ /faucet
44
+ /ts
45
+ /ecs-relay
46
+ /ecs-snapshot
47
+ /ecs-stream
48
+ /faucet
49
+ ```
50
+
51
+ If you would like to make edits to the service/message definitions in the protobuf files, it's as easy as editing the relevant `.proto` files and re-running the `protoc` command (more on this in "Getting Started"), which will re-generate the stubs for the languages that have been configured (Golang and TypeScript). If you'd like to add more languages, take a look at the linked resources on gRPC + protobufs and make edits to the [`Makefile`](./Makefile).
52
+
53
+ #### gRPC-web
54
+
55
+ As mentioned earlier, an HTTP server runs alongside the gRPC server in order to receive requests from gRPC-web (which are just POST routes). To do this we wrap the gRPC server in an HTTP listener server behind a "proxy". The services use a wrapper Go library to wrap the gRPC server and expose the HTTP server, which listens for gRPC-web requests and does the proxying.
56
+
57
+ #### grpcurl
58
+
59
+ For quick testing or experimentation with the services, we recommend using [grpcurl](https://github.com/fullstorydev/grpcurl). For example, once you build and run the snapshot service locally you can test the endpoint which returns the latest state known to and computed by the service, like this
60
+
61
+ ```
62
+ grpcurl -plaintext -d '{"worldAddress": "<WORLD_ADDRESS>"}' localhost:50061 ecssnapshot.ECSStateSnapshotService/GetStateLatest
63
+ ```
64
+
65
+ Note that the port is the gRPC server port and not the HTTP server port, since we are sending a raw gRPC request directly.
66
+
67
+ ### [`ecs-snapshot`](./cmd/ecs-snapshot/main.go)
68
+
69
+ This service's function is to compute and save the ECS state from the chain via "snapshots", such that a client can perform an initial sync to the ECS world state without having to process all ECS state changes (in the form of events).
70
+
71
+ Because every update in MUD ECS is driven by events emitted on the world and triggered by individual component updates, to "catch up" to the "present time", any client needs to process and reduce the events that have been emitted on-chain. While possible to do and reasonable for applications with sparse component updates, once enough time passes (can reason about this as the chain getting "older"), it becomes infeasible and very redundant for every client to perform such a sync by manually reducing events. For example, two clients (even two browser windows on a single machine) would have to perform the same event processing steps in-browser to join a running instance of a deployed on-chain MUD application. Hence, we motivate the job of a snapshot service as a task to "catch" events as they are emitted, parse them out of every block, and reduce them into a state. In this way, the snapshot service effectively computes the "current" world state as it is updated on-chain. Put differently, it "indexes" the events into the state so that clients don't have to, hence the snapshot service is also interchangeably called an "indexer".
72
+
73
+ The interaction from a client perspective now becomes simpler. If a client needs to sync (as it has to if a new user is attempting to interact with an instance of a deployed MUD application), it simply makes a call to an API endpoint that the snapshot service exposes and receives the current state encoded according to a spec over the wire.
74
+
75
+ There are multiple endpoints defined in the protobuf file and implemented in the gRPC Go server. For example, you can request the state as a single object via `/GetStateLatest`, but for larger states, there is an endpoint that can chunk the snapshot object according to a variable percentage, `/GetStateLatestStream`. This allows the client to load the state in, for instance, chunks of 1% to reduce the bandwidth load. State growth means that snapshots might get large enough that even a streamed RPC is a bit too much for a web client to handle. For this, there are a number of "pruned" state endpoints that return the snapshot state but with some specific components and their data omitted. Note that these endpoints are experimental and can be tweaked according to specific use cases when dealing with large state growth.
76
+
77
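As a rough sketch of the chunking idea (illustrative only, not the service's actual implementation, which streams protobuf-encoded ECS state), a 1% chunk setting splits the serialized snapshot like this:

```go
package main

import "fmt"

// chunkState splits a serialized snapshot into chunks of roughly
// chunkPercentage percent of the total size, mirroring how a client
// might receive the state from /GetStateLatestStream piece by piece.
func chunkState(state []byte, chunkPercentage int) [][]byte {
	if chunkPercentage <= 0 || chunkPercentage > 100 {
		chunkPercentage = 100
	}
	chunkSize := len(state) * chunkPercentage / 100
	if chunkSize == 0 {
		chunkSize = 1
	}
	var chunks [][]byte
	for start := 0; start < len(state); start += chunkSize {
		end := start + chunkSize
		if end > len(state) {
			end = len(state)
		}
		chunks = append(chunks, state[start:end])
	}
	return chunks
}

func main() {
	state := make([]byte, 1000) // pretend this is the encoded snapshot
	chunks := chunkState(state, 1)
	fmt.Println(len(chunks), len(chunks[0])) // 100 chunks of 10 bytes each
}
```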
### [`ecs-stream`](./cmd/ecs-stream/main.go)

This service is a multiplexer: it subscribes to a feed of data from an EVM-based network and lets multiple clients selectively subscribe to the subsets of that data they care about.

When building with MUD, you typically want to know when new blocks are produced and which transactions they include, since transactions generate state changes that are expressed as ECS events and hence are of interest to the application. A naive way to implement an app's "update" functionality is to poll the network at fixed intervals, for instance via an RPC call such as `eth_getBlockByNumber`. This approach creates unnecessary overhead: clients must keep initiating requests instead of reacting to state changes.

The stream service provides a flexible way to receive updates and is integrated with MUD to provide specific per-block data, such as all ECS events in a block. Connected to a network node, the service intakes block updates once and makes the data available to as many clients as are connected. Its subscription schema also lets clients specify exactly which data they want: a client that only cares about the current block number can subscribe to the block number alone, while clients that also want the timestamp or block hash can request those when subscribing.

The stream service exposes a single RPC, `/SubscribeToStreamLatest`, that clients connect to. We also refer to connecting to this endpoint as "opening a cursor": by default, clients stay connected and receive updates as a server-side stream until they explicitly disconnect or a connection error occurs.

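The fan-out idea behind the multiplexer can be sketched with channels (a minimal, hypothetical in-memory version; the real service streams protobuf messages over gRPC):

```go
package main

import (
	"fmt"
	"sync"
)

// BlockUpdate is a simplified stand-in for the per-block payload the
// stream service multiplexes to its subscribers.
type BlockUpdate struct {
	BlockNumber uint64
}

// Multiplexer consumes a single source of block updates and fans each
// update out to every subscribed client channel.
type Multiplexer struct {
	mu   sync.Mutex
	subs []chan BlockUpdate
}

// Subscribe registers a new client and returns its update channel.
func (m *Multiplexer) Subscribe() <-chan BlockUpdate {
	m.mu.Lock()
	defer m.mu.Unlock()
	ch := make(chan BlockUpdate, 16)
	m.subs = append(m.subs, ch)
	return ch
}

// Publish delivers one update to all current subscribers.
func (m *Multiplexer) Publish(u BlockUpdate) {
	m.mu.Lock()
	defer m.mu.Unlock()
	for _, ch := range m.subs {
		ch <- u
	}
}

func main() {
	m := &Multiplexer{}
	a, b := m.Subscribe(), m.Subscribe()
	// The update is consumed from the chain once but seen by both clients.
	m.Publish(BlockUpdate{BlockNumber: 42})
	fmt.Println((<-a).BlockNumber, (<-b).BlockNumber) // 42 42
}
```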
### [`ecs-relay`](./cmd/ecs-relay/main.go)

This service acts as an arbitrary, configurable message relay for data that does not _have_ to go on-chain but that an application built with MUD can plug in and use seamlessly. The relay can be configured to support arbitrary messages, signed messages, service-side signature verification, and conditions for relaying, such as "do not relay messages if balance < threshold" for DDoS prevention.

The relay exposes a system of "topics" that clients can subscribe to and unsubscribe from depending on their interests. On top of the topic system, the relay exposes an endpoint for clients to "push" messages tagged with a topic; these messages are then relayed to every client subscribed to that topic via a separate endpoint, akin to opening a cursor and listening for relayed events.

In detail, the flow resembles the following:

1. The client "authenticates" with the service by calling the `/Authenticate` endpoint. It identifies itself by providing a signature, from which the service recovers the signer and registers the signer's public key as an identity. If this RPC returns successfully, the service has registered the client.

2. The client subscribes to any labels it is interested in via the `/Subscribe` endpoint. This can be a recurrent process, for example repeatedly subscribing and unsubscribing to chunks as the player moves around a map. Authentication in step (1) lets the service associate these subscriptions with a given client, so each request includes the client's "identity" in the form of a signature; the service again recovers the signer and checks it against known registrations.

3. In parallel with subscribing (in another thread, for instance), the client opens a cursor to receive events via `/OpenCursor`, again providing a signature to identify itself. The stream pipes through any messages matching the client's current subscriptions from step (2). A timeout disconnects idle clients, so the client must keep sending `/Ping` RPCs to keep the stream open.

4. With steps (2) and (3) active, `/Subscribe` and `/Unsubscribe` keep being called to update what the client sees via the opened cursor, and `/Ping`s keep the connection alive.

5. Finally, also in parallel, the client most likely needs to send messages of its own to be relayed. It does so with the `/Push` or `/PushStream` RPC, sending messages tagged with a label that identifies a topic others might subscribe to. These labeled pushes are then relayed to every client subscribed to that label with an open cursor.

The `main.go` entry point for the relay service exposes several command-line arguments for tightening or relaxing the message relay flow as desired.

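The topic system at the heart of this flow can be sketched as a small in-memory relay (hypothetical names; the real service adds identity registration, signature checks, and relay conditions on top):

```go
package main

import "fmt"

// Relay maps each label (topic) to the inbox channels of the clients
// subscribed to it. Pushed messages are delivered to every subscriber
// of the message's label.
type Relay struct {
	subs map[string]map[string]chan string // label -> client id -> inbox
}

func NewRelay() *Relay {
	return &Relay{subs: map[string]map[string]chan string{}}
}

// Subscribe registers a client's interest in a label and returns the
// inbox it will receive relayed messages on.
func (r *Relay) Subscribe(client, label string) <-chan string {
	if r.subs[label] == nil {
		r.subs[label] = map[string]chan string{}
	}
	ch := make(chan string, 8)
	r.subs[label][client] = ch
	return ch
}

// Unsubscribe removes a client's interest in a label.
func (r *Relay) Unsubscribe(client, label string) {
	delete(r.subs[label], client)
}

// Push relays a message to everyone subscribed to its label.
func (r *Relay) Push(label, msg string) {
	for _, ch := range r.subs[label] {
		ch <- msg
	}
}

func main() {
	r := NewRelay()
	inbox := r.Subscribe("alice", "chunk-3-7")
	r.Push("chunk-3-7", "player moved")
	fmt.Println(<-inbox) // player moved
}
```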
### [`faucet`](./cmd/faucet/main.go)

This service is a configurable faucet with in-service integrations for MUD transactions. A faucet, by definition, is a service that distributes a pre-set amount of currency on a network, limited by a global limit and/or a time limit. For example, a faucet might "drip" 0.01 ETH on a testnet, claimable by the same address no more than once per 12 hours, with a total daily limit of 100 ETH. This service lets you run a faucet just like this, and more.

#### Twitter Verification

The faucet additionally supports verification via Twitter, using the Twitter API and digital signature verification. Note that this requires a Twitter API key and secret obtained from the Twitter Developer portal. Twitter verification adds an extra condition on your users' ability to claim a "drip": in addition to the time and amount limits, a user must tweet a valid digital signature as proof of ownership of the address the drip is requested for. The user thereby "links" their Twitter username with an address and, after an RPC call verifies the tweet, receives the drip. Follow-up drip requests do not require extra tweets, and drip limits, time limits, and global ETH emission limits are enforced the same way as when running without Twitter verification.

#### MUD Transaction Support

The faucet also integrates with the MUD World contracts and Components, letting you insert custom code on "drip" events to set MUD Component values. This allows close integration with your deployed on-chain MUD application. For example, you can build an extended faucet that accepts drip requests with Twitter verification and, after verifying the signature in the tweet, sends an on-chain transaction that sets a Component value linking the Twitter username and signer address on-chain. A client, for instance a web app, can then display the linked Twitter username for the user by reading directly from on-chain state, without relying on any server, even the faucet itself.

As with the other services, check the service's `main.go` entry point for additional command-line arguments that configure the faucet and turn features on or off.
package/bin/ecs-relay CHANGED
Binary file
package/bin/ecs-snapshot CHANGED
Binary file
package/bin/ecs-stream CHANGED
Binary file
package/bin/faucet CHANGED
Binary file
package/package.json CHANGED
@@ -1,7 +1,7 @@
 {
   "name": "@latticexyz/services",
   "license": "MIT",
-  "version": "2.0.0-alpha.0",
+  "version": "2.0.0-alpha.1+68a8837a",
   "description": "MUD services for enhanced interactions with on-chain ECS state",
   "main": "protobuf/ts/index.ts",
   "type": "module",
@@ -11,21 +11,20 @@
     "directory": "packages/services"
   },
   "scripts": {
-    "prepare": "make build",
+    "build": "make build",
     "docs": "rimraf API && mkdir -p _docs/pkg && find pkg -type f -name '*.go' -exec bash -c 'gomarkdoc {} > \"$(dirname _docs/{})\".md' \\; && mv _docs/pkg API && rimraf _docs",
     "test": "tsc --noEmit && echo 'todo: add tests'",
     "protoc-ts": "make protoc-ts",
-    "link": "yarn link",
     "release": "npm publish --access=public"
   },
   "devDependencies": {
     "rimraf": "^3.0.2",
-    "ts-proto": "^1.126.1"
+    "ts-proto": "^1.146.0"
   },
-  "gitHead": "fcb2166c25edd27ead54f0afa1b71d2583939603",
+  "gitHead": "68a8837afadedf17cef13328e8b818b068a22765",
   "dependencies": {
     "long": "^5.2.1",
-    "nice-grpc-common": "^2.0.0",
-    "protobufjs": "^7.1.2"
+    "nice-grpc-common": "^2.0.2",
+    "protobufjs": "^7.2.3"
   }
 }