alpha_omega 1.0.1 → 1.1.1

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
data/libexec/shocco.sh DELETED
@@ -1,466 +0,0 @@
- #!/bin/sh
- # **shocco** is a quick-and-dirty, literate-programming-style documentation
- # generator written for and in __POSIX shell__. It borrows liberally from
- # [Docco][do], the original Q&D literate-programming-style doc generator.
- #
- # `shocco(1)` reads shell scripts and produces annotated source documentation
- # in HTML format. Comments are formatted with Markdown and presented
- # alongside syntax highlighted code so as to give an annotation effect. This
- # page is the result of running `shocco` against [its own source file][sh].
- #
- # shocco is built with `make(1)` and installs under `/usr/local` by default:
- #
- #     git clone git://github.com/rtomayko/shocco.git
- #     cd shocco
- #     make
- #     sudo make install
- #     # or just copy 'shocco' wherever you need it
- #
- # Once installed, the `shocco` program can be used to generate documentation
- # for a shell script:
- #
- #     shocco shocco.sh
- #
- # The generated HTML is written to `stdout`.
- #
- # [do]: http://jashkenas.github.com/docco/
- # [sh]: https://github.com/rtomayko/shocco/blob/master/shocco.sh#commit
-
- # Usage and Prerequisites
- # -----------------------
-
- # The most important line in any shell program.
- set -e
-
- # There's a lot of different ways to do usage messages in shell scripts.
- # This is my favorite: you write the usage message in a comment --
- # typically right after the shebang line -- *BUT*, use a special comment prefix
- # like `#/` so that it's easy to pull these lines out.
- #
- # This also illustrates one of shocco's corner features. Only comment lines
- # padded with a space are considered documentation. A `#` followed by any
- # other character is considered code.
- #
- #/ Usage: shocco [-t <title>] [<source>]
- #/ Create literate-programming-style documentation for shell scripts.
- #/
- #/ The shocco program reads a shell script from <source> and writes
- #/ generated documentation in HTML format to stdout. When <source> is
- #/ '-' or not specified, shocco reads from stdin.
-
- # This is the second part of the usage message technique: `grep` yourself
- # for the usage message comment prefix and then cut off the first few
- # characters so that everything lines up.
- expr -- "$*" : ".*--help" >/dev/null && {
-     grep '^#/' <"$0" | cut -c4-
-     exit 0
- }
-
- # A custom title may be specified with the `-t` option. We use the filename
- # as the title if none is given.
- test "$1" = '-t' && {
-     title="$2"
-     shift;shift
- }
-
- # Next argument should be the `<source>` file. Grab it, and use its basename
- # as the title if none was given with the `-t` option.
- file="$1"
- : ${title:=$(basename "$file")}
-
- # These are replaced with the full paths to real utilities by the
- # configure/make system.
- MARKDOWN='Markdown.pl'
- PYGMENTIZE='pygmentize'
-
- # We're going to need a `markdown` command to run comments through. This can
- # be [Gruber's `Markdown.pl`][md] (included in the shocco distribution) or
- # Discount's super fast `markdown(1)` in C. Try to figure out if either is
- # available and then bail if we can't find anything.
- #
- # [md]: http://daringfireball.net/projects/markdown/
- # [ds]: http://www.pell.portland.or.us/~orc/Code/discount/
- command -v "$MARKDOWN" >/dev/null || {
-     if command -v Markdown.pl >/dev/null
-     then alias markdown='Markdown.pl'
-     elif test -f "$(dirname $0)/Markdown.pl"
-     then alias markdown="perl $(dirname $0)/Markdown.pl"
-     else echo "$(basename $0): markdown command not found." 1>&2
-          exit 1
-     fi
- }
-
- # Check that [Pygments][py] is installed for syntax highlighting.
- #
- # This is a fairly hefty prerequisite. Eventually, I'd like to fall back
- # on a simple non-highlighting preformatter when Pygments isn't available. For
- # now, just bail out if we can't find the `pygmentize` program.
- #
- # [py]: http://pygments.org/
- command -v "$PYGMENTIZE" >/dev/null || {
-     echo "$(basename $0): pygmentize command not found." 1>&2
-     exit 1
- }
-
- # Work and Cleanup
- # ----------------
-
- # Make sure we have a `TMPDIR` set. The `:=` parameter expansion assigns
- # the value if `TMPDIR` is unset or null.
- : ${TMPDIR:=/tmp}
-
- # Create a temporary directory for doing work. Use `mktemp(1)` if
- # available; but, since `mktemp(1)` is not POSIX specified, fall back on naive
- # (and insecure) temp dir generation using the program's basename and pid.
- : ${WORK:=$(
-     if command -v mktemp 1>/dev/null 2>&1
-     then
-         mktemp -d "$TMPDIR/$(basename $0).XXXXXXXXXX"
-     else
-         dir="$TMPDIR/$(basename $0).$$"
-         mkdir "$dir"
-         echo "$dir"
-     fi
- )}
-
- # We want to be absolutely sure we're not going to do something stupid like
- # use `.` or `/` as a work dir. Better safe than sorry.
- test -z "$WORK" -o "$WORK" = '/' && {
-     echo "$(basename $0): could not create a temp work dir."
-     exit 1
- }
-
- # We're about to create a ton of shit under our `$WORK` directory. Register
- # an `EXIT` trap that cleans everything up. This guarantees we don't leave
- # anything hanging around unless we're killed with a `SIGKILL`.
- trap "rm -rf $WORK" 0
-
- # Preformatting
- # -------------
- #
- # Start out by applying some light preformatting to the `<source>` file to
- # make the code and doc formatting phases a bit easier. The result of this
- # pipeline is written to a temp file under the `$WORK` directory so we can
- # take a few passes over it.
-
- # Get a pipeline going with the `<source>` data. We write a single blank
- # line at the end of the file to make sure we have an equal number of code/comment
- # pairs.
- (cat "$file" && printf "\n\n# \n\n") |
-
- # We want the shebang line and any code preceding the first comment to
- # appear as the first code block. This inverts the normal flow of things.
- # Usually, we have comment text followed by code; in this case, we have
- # code followed by comment text.
- #
- # Read the first code and docs headers and flip them so the first docs block
- # comes before the first code block.
- (
-     lineno=0
-     codebuf=;codehead=
-     docsbuf=;docshead=
-     while read -r line
-     do
-         # Issue a warning if the first line of the script is not a shebang
-         # line. This can screw things up and wreck our attempt at
-         # flip-flopping the two headings.
-         lineno=$(( $lineno + 1 ))
-         test $lineno = 1 && ! expr "$line" : "#!.*" >/dev/null &&
-             echo "$(basename $0): ${file}:1 [warn] shebang line missing." 1>&2
-
-         # Accumulate comment lines into `$docsbuf` and code lines into
-         # `$codebuf`. Only lines matching `/#(?: |$)/` are considered doc
-         # lines.
-         if expr "$line" : '# ' >/dev/null || test "$line" = "#"
-         then docsbuf="$docsbuf$line
- "
-         else codebuf="$codebuf$line
- "
-         fi
-
-         # If we have stuff in both `$docsbuf` and `$codebuf`, it means
-         # we're at some kind of boundary. If `$codehead` isn't set, we're at
-         # the first comment/doc line, so store the buffer to `$codehead` and
-         # keep going. If `$codehead` *is* set, we've crossed into another code
-         # block and are ready to output both blocks and then straight pipe
-         # everything by `exec`'ing `cat`.
-         if test -n "$docsbuf" -a -n "$codebuf"
-         then
-             if test -n "$codehead"
-             then docshead="$docsbuf"
-                  docsbuf=""
-                  printf "%s" "$docshead"
-                  printf "%s" "$codehead"
-                  echo "$line"
-                  exec cat
-             else codehead="$codebuf"
-                  codebuf=
-             fi
-         fi
-     done
-
-     # We made it to the end of the file without a single comment line, or
-     # there was only a single comment block ending the file. Output our
-     # docsbuf or a fake comment and then the codebuf or codehead.
-     echo "${docsbuf:-#}"
-     echo "${codebuf:-"$codehead"}"
- ) |
-
- # Remove comment leader text from all comment lines. Then prefix all
- # comment lines with "DOCS" and interpreted / code lines with "CODE".
- # The stream text might look like this after moving through the `sed`
- # filters:
- #
- #     CODE #!/bin/sh
- #     CODE #/ Usage: shocco <file>
- #     DOCS Docco for and in POSIX shell.
- #     CODE
- #     CODE PATH="/bin:/usr/bin"
- #     CODE
- #     DOCS Start by numbering all lines in the input file...
- #     ...
- #
- # Once we pass through `sed`, save this off in our work directory so
- # we can take a few passes over it.
- sed -n '
-     s/^/:/
-     s/^:[ ]\{0,\}# /DOCS /p
-     s/^:[ ]\{0,\}#$/DOCS /p
-     s/^:/CODE /p
- ' > "$WORK/raw"
-
- # Now that we've read and formatted our input file for further parsing,
- # change into the work directory. The program will finish up in there.
- cd "$WORK"
-
- # First Pass: Comment Formatting
- # ------------------------------
-
- # Start a pipeline going on our preformatted input.
- # Replace all CODE lines with entirely blank lines. We're not interested
- # in code right now, other than knowing where comments end and code begins
- # and where code ends and comments begin.
- sed 's/^CODE.*//' < raw |
-
- # Now squeeze multiple blank lines into a single blank line.
- #
- # __TODO:__ `cat -s` is not POSIX and doesn't squeeze lines on BSD. Use
- # the sed line squeezing code mentioned in the POSIX `cat(1)` manual page
- # instead.
- cat -s |
-
- # At this point in the pipeline, our stream text looks something like this:
- #
- #     DOCS Now that we've read and formatted ...
- #     DOCS change into the work directory. The rest ...
- #     DOCS in there.
- #
- #     DOCS First Pass: Comment Formatting
- #     DOCS ------------------------------
- #
- # Blank lines represent code segments. We want to replace all blank lines
- # with a dividing marker and remove the "DOCS" prefix from docs lines.
- sed '
-     s/^$/##### DIVIDER/
-     s/^DOCS //' |
-
- # The current stream text is suitable for input to `markdown(1)`. It takes
- # our doc text with embedded `DIVIDER`s and outputs HTML.
- $MARKDOWN |
-
- # Now this is where shit starts to get a little crazy. We use `csplit(1)` to
- # split the HTML into a bunch of individual files. The files are named
- # as `docs0000`, `docs0001`, `docs0002`, ... Each file includes a single
- # doc *section*. These files will sit here while we take a similar pass over
- # the source code.
- (
-     csplit -sk \
-            -f docs \
-            -n 4 \
-            - '/<h5>DIVIDER<\/h5>/' '{9999}' \
-            2>/dev/null ||
-     true
- )
-
-
- # Second Pass: Code Formatting
- # ----------------------------
- #
- # This is exactly like the first pass but we're focusing on code instead of
- # comments. We use the same basic technique to separate the two and isolate
- # the code blocks.
-
- # Get another pipeline going on our preformatted input file.
- # Replace DOCS lines with blank lines.
- sed 's/^DOCS.*//' < raw |
-
- # Squeeze multiple blank lines into a single blank line.
- cat -s |
-
- # Replace blank lines with a `DIVIDER` marker and remove prefix
- # from `CODE` lines.
- sed '
-     s/^$/# DIVIDER/
-     s/^CODE //' |
-
- # Now pass the code through `pygmentize` for syntax highlighting. We tell it
- # that the input is `sh` and that we want HTML output.
- $PYGMENTIZE -l sh -f html -O encoding=utf8 |
-
- # Post filter the pygments output to remove partial `<pre>` blocks. We add
- # these back in at each section when we build the output document.
- sed '
-     s/<div class="highlight"><pre>//
-     s/^<\/pre><\/div>//' |
-
- # Again with the `csplit(1)`. Each code section is written to a separate
- # file, this time with a `codeXXX` prefix. There should be the same number
- # of `codeXXX` files as there are `docsXXX` files.
- (
-     DIVIDER='/<span class="c"># DIVIDER</span>/'
-     csplit -sk \
-            -f code \
-            -n 4 - \
-            "$DIVIDER" '{9999}' \
-            2>/dev/null ||
-     true
- )
-
- # At this point, we have separate files for each docs section and separate
- # files for each code section.
-
- # HTML Template
- # -------------
-
- # Create a function for applying the standard [Docco][do] HTML layout, using
- # [jashkenas][ja]'s gorgeous CSS for styles. Wrapping the layout in a function
- # lets us apply it elsewhere simply by piping in a body.
- #
- # [ja]: http://github.com/jashkenas/
- # [do]: http://jashkenas.github.com/docco/
- layout () {
-     cat <<HTML
- <!DOCTYPE html>
- <html>
- <head>
-     <meta http-equiv='content-type' content='text/html;charset=utf-8'>
-     <title>$1</title>
-     <link rel=stylesheet href="http://jashkenas.github.com/docco/resources/docco.css">
- </head>
- <body>
- <div id=container>
-     <div id=background></div>
-     <table cellspacing=0 cellpadding=0>
-     <thead>
-       <tr>
-         <th class=docs><h1>$1</h1></th>
-         <th class=code></th>
-       </tr>
-     </thead>
-     <tbody>
-         <tr><td class='docs'>$(cat)</td><td class='code'></td></tr>
-     </tbody>
-     </table>
- </div>
- </body>
- </html>
- HTML
- }
-
- # Recombining
- # -----------
-
- # Alright, we have separate files for each docs section and separate
- # files for each code section. We've defined a function to wrap the
- # results in the standard layout. All that's left to do now is put
- # everything back together.
-
- # Before starting the pipeline, decide the order in which to present the
- # files. If `code0000` is empty, it should appear first so the remaining
- # files are presented `docs0000`, `code0001`, `docs0001`, and so on. If
- # `code0000` is not empty, `docs0000` should appear first so the files
- # are presented `docs0000`, `code0000`, `docs0001`, `code0001`, and so on.
- #
- # Ultimately, this means that if `code0000` is empty, the `-r` option
- # should not be provided with the final `-k` option group to `sort(1)` in
- # the pipeline below.
- if stat -c"%s" /dev/null >/dev/null 2>/dev/null ; then
-     # GNU stat
-     [ "$(stat -c"%s" "code0000")" = 0 ] && sortopt="" || sortopt="r"
- else
-     # BSD stat
-     [ "$(stat -f"%z" "code0000")" = 0 ] && sortopt="" || sortopt="r"
- fi
-
- # Start the pipeline with a simple list of split out temp filenames. One file
- # per line.
- ls -1 docs[0-9]* code[0-9]* 2>/dev/null |
-
- # Now sort the list of files by the *number* first and then by the type. The
- # list will look something like this when `sort(1)` is done with it:
- #
- #     docs0000
- #     code0000
- #     docs0001
- #     code0001
- #     docs0002
- #     code0002
- #     ...
- #
- sort -n -k"1.5" -k"1.1$sortopt" |
-
- # And if we pass those files to `cat(1)` in that order, it concatenates them
- # in exactly the way we need. `xargs(1)` reads from `stdin` and passes each
- # line of input as a separate argument to the program given.
- #
- # We could also have written this as:
- #
- #     cat $(ls -1 docs* code* | sort -n -k1.5 -k1.1r)
- #
- # I like to keep things to a simple flat pipeline when possible, hence the
- # `xargs` approach.
- xargs cat |
-
-
- # Run a quick substitution on the embedded dividers to turn them into table
- # rows and cells. This also wraps each code block in a `<div class=highlight>`
- # so that the CSS kicks in properly.
- {
-     DOCSDIVIDER='<h5>DIVIDER</h5>'
-     DOCSREPLACE='</pre></div></td></tr><tr><td class=docs>'
-     CODEDIVIDER='<span class="c"># DIVIDER</span>'
-     CODEREPLACE='</td><td class=code><div class=highlight><pre>'
-     sed "
-         s@${DOCSDIVIDER}@${DOCSREPLACE}@
-         s@${CODEDIVIDER}@${CODEREPLACE}@
-     "
- } |
-
- # Pipe our recombined HTML into the layout and let it write the result to
- # `stdout`.
- layout "$title"
-
- # More
- # ----
- #
- # **shocco** is the third tool in a growing family of quick-and-dirty,
- # literate-programming-style documentation generators:
- #
- # * [Docco][do] - The original. Written in CoffeeScript and generates
- #   documentation for CoffeeScript, JavaScript, and Ruby.
- # * [Rocco][ro] - A port of Docco to Ruby.
- #
- # If you like this sort of thing, you may also be interested in Knuth's
- # massive body of work on literate programming:
- #
- # * [Knuth: Literate Programming][kn]
- # * [Literate Programming on Wikipedia][wi]
- #
- # [ro]: http://rtomayko.github.com/rocco/
- # [do]: http://jashkenas.github.com/docco/
- # [kn]: http://www-cs-faculty.stanford.edu/~knuth/lp.html
- # [wi]: http://en.wikipedia.org/wiki/Literate_programming
-
- # Copyright (C) [Ryan Tomayko <tomayko.com/about>](http://tomayko.com/about)<br>
- # This is Free Software distributed under the MIT license.
- :
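
The `--help` trick used near the top of the deleted script (usage text kept in `#/` comments, extracted with `grep` and `cut`) works in any POSIX script. A minimal standalone sketch; the `demo` name and usage text are made-up examples, not part of the package:

```shell
#!/bin/sh
# Usage text lives in `#/` comments right in the script; on --help the
# script greps itself and strips the 3-character `#/ ` prefix.

#/ Usage: demo [<source>]
#/ Do nothing, demonstratively.

# Print the usage block and stop when --help appears anywhere in "$*".
expr -- "$*" : ".*--help" >/dev/null && {
    grep '^#/' <"$0" | cut -c4-
    exit 0
}
```

Because the usage message is read back out of the file itself, it can never drift out of sync with a separately maintained help string.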
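
The preformatting pass in the deleted script classifies every input line as documentation or code with one `sed -n` program: only `# ` (hash-space) or a bare `#` counts as docs. A self-contained sketch of just that classifier, wrapped in a hypothetical `classify` helper for illustration:

```shell
# Classify lines the way the deleted shocco.sh preformatter does.
# A `:` sentinel is prefixed so exactly one of the rules fires per line:
# hash-space and bare-hash lines become DOCS, everything else CODE.
classify() {
    sed -n '
        s/^/:/
        s/^:[ ]\{0,\}# /DOCS /p
        s/^:[ ]\{0,\}#$/DOCS /p
        s/^:/CODE /p
    '
}

# Shebang and plain code come out as CODE; the comment comes out as DOCS.
out=$(printf '%s\n' '#!/bin/sh' '# A doc line' 'echo hi' | classify)
```

Note that `#!/bin/sh` is classified as CODE, which is exactly why the script later flips the first code block ahead of the first docs block.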
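
The temp-directory setup in the Work and Cleanup section (prefer `mktemp(1)`, fall back to a pid-based name, refuse dangerous values, clean up via an `EXIT` trap) is a reusable pattern on its own. A sketch under the same assumptions, with `demo` as a hypothetical stand-in for the program name:

```shell
#!/bin/sh
set -e

# Default TMPDIR if it is unset or null, as the deleted script does.
: ${TMPDIR:=/tmp}

# Prefer mktemp(1); fall back to a naive (insecure) pid-based dir,
# since mktemp is not specified by POSIX.
WORK=$(
    if command -v mktemp >/dev/null 2>&1
    then mktemp -d "$TMPDIR/demo.XXXXXXXXXX"
    else dir="$TMPDIR/demo.$$" && mkdir "$dir" && echo "$dir"
    fi
)

# Never proceed with an empty or root work dir.
test -z "$WORK" -o "$WORK" = '/' && exit 1

# Remove everything on any exit short of SIGKILL.
trap 'rm -rf "$WORK"' 0
```

The `test … && exit 1` guard is safe under `set -e`: POSIX exempts all but the last command of an AND-OR list from `-e`, so a passing guard does not abort the script.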