@ijonis/geo-lint 0.1.2 → 0.1.3

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/CHANGELOG.md CHANGED
@@ -7,17 +7,30 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
 
  ## [Unreleased]
 
+ ## [0.1.2] - 2026-02-20
+
+ ### Fixed
+ - `--version` flag was hardcoded to `0.1.0` -- now reads dynamically from package.json
+ - `--rules` flag crashed without a config file -- now falls back to defaults so users can discover all 92 rules without project setup
+
+ ### Changed
+ - Polished README: 1008 lines → 259 lines with stronger GEO and agentic workflow framing
+ - Split reference documentation into 7 dedicated docs/ files (rules, GEO examples, configuration, custom adapters, API, agent integration)
+ - Added copy-paste agent prompts for Claude Code, Cursor, Windsurf, and Copilot
+ - Updated GitHub repo description, homepage, and topic tags
+
+ ## [0.1.1] - 2026-02-19
+
  ### Added
- - 28 new GEO rules across 4 categories (total: 35 GEO rules, 81 rules overall)
+ - 28 new GEO rules across 4 categories (total: 35 GEO rules, 92 rules overall)
  - **E-E-A-T (8 rules):** source citations, expert quotes, author validation, heading quality, FAQ quality, definition patterns, how-to steps, TL;DR detection
  - **Structure (7 rules):** section length, paragraph length, list presence, citation block bounds, orphaned intros, heading density, structural element ratio
  - **Freshness (7 rules):** stale year references, outdated content, passive voice, sentence length, internal links, comparison tables, inline HTML
  - **RAG Optimization (6 rules):** extraction triggers, section self-containment, vague openings, acronym expansion, statistic context, summary sections
+ - 14 content quality rules including readability analysis inspired by Yoast SEO: transition words, consecutive sentence starts, sentence length variety, vocabulary diversity, jargon density
  - `author` field support in ContentItem and MDX adapter
  - 6 new GeoConfig options: `fillerPhrases`, `extractionTriggers`, `acronymAllowlist`, `vagueHeadings`, `genericAuthorNames`, `allowedHtmlTags`
- - New utility module `geo-advanced-analyzer.ts` with 10 analysis functions
- - Extended `geo-analyzer.ts` with 6 new utility functions
- - Comprehensive tests for all 28 new rules (~120 tests)
+ - Comprehensive tests for all new rules (~120 tests)
 
  ## [0.1.0] - 2026-02-18
 
@@ -30,4 +43,6 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
  - MDX/Markdown content adapter with `gray-matter`
  - CLI with `--format=json`, `--rules`, `--root`, `--config` flags
 
+ [0.1.2]: https://github.com/IJONIS/geo-lint/releases/tag/v0.1.2
+ [0.1.1]: https://github.com/IJONIS/geo-lint/releases/tag/v0.1.1
  [0.1.0]: https://github.com/IJONIS/geo-lint/releases/tag/v0.1.0
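
The six GeoConfig options listed in the 0.1.1 entry might look roughly like this in a project config. This is a hypothetical sketch: the option names come from the changelog above, but the value shapes and the surrounding file structure are assumptions, not documented behavior.

```json
{
  "fillerPhrases": ["at the end of the day", "in today's fast-paced world"],
  "extractionTriggers": ["In short,", "The key takeaway:"],
  "acronymAllowlist": ["SEO", "GEO", "RAG"],
  "vagueHeadings": ["Introduction", "Overview", "Final Thoughts"],
  "genericAuthorNames": ["Admin", "Editorial Team"],
  "allowedHtmlTags": ["br", "sup"]
}
```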
package/README.md CHANGED
@@ -12,11 +12,13 @@
 
  ## Why this exists
 
- **GEO (Generative Engine Optimization)** is the practice of structuring content so AI search engines cite it -- ChatGPT, Perplexity, Google AI Overviews, Gemini. Traditional SEO gets you into search result lists. GEO gets you **cited in AI-generated answers**. They require different content patterns, and no existing open-source tool checks for GEO.
+ I run multiple content-heavy sites, and there was no deterministic way to validate whether my content was actually optimized -- not "probably fine," but actually checked against concrete rules. SEO linters exist, but they're either paid SaaS, not automatable, or completely ignore the structural patterns that AI search engines use when deciding what to cite.
 
- `@ijonis/geo-lint` is built for an agentic workflow: your AI agent runs the linter, reads the JSON violations, fixes the content, and re-lints until clean. Every rule ships with a machine-readable `suggestion` and `fixStrategy` that agents consume directly.
+ So I built one. **GEO (Generative Engine Optimization)** is the practice of structuring content so it gets cited by ChatGPT, Perplexity, Google AI Overviews, and Gemini. Traditional SEO gets you into search result lists. GEO gets you **cited in AI-generated answers**. Both matter -- and no existing open-source tool checks for GEO.
 
- **92 rules: 35 GEO, 32 SEO, 14 content quality, 8 technical, 3 i18n.** Readability analysis inspired by Yoast SEO. Zero open-source alternatives for the GEO checks.
+ The goal was simple: install one tool, point your AI agent at it, and walk away. The agent runs the linter, reads the JSON violations, fixes the content, and re-lints until clean -- across an entire site, with no manual input. One command, both SEO and GEO validated.
+
+ **92 rules: 35 GEO, 32 SEO, 14 content quality, 8 technical, 3 i18n.** Readability analysis inspired by Yoast SEO. I researched the current state of GEO and AEO to make sure the rules reflect what actually gets content cited -- not outdated advice.
 
  ---
 
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
  "name": "@ijonis/geo-lint",
- "version": "0.1.2",
+ "version": "0.1.3",
  "description": "SEO and GEO (Generative Engine Optimization) linter for Markdown/MDX content — the first open-source linter that checks your content for AI search visibility",
  "license": "MIT",
  "author": "IJONIS <hello@ijonis.com> (https://ijonis.com)",