@intlayer/docs 5.8.1 → 6.0.0-canary.1

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (378)
  1. package/blog/ar/rag_powered_documentation_assistant.md +282 -0
  2. package/blog/de/rag_powered_documentation_assistant.md +282 -0
  3. package/blog/en/rag_powered_documentation_assistant.md +289 -0
  4. package/blog/en-GB/rag_powered_documentation_assistant.md +284 -0
  5. package/blog/es/rag_powered_documentation_assistant.md +308 -0
  6. package/blog/fr/rag_powered_documentation_assistant.md +308 -0
  7. package/blog/hi/rag_powered_documentation_assistant.md +284 -0
  8. package/blog/it/rag_powered_documentation_assistant.md +284 -0
  9. package/blog/ja/rag_powered_documentation_assistant.md +284 -0
  10. package/blog/ko/rag_powered_documentation_assistant.md +283 -0
  11. package/blog/pt/rag_powered_documentation_assistant.md +284 -0
  12. package/blog/ru/rag_powered_documentation_assistant.md +284 -0
  13. package/blog/tr/index.md +69 -0
  14. package/blog/tr/internationalization_and_SEO.md +273 -0
  15. package/blog/tr/intlayer_with_i18next.md +162 -0
  16. package/blog/tr/intlayer_with_next-i18next.md +367 -0
  17. package/blog/tr/intlayer_with_next-intl.md +392 -0
  18. package/blog/tr/intlayer_with_react-i18next.md +346 -0
  19. package/blog/tr/intlayer_with_react-intl.md +345 -0
  20. package/blog/tr/list_i18n_technologies/CMS/drupal.md +143 -0
  21. package/blog/tr/list_i18n_technologies/CMS/wix.md +167 -0
  22. package/blog/tr/list_i18n_technologies/CMS/wordpress.md +188 -0
  23. package/blog/tr/list_i18n_technologies/frameworks/angular.md +125 -0
  24. package/blog/tr/list_i18n_technologies/frameworks/flutter.md +150 -0
  25. package/blog/tr/list_i18n_technologies/frameworks/react-native.md +217 -0
  26. package/blog/tr/list_i18n_technologies/frameworks/react.md +155 -0
  27. package/blog/tr/list_i18n_technologies/frameworks/svelte.md +129 -0
  28. package/blog/tr/list_i18n_technologies/frameworks/vue.md +130 -0
  29. package/blog/tr/next-i18next_vs_next-intl_vs_intlayer.md +170 -0
  30. package/blog/tr/rag_powered_documentation_assistant.md +284 -0
  31. package/blog/tr/react-i18next_vs_react-intl_vs_intlayer.md +162 -0
  32. package/blog/tr/vue-i18n_vs_intlayer.md +276 -0
  33. package/blog/tr/what_is_internationalization.md +166 -0
  34. package/blog/zh/rag_powered_documentation_assistant.md +284 -0
  35. package/dist/cjs/generated/blog.entry.cjs +212 -0
  36. package/dist/cjs/generated/blog.entry.cjs.map +1 -1
  37. package/dist/cjs/generated/docs.entry.cjs +660 -132
  38. package/dist/cjs/generated/docs.entry.cjs.map +1 -1
  39. package/dist/cjs/generated/frequentQuestions.entry.cjs +84 -0
  40. package/dist/cjs/generated/frequentQuestions.entry.cjs.map +1 -1
  41. package/dist/cjs/generated/legal.entry.cjs +6 -0
  42. package/dist/cjs/generated/legal.entry.cjs.map +1 -1
  43. package/dist/esm/generated/blog.entry.mjs +212 -0
  44. package/dist/esm/generated/blog.entry.mjs.map +1 -1
  45. package/dist/esm/generated/docs.entry.mjs +660 -132
  46. package/dist/esm/generated/docs.entry.mjs.map +1 -1
  47. package/dist/esm/generated/frequentQuestions.entry.mjs +84 -0
  48. package/dist/esm/generated/frequentQuestions.entry.mjs.map +1 -1
  49. package/dist/esm/generated/legal.entry.mjs +6 -0
  50. package/dist/esm/generated/legal.entry.mjs.map +1 -1
  51. package/dist/types/generated/blog.entry.d.ts +1 -0
  52. package/dist/types/generated/blog.entry.d.ts.map +1 -1
  53. package/dist/types/generated/docs.entry.d.ts +5 -2
  54. package/dist/types/generated/docs.entry.d.ts.map +1 -1
  55. package/dist/types/generated/frequentQuestions.entry.d.ts.map +1 -1
  56. package/dist/types/generated/legal.entry.d.ts.map +1 -1
  57. package/docs/ar/autoFill.md +41 -40
  58. package/docs/ar/configuration.md +202 -199
  59. package/docs/ar/dictionary/content_file.md +1059 -0
  60. package/docs/ar/intlayer_CMS.md +4 -4
  61. package/docs/ar/intlayer_with_nestjs.md +271 -0
  62. package/docs/ar/intlayer_with_nextjs_page_router.md +1 -1
  63. package/docs/ar/intlayer_with_react_router_v7.md +533 -0
  64. package/docs/ar/intlayer_with_tanstack.md +465 -299
  65. package/docs/ar/intlayer_with_vite+preact.md +7 -7
  66. package/docs/ar/intlayer_with_vite+react.md +7 -7
  67. package/docs/ar/intlayer_with_vite+vue.md +9 -9
  68. package/docs/ar/packages/vite-intlayer/index.md +3 -3
  69. package/docs/ar/readme.md +261 -0
  70. package/docs/ar/testing.md +199 -0
  71. package/docs/de/autoFill.md +42 -19
  72. package/docs/de/configuration.md +155 -147
  73. package/docs/de/dictionary/content_file.md +1059 -0
  74. package/docs/de/intlayer_CMS.md +4 -5
  75. package/docs/de/intlayer_with_nestjs.md +270 -0
  76. package/docs/de/intlayer_with_nextjs_page_router.md +1 -1
  77. package/docs/de/intlayer_with_react_router_v7.md +537 -0
  78. package/docs/de/intlayer_with_tanstack.md +469 -302
  79. package/docs/de/intlayer_with_vite+preact.md +7 -7
  80. package/docs/de/intlayer_with_vite+react.md +7 -7
  81. package/docs/de/intlayer_with_vite+vue.md +9 -9
  82. package/docs/de/packages/vite-intlayer/index.md +3 -3
  83. package/docs/de/readme.md +261 -0
  84. package/docs/de/testing.md +200 -0
  85. package/docs/en/CI_CD.md +4 -6
  86. package/docs/en/autoFill.md +25 -5
  87. package/docs/en/configuration.md +45 -54
  88. package/docs/en/dictionary/content_file.md +1054 -0
  89. package/docs/en/intlayer_CMS.md +8 -7
  90. package/docs/en/intlayer_cli.md +112 -5
  91. package/docs/en/intlayer_with_nestjs.md +268 -0
  92. package/docs/en/intlayer_with_nextjs_page_router.md +1 -1
  93. package/docs/en/intlayer_with_react_router_v7.md +531 -0
  94. package/docs/en/intlayer_with_tanstack.md +463 -294
  95. package/docs/en/intlayer_with_vite+preact.md +8 -8
  96. package/docs/en/intlayer_with_vite+react.md +8 -8
  97. package/docs/en/intlayer_with_vite+vue.md +8 -8
  98. package/docs/en/packages/intlayer/getLocalizedUrl.md +102 -25
  99. package/docs/en/packages/vite-intlayer/index.md +3 -3
  100. package/docs/en/readme.md +261 -0
  101. package/docs/en/testing.md +200 -0
  102. package/docs/en-GB/autoFill.md +29 -6
  103. package/docs/en-GB/configuration.md +79 -71
  104. package/docs/en-GB/dictionary/content_file.md +1084 -0
  105. package/docs/en-GB/intlayer_CMS.md +4 -5
  106. package/docs/en-GB/intlayer_with_nestjs.md +268 -0
  107. package/docs/en-GB/intlayer_with_nextjs_page_router.md +1 -1
  108. package/docs/en-GB/intlayer_with_react_router_v7.md +533 -0
  109. package/docs/en-GB/intlayer_with_tanstack.md +466 -299
  110. package/docs/en-GB/intlayer_with_vite+preact.md +7 -7
  111. package/docs/en-GB/intlayer_with_vite+react.md +7 -7
  112. package/docs/en-GB/intlayer_with_vite+vue.md +9 -9
  113. package/docs/en-GB/packages/vite-intlayer/index.md +3 -3
  114. package/docs/en-GB/readme.md +261 -0
  115. package/docs/en-GB/testing.md +200 -0
  116. package/docs/es/autoFill.md +45 -23
  117. package/docs/es/configuration.md +171 -167
  118. package/docs/es/dictionary/content_file.md +1088 -0
  119. package/docs/es/intlayer_CMS.md +4 -5
  120. package/docs/es/intlayer_with_nestjs.md +268 -0
  121. package/docs/es/intlayer_with_nextjs_page_router.md +1 -1
  122. package/docs/es/intlayer_with_react_router_v7.md +533 -0
  123. package/docs/es/intlayer_with_tanstack.md +469 -280
  124. package/docs/es/intlayer_with_vite+preact.md +7 -7
  125. package/docs/es/intlayer_with_vite+react.md +7 -7
  126. package/docs/es/intlayer_with_vite+vue.md +9 -9
  127. package/docs/es/packages/vite-intlayer/index.md +3 -3
  128. package/docs/es/readme.md +261 -0
  129. package/docs/es/testing.md +200 -0
  130. package/docs/fr/autoFill.md +47 -24
  131. package/docs/fr/configuration.md +213 -198
  132. package/docs/fr/dictionary/content_file.md +1054 -0
  133. package/docs/fr/intlayer_CMS.md +4 -5
  134. package/docs/fr/intlayer_with_nestjs.md +268 -0
  135. package/docs/fr/intlayer_with_nextjs_page_router.md +1 -1
  136. package/docs/fr/intlayer_with_react_router_v7.md +549 -0
  137. package/docs/fr/intlayer_with_tanstack.md +465 -279
  138. package/docs/fr/intlayer_with_vite+preact.md +7 -7
  139. package/docs/fr/intlayer_with_vite+react.md +7 -7
  140. package/docs/fr/intlayer_with_vite+vue.md +9 -9
  141. package/docs/fr/packages/vite-intlayer/index.md +3 -3
  142. package/docs/fr/readme.md +261 -0
  143. package/docs/fr/testing.md +200 -0
  144. package/docs/hi/autoFill.md +47 -25
  145. package/docs/hi/configuration.md +194 -189
  146. package/docs/hi/dictionary/content_file.md +1056 -0
  147. package/docs/hi/intlayer_CMS.md +4 -5
  148. package/docs/hi/intlayer_with_nestjs.md +269 -0
  149. package/docs/hi/intlayer_with_nextjs_page_router.md +1 -1
  150. package/docs/hi/intlayer_with_react_router_v7.md +533 -0
  151. package/docs/hi/intlayer_with_tanstack.md +467 -282
  152. package/docs/hi/intlayer_with_vite+preact.md +7 -7
  153. package/docs/hi/intlayer_with_vite+react.md +7 -7
  154. package/docs/hi/intlayer_with_vite+vue.md +9 -9
  155. package/docs/hi/packages/vite-intlayer/index.md +3 -3
  156. package/docs/hi/readme.md +261 -0
  157. package/docs/hi/testing.md +200 -0
  158. package/docs/it/autoFill.md +46 -24
  159. package/docs/it/configuration.md +169 -161
  160. package/docs/it/dictionary/content_file.md +1061 -0
  161. package/docs/it/intlayer_CMS.md +4 -5
  162. package/docs/it/intlayer_with_nestjs.md +268 -0
  163. package/docs/it/intlayer_with_nextjs_page_router.md +1 -1
  164. package/docs/it/intlayer_with_react_router_v7.md +535 -0
  165. package/docs/it/intlayer_with_tanstack.md +467 -301
  166. package/docs/it/intlayer_with_vite+preact.md +7 -7
  167. package/docs/it/intlayer_with_vite+react.md +7 -7
  168. package/docs/it/intlayer_with_vite+vue.md +9 -9
  169. package/docs/it/packages/vite-intlayer/index.md +3 -3
  170. package/docs/it/readme.md +261 -0
  171. package/docs/it/testing.md +200 -0
  172. package/docs/ja/autoFill.md +45 -23
  173. package/docs/ja/configuration.md +243 -204
  174. package/docs/ja/dictionary/content_file.md +1064 -0
  175. package/docs/ja/intlayer_CMS.md +4 -5
  176. package/docs/ja/intlayer_with_nestjs.md +268 -0
  177. package/docs/ja/intlayer_with_nextjs_page_router.md +1 -1
  178. package/docs/ja/intlayer_with_react_router_v7.md +534 -0
  179. package/docs/ja/intlayer_with_tanstack.md +467 -303
  180. package/docs/ja/intlayer_with_vite+preact.md +7 -7
  181. package/docs/ja/intlayer_with_vite+react.md +7 -7
  182. package/docs/ja/intlayer_with_vite+vue.md +9 -9
  183. package/docs/ja/packages/vite-intlayer/index.md +3 -3
  184. package/docs/ja/readme.md +263 -0
  185. package/docs/ja/testing.md +200 -0
  186. package/docs/ko/autoFill.md +39 -16
  187. package/docs/ko/configuration.md +217 -197
  188. package/docs/ko/dictionary/content_file.md +1060 -0
  189. package/docs/ko/intlayer_CMS.md +4 -5
  190. package/docs/ko/intlayer_with_nestjs.md +268 -0
  191. package/docs/ko/intlayer_with_nextjs_page_router.md +1 -1
  192. package/docs/ko/intlayer_with_react_router_v7.md +540 -0
  193. package/docs/ko/intlayer_with_tanstack.md +466 -302
  194. package/docs/ko/intlayer_with_vite+preact.md +7 -7
  195. package/docs/ko/intlayer_with_vite+react.md +7 -7
  196. package/docs/ko/intlayer_with_vite+vue.md +9 -9
  197. package/docs/ko/packages/vite-intlayer/index.md +3 -3
  198. package/docs/ko/readme.md +261 -0
  199. package/docs/ko/testing.md +200 -0
  200. package/docs/pt/autoFill.md +39 -15
  201. package/docs/pt/configuration.md +165 -147
  202. package/docs/pt/dictionary/content_file.md +1062 -0
  203. package/docs/pt/intlayer_CMS.md +4 -5
  204. package/docs/pt/intlayer_with_nestjs.md +271 -0
  205. package/docs/pt/intlayer_with_nextjs_page_router.md +1 -1
  206. package/docs/pt/intlayer_with_react_router_v7.md +535 -0
  207. package/docs/pt/intlayer_with_tanstack.md +469 -300
  208. package/docs/pt/intlayer_with_vite+preact.md +7 -7
  209. package/docs/pt/intlayer_with_vite+react.md +7 -7
  210. package/docs/pt/intlayer_with_vite+vue.md +9 -9
  211. package/docs/pt/packages/vite-intlayer/index.md +3 -3
  212. package/docs/pt/readme.md +261 -0
  213. package/docs/pt/testing.md +200 -0
  214. package/docs/ru/autoFill.md +52 -30
  215. package/docs/ru/configuration.md +164 -117
  216. package/docs/ru/dictionary/content_file.md +1064 -0
  217. package/docs/ru/intlayer_CMS.md +4 -4
  218. package/docs/ru/intlayer_with_nestjs.md +270 -0
  219. package/docs/ru/intlayer_with_nextjs_page_router.md +1 -1
  220. package/docs/ru/intlayer_with_react_router_v7.md +534 -0
  221. package/docs/ru/intlayer_with_tanstack.md +470 -305
  222. package/docs/ru/intlayer_with_vite+preact.md +7 -7
  223. package/docs/ru/intlayer_with_vite+react.md +7 -7
  224. package/docs/ru/intlayer_with_vite+vue.md +9 -9
  225. package/docs/ru/packages/vite-intlayer/index.md +3 -3
  226. package/docs/ru/readme.md +261 -0
  227. package/docs/ru/testing.md +202 -0
  228. package/docs/tr/CI_CD.md +198 -0
  229. package/docs/tr/autoFill.md +201 -0
  230. package/docs/tr/configuration.md +585 -0
  231. package/docs/tr/dictionary/condition.md +243 -0
  232. package/docs/tr/dictionary/content_file.md +1055 -0
  233. package/docs/tr/dictionary/enumeration.md +251 -0
  234. package/docs/tr/dictionary/file.md +228 -0
  235. package/docs/tr/dictionary/function_fetching.md +218 -0
  236. package/docs/tr/dictionary/gender.md +279 -0
  237. package/docs/tr/dictionary/insertion.md +191 -0
  238. package/docs/tr/dictionary/markdown.md +385 -0
  239. package/docs/tr/dictionary/nesting.md +279 -0
  240. package/docs/tr/dictionary/translation.md +315 -0
  241. package/docs/tr/formatters.md +618 -0
  242. package/docs/tr/how_works_intlayer.md +254 -0
  243. package/docs/tr/index.md +168 -0
  244. package/docs/tr/interest_of_intlayer.md +288 -0
  245. package/docs/tr/intlayer_CMS.md +347 -0
  246. package/docs/tr/intlayer_cli.md +570 -0
  247. package/docs/tr/intlayer_visual_editor.md +269 -0
  248. package/docs/tr/intlayer_with_angular.md +694 -0
  249. package/docs/tr/intlayer_with_create_react_app.md +1218 -0
  250. package/docs/tr/intlayer_with_express.md +415 -0
  251. package/docs/tr/intlayer_with_lynx+react.md +511 -0
  252. package/docs/tr/intlayer_with_nestjs.md +268 -0
  253. package/docs/tr/intlayer_with_nextjs_14.md +1029 -0
  254. package/docs/tr/intlayer_with_nextjs_15.md +1506 -0
  255. package/docs/tr/intlayer_with_nextjs_page_router.md +1484 -0
  256. package/docs/tr/intlayer_with_nuxt.md +773 -0
  257. package/docs/tr/intlayer_with_react_native+expo.md +660 -0
  258. package/docs/tr/intlayer_with_react_router_v7.md +531 -0
  259. package/docs/tr/intlayer_with_tanstack.md +452 -0
  260. package/docs/tr/intlayer_with_vite+preact.md +1673 -0
  261. package/docs/tr/intlayer_with_vite+react.md +1632 -0
  262. package/docs/tr/intlayer_with_vite+solid.md +288 -0
  263. package/docs/tr/intlayer_with_vite+svelte.md +288 -0
  264. package/docs/tr/intlayer_with_vite+vue.md +1042 -0
  265. package/docs/tr/introduction.md +209 -0
  266. package/docs/tr/locale_mapper.md +244 -0
  267. package/docs/tr/mcp_server.md +207 -0
  268. package/docs/tr/packages/@intlayer/api/index.md +58 -0
  269. package/docs/tr/packages/@intlayer/chokidar/index.md +57 -0
  270. package/docs/tr/packages/@intlayer/cli/index.md +47 -0
  271. package/docs/tr/packages/@intlayer/config/index.md +142 -0
  272. package/docs/tr/packages/@intlayer/core/index.md +51 -0
  273. package/docs/tr/packages/@intlayer/design-system/index.md +47 -0
  274. package/docs/tr/packages/@intlayer/dictionary-entry/index.md +53 -0
  275. package/docs/tr/packages/@intlayer/editor/index.md +47 -0
  276. package/docs/tr/packages/@intlayer/editor-react/index.md +47 -0
  277. package/docs/tr/packages/@intlayer/webpack/index.md +61 -0
  278. package/docs/tr/packages/angular-intlayer/index.md +59 -0
  279. package/docs/tr/packages/express-intlayer/index.md +258 -0
  280. package/docs/tr/packages/express-intlayer/t.md +459 -0
  281. package/docs/tr/packages/intlayer/getConfiguration.md +151 -0
  282. package/docs/tr/packages/intlayer/getEnumeration.md +165 -0
  283. package/docs/tr/packages/intlayer/getHTMLTextDir.md +127 -0
  284. package/docs/tr/packages/intlayer/getLocaleLang.md +87 -0
  285. package/docs/tr/packages/intlayer/getLocaleName.md +124 -0
  286. package/docs/tr/packages/intlayer/getLocalizedUrl.md +324 -0
  287. package/docs/tr/packages/intlayer/getMultilingualUrls.md +225 -0
  288. package/docs/tr/packages/intlayer/getPathWithoutLocale.md +81 -0
  289. package/docs/tr/packages/intlayer/getTranslation.md +196 -0
  290. package/docs/tr/packages/intlayer/getTranslationContent.md +195 -0
  291. package/docs/tr/packages/intlayer/index.md +505 -0
  292. package/docs/tr/packages/intlayer-cli/index.md +71 -0
  293. package/docs/tr/packages/intlayer-editor/index.md +139 -0
  294. package/docs/tr/packages/lynx-intlayer/index.md +85 -0
  295. package/docs/tr/packages/next-intlayer/index.md +154 -0
  296. package/docs/tr/packages/next-intlayer/t.md +354 -0
  297. package/docs/tr/packages/next-intlayer/useDictionary.md +270 -0
  298. package/docs/tr/packages/next-intlayer/useIntlayer.md +265 -0
  299. package/docs/tr/packages/next-intlayer/useLocale.md +133 -0
  300. package/docs/tr/packages/nuxt-intlayer/index.md +59 -0
  301. package/docs/tr/packages/preact-intlayer/index.md +55 -0
  302. package/docs/tr/packages/react-intlayer/index.md +148 -0
  303. package/docs/tr/packages/react-intlayer/t.md +304 -0
  304. package/docs/tr/packages/react-intlayer/useDictionary.md +554 -0
  305. package/docs/tr/packages/react-intlayer/useI18n.md +478 -0
  306. package/docs/tr/packages/react-intlayer/useIntlayer.md +253 -0
  307. package/docs/tr/packages/react-intlayer/useLocale.md +212 -0
  308. package/docs/tr/packages/react-native-intlayer/index.md +85 -0
  309. package/docs/tr/packages/react-scripts-intlayer/index.md +82 -0
  310. package/docs/tr/packages/solid-intlayer/index.md +56 -0
  311. package/docs/tr/packages/svelte-intlayer/index.md +55 -0
  312. package/docs/tr/packages/vite-intlayer/index.md +82 -0
  313. package/docs/tr/packages/vue-intlayer/index.md +59 -0
  314. package/docs/tr/per_locale_file.md +321 -0
  315. package/docs/tr/readme.md +261 -0
  316. package/docs/tr/roadmap.md +338 -0
  317. package/docs/tr/testing.md +200 -0
  318. package/docs/tr/vs_code_extension.md +154 -0
  319. package/docs/zh/autoFill.md +40 -18
  320. package/docs/zh/configuration.md +245 -226
  321. package/docs/zh/dictionary/content_file.md +1064 -0
  322. package/docs/zh/intlayer_CMS.md +4 -5
  323. package/docs/zh/intlayer_with_nestjs.md +268 -0
  324. package/docs/zh/intlayer_with_nextjs_page_router.md +1 -1
  325. package/docs/zh/intlayer_with_react_router_v7.md +535 -0
  326. package/docs/zh/intlayer_with_tanstack.md +468 -278
  327. package/docs/zh/intlayer_with_vite+preact.md +7 -7
  328. package/docs/zh/intlayer_with_vite+react.md +7 -7
  329. package/docs/zh/intlayer_with_vite+vue.md +7 -7
  330. package/docs/zh/packages/vite-intlayer/index.md +3 -3
  331. package/docs/zh/readme.md +261 -0
  332. package/docs/zh/testing.md +198 -0
  333. package/frequent_questions/tr/SSR_Next_no_[locale].md +105 -0
  334. package/frequent_questions/tr/array_as_content_declaration.md +72 -0
  335. package/frequent_questions/tr/build_dictionaries.md +59 -0
  336. package/frequent_questions/tr/build_error_CI_CD.md +75 -0
  337. package/frequent_questions/tr/customized_locale_list.md +65 -0
  338. package/frequent_questions/tr/domain_routing.md +114 -0
  339. package/frequent_questions/tr/esbuild_error.md +30 -0
  340. package/frequent_questions/tr/get_locale_cookie.md +142 -0
  341. package/frequent_questions/tr/intlayer_command_undefined.md +156 -0
  342. package/frequent_questions/tr/locale_incorect_in_url.md +74 -0
  343. package/frequent_questions/tr/static_rendering.md +45 -0
  344. package/frequent_questions/tr/translated_path_url.md +56 -0
  345. package/frequent_questions/tr/unknown_command.md +98 -0
  346. package/legal/tr/privacy_notice.md +83 -0
  347. package/legal/tr/terms_of_service.md +55 -0
  348. package/package.json +12 -12
  349. package/src/generated/blog.entry.ts +212 -0
  350. package/src/generated/docs.entry.ts +663 -135
  351. package/src/generated/frequentQuestions.entry.ts +85 -1
  352. package/src/generated/legal.entry.ts +7 -1
  353. package/docs/ar/dictionary/content_extention_customization.md +0 -100
  354. package/docs/ar/dictionary/get_started.md +0 -527
  355. package/docs/de/dictionary/content_extention_customization.md +0 -100
  356. package/docs/de/dictionary/get_started.md +0 -531
  357. package/docs/en/dictionary/content_extention_customization.md +0 -102
  358. package/docs/en/dictionary/get_started.md +0 -529
  359. package/docs/en-GB/dictionary/content_extention_customization.md +0 -100
  360. package/docs/en-GB/dictionary/get_started.md +0 -591
  361. package/docs/es/dictionary/content_extention_customization.md +0 -100
  362. package/docs/es/dictionary/get_started.md +0 -527
  363. package/docs/fr/dictionary/content_extention_customization.md +0 -100
  364. package/docs/fr/dictionary/get_started.md +0 -527
  365. package/docs/hi/dictionary/content_extention_customization.md +0 -100
  366. package/docs/hi/dictionary/get_started.md +0 -527
  367. package/docs/it/dictionary/content_extention_customization.md +0 -113
  368. package/docs/it/dictionary/get_started.md +0 -573
  369. package/docs/ja/dictionary/content_extention_customization.md +0 -113
  370. package/docs/ja/dictionary/get_started.md +0 -576
  371. package/docs/ko/dictionary/content_extention_customization.md +0 -100
  372. package/docs/ko/dictionary/get_started.md +0 -530
  373. package/docs/pt/dictionary/content_extention_customization.md +0 -100
  374. package/docs/pt/dictionary/get_started.md +0 -532
  375. package/docs/ru/dictionary/content_extention_customization.md +0 -100
  376. package/docs/ru/dictionary/get_started.md +0 -575
  377. package/docs/zh/dictionary/content_extention_customization.md +0 -117
  378. package/docs/zh/dictionary/get_started.md +0 -533
@@ -0,0 +1,289 @@
+ ---
+ createdAt: 2025-09-10
+ updatedAt: 2025-09-10
+ title: Building a RAG-Powered Documentation Assistant (Chunking, Embeddings, and Search)
+ description: Building a RAG-Powered Documentation Assistant (Chunking, Embeddings, and Search)
+ keywords:
+ - RAG
+ - Documentation
+ - Assistant
+ - Chunking
+ - Embeddings
+ - Search
+ slugs:
+ - blog
+ - rag-powered-documentation-assistant
+ ---
+
+ # Building a RAG-Powered Documentation Assistant (Chunking, Embeddings, and Search)
+
+ ## What you get
+
+ I built a RAG-powered documentation assistant and packaged it into a boilerplate you can use immediately.
+
+ - Comes with a ready-to-use application (Next.js + OpenAI API)
+ - Includes a working RAG pipeline (chunking, embeddings, cosine similarity)
+ - Provides a complete chatbot UI built in React
+ - All UI components are fully editable with Tailwind CSS
+ - Logs every user query to help identify missing docs, user pain points, and product opportunities
+
+ 👉 [Live demo](https://intlayer.org/doc/why) 👉 [Code boilerplate](https://github.com/aymericzip/smart_doc_RAG)
+
+ ## Introduction
+
+ If you’ve ever been lost in documentation, scrolling endlessly for one answer, you know how painful it can be. Docs are useful, but they’re static, and searching them often feels clunky.
+
+ That’s where **RAG (Retrieval-Augmented Generation)** comes in. Instead of forcing users to dig through text, we can combine **retrieval** (finding the right parts of the doc) with **generation** (letting an LLM explain it naturally).
+
+ In this post, I’ll walk you through how I built a RAG-powered documentation chatbot and how it doesn’t just help users find answers faster, but also gives product teams a new way to understand user pain points.
+
+ ## Why Use RAG for Documentation?
+
+ RAG has become a popular approach for a reason: it’s one of the most practical ways to make large language models actually useful.
+
+ For documentation, the benefits are clear:
+
+ - Instant answers: users ask in natural language and get relevant replies.
+ - Better context: the model only sees the most relevant doc sections, reducing hallucinations.
+ - Search that feels human: more like Algolia + FAQ + chatbot, rolled into one.
+ - Feedback loop: by storing queries, you uncover what users really struggle with.
+
+ That last point is crucial. A RAG system doesn’t just answer questions, it tells you what people are asking. That means:
+
+ - You discover missing info in your docs.
+ - You see feature requests emerging.
+ - You spot patterns that can even guide product strategy.
+
+ So, RAG isn’t just a support tool. It’s also a **product discovery engine**.
+
+ ## How the RAG Pipeline Works
+
+ ![RAG Pipeline](https://github.com/aymericzip/intlayer/blob/main/docs/assets/rag_flow.svg)
+
+ At a high level, here’s the recipe I used:
+
+ 1. **Chunking the documentation**: Large Markdown files are split into chunks. Chunking lets you provide as context only the relevant parts of the documentation.
+ 2. **Generating embeddings**: Each chunk is turned into a vector using OpenAI’s embedding API (text-embedding-3-large) or any comparable embedding model.
+ 3. **Indexing & storing**: Embeddings are stored in a simple JSON file (for my demo), but in production, you’d likely use a vector DB (Chroma, Qdrant, Pinecone).
+ 4. **Retrieval (R in RAG)**: A user query is embedded, cosine similarity is computed, and the top-matching chunks are retrieved.
+ 5. **Augmentation + Generation (AG in RAG)**: Those chunks are injected into the prompt for ChatGPT, so the model answers with actual doc context.
+ 6. **Logging queries for feedback**: Every user query is stored. This is gold for understanding pain points, missing docs, or new opportunities.
+
+ ## Step 1: Reading the Docs
+
+ The first step was simple: I needed a way to scan a docs/ folder for all .md files. Using Node.js and glob, I fetched the content of each Markdown file into memory.
+
+ This keeps the pipeline flexible: instead of Markdown, you could fetch docs from a database, a CMS, or even an API.
+
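+ A minimal sketch of that scan, assuming the `glob` package (v9+); the `loadDocs` helper and `DocFile` shape are illustrative names, not the boilerplate’s actual API:
+
+ ```ts
+ import { glob } from "glob";
+ import { readFile } from "node:fs/promises";
+
+ // Illustrative shape: one loaded doc = its path plus its raw Markdown.
+ type DocFile = { path: string; content: string };
+
+ // Scan the docs/ folder for every .md file and load it into memory.
+ const loadDocs = async (docsDir = "docs"): Promise<DocFile[]> => {
+   const paths = await glob(`${docsDir}/**/*.md`);
+   return Promise.all(
+     paths.map(async (path) => ({
+       path,
+       content: await readFile(path, "utf-8"),
+     }))
+   );
+ };
+ ```
+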
+ ## Step 2: Chunking the Documentation
+
+ Why chunk? Because language models have **context limits**. Feeding them an entire book of docs won’t work.
+
+ So the idea is to break text into manageable chunks (e.g. 500 tokens each) with overlap (e.g. 100 tokens). Overlap ensures continuity so you don’t lose meaning at chunk boundaries.
+
+ <p align="center">
+ <img width="480" alt="Reliable data source" src="https://github.com/user-attachments/assets/ee548851-7206-4cc6-821e-de8a4366c6a3" />
+ </p>
+
+ **Example:**
+
+ - Chunk 1 → “…the old library that many had forgotten. Its towering shelves were filled with books…”
+ - Chunk 2 → “…shelves were filled with books from every imaginable genre, each whispering stories…”
+
+ The overlap ensures both chunks contain shared context, so retrieval remains coherent.
+
+ This trade-off (chunk size vs overlap) is key for RAG efficiency:
+
+ - Too small → you get noise.
+ - Too large → you blow up context size.
+
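+ Here’s a minimal chunker illustrating that trade-off. It approximates tokens with words to stay dependency-free (swap in a real tokenizer such as tiktoken for exact token budgets); `chunkText` is an illustrative name:
+
+ ```ts
+ // Split text into overlapping windows: each chunk shares `overlap`
+ // words with the previous one, so meaning survives chunk boundaries.
+ const chunkText = (
+   text: string,
+   chunkSize = 500,
+   overlap = 100
+ ): string[] => {
+   const words = text.split(/\s+/);
+   const chunks: string[] = [];
+   // Advance by (chunkSize - overlap) so consecutive chunks overlap.
+   for (let start = 0; start < words.length; start += chunkSize - overlap) {
+     chunks.push(words.slice(start, start + chunkSize).join(" "));
+     if (start + chunkSize >= words.length) break;
+   }
+   return chunks;
+ };
+ ```
+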
+ ## Step 3: Generating Embeddings
+
+ Once the docs are chunked, we generate **embeddings** — high-dimensional vectors representing each chunk.
+
+ I used OpenAI’s text-embedding-3-large model, but you could use any modern embedding model.
+
+ **Example embedding:**
+
+ ```js
+ [
+   -0.0002630692, -0.029749284, 0.010225477, -0.009224428, -0.0065269712,
+   -0.002665544, 0.003214777, 0.04235309, -0.033162255, -0.00080789323,
+   //...+1533 elements
+ ];
+ ```
+
+ Each vector is a mathematical fingerprint of the text, enabling similarity search.
+
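+ Generating one of these vectors is a single API call. A sketch using the official OpenAI Node SDK (the `embedChunk` helper is an illustrative name):
+
+ ```ts
+ import OpenAI from "openai";
+
+ const openai = new OpenAI(); // reads OPENAI_API_KEY from the environment
+
+ // Turn one chunk of text into its embedding vector.
+ const embedChunk = async (chunk: string): Promise<number[]> => {
+   const response = await openai.embeddings.create({
+     model: "text-embedding-3-large",
+     input: chunk,
+   });
+   return response.data[0].embedding;
+ };
+ ```
+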
+ ## Step 4: Indexing & Storing Embeddings
+
+ To avoid regenerating embeddings multiple times, I stored them in embeddings.json.
+
+ In production, you’d likely want a vector database such as:
+
+ - Chroma
+ - Qdrant
+ - Pinecone
+ - FAISS, Weaviate, Milvus, etc.
+
+ Vector DBs handle indexing, scalability, and fast search. But for my prototype, a local JSON file worked fine.
+
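+ For a prototype, the “index” can be as simple as serializing every embedded chunk to disk. A sketch, where the `EmbeddedChunk` record is an illustrative shape rather than the boilerplate’s actual schema:
+
+ ```ts
+ import { readFile, writeFile } from "node:fs/promises";
+
+ // One record per chunk: enough metadata to cite the source doc later.
+ type EmbeddedChunk = {
+   docName: string;
+   chunkIndex: number;
+   text: string;
+   embedding: number[];
+ };
+
+ const INDEX_PATH = "embeddings.json";
+
+ const saveIndex = (index: EmbeddedChunk[]) =>
+   writeFile(INDEX_PATH, JSON.stringify(index));
+
+ const loadIndex = async (): Promise<EmbeddedChunk[]> =>
+   JSON.parse(await readFile(INDEX_PATH, "utf-8"));
+ ```
+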
+ ## Step 5: Retrieval with Cosine Similarity
+
+ When a user asks a question:
+
+ 1. Generate an embedding for the query.
+ 2. Compare it to all doc embeddings using **cosine similarity**.
+ 3. Keep only the top N most similar chunks.
+
+ Cosine similarity measures the angle between two vectors. A perfect match scores **1.0**.
+
+ This way, the system finds the closest doc passages to the query.
+
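+ Cosine similarity itself is a few lines of arithmetic. A sketch of the retrieval step, reusing the `EmbeddedChunk` shape from the indexing sketch:
+
+ ```ts
+ // dot(a, b) / (|a| · |b|): 1.0 means the vectors point the same way.
+ const cosineSimilarity = (a: number[], b: number[]): number => {
+   let dot = 0, normA = 0, normB = 0;
+   for (let i = 0; i < a.length; i++) {
+     dot += a[i] * b[i];
+     normA += a[i] * a[i];
+     normB += b[i] * b[i];
+   }
+   return dot / (Math.sqrt(normA) * Math.sqrt(normB));
+ };
+
+ // Score every stored chunk against the query embedding; keep the top N.
+ const topChunks = (query: number[], index: EmbeddedChunk[], topN = 5) =>
+   index
+     .map((chunk) => ({ chunk, score: cosineSimilarity(query, chunk.embedding) }))
+     .sort((a, b) => b.score - a.score)
+     .slice(0, topN);
+ ```
+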
+ ## Step 6: Augmentation + Generation
+
+ Now comes the magic. We take the top chunks and inject them into the **system prompt** for ChatGPT.
+
+ That means the model answers as if those chunks were part of the conversation.
+
+ The result: accurate, **doc-grounded responses**.
+
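+ Concretely, that injection is just string assembly before the chat call. A sketch reusing the `openai` client and `EmbeddedChunk` shape from the earlier sketches (the model name is a stand-in; the real prompt format is shown later in this post):
+
+ ```ts
+ // Build a system prompt from the retrieved chunks, then ask the model.
+ const answerQuery = async (query: string, chunks: EmbeddedChunk[]) => {
+   const context = chunks
+     .map((c) => `-----\ndocName: "${c.docName}"\n---\n\n${c.text}`)
+     .join("\n\n");
+
+   const completion = await openai.chat.completions.create({
+     model: "gpt-4o", // stand-in model name
+     messages: [
+       {
+         role: "system",
+         content: `You are a helpful assistant that can answer questions about the Intlayer documentation.\n\nRelated chunks :\n\n${context}`,
+       },
+       { role: "user", content: query },
+     ],
+   });
+
+   return completion.choices[0].message.content;
+ };
+ ```
+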
+ ## Step 7: Logging User Queries
+
+ This is the hidden superpower.
+
+ Every question asked is stored. Over time, you build a dataset of:
+
+ - Most frequent questions (great for FAQs)
+ - Unanswered questions (docs are missing or unclear)
+ - Feature requests disguised as questions (“Does it integrate with X?”)
+ - Emerging use cases you hadn’t planned for
+
+ This turns your RAG assistant into a **continuous user research tool**.
+
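+ Logging can start out as simple as appending one JSON line per question; a database comes later, once you want dashboards. A sketch with an illustrative `logQuery` helper:
+
+ ```ts
+ import { appendFile } from "node:fs/promises";
+
+ // One JSON line per query: easy to grep, easy to load into analytics later.
+ const logQuery = (query: string, matchedDocs: string[]) =>
+   appendFile(
+     "queries.log",
+     JSON.stringify({ query, matchedDocs, at: new Date().toISOString() }) + "\n"
+   );
+ ```
+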
+ ## What Does It Cost?
+
+ One common objection to RAG is cost. In practice, it’s surprisingly cheap:
+
+ - Generating embeddings for ~200 docs takes about **5 minutes** and costs **1–2 euros**.
+ - The doc search feature is 100% free.
+ - For queries, we use gpt-4o-latest without “thinking” mode. On Intlayer, we see around **300 chat queries per month**, and the OpenAI API bill rarely exceeds **$10**.
+
+ On top of that, you can add the hosting cost.
+
+ ## Implementation Details
+
+ Stack:
+
+ - Monorepo: pnpm workspace
+ - Doc package: Node.js / TypeScript / OpenAI API
+ - Frontend: Next.js / React / Tailwind CSS
+ - Backend: Node.js API route / OpenAI API
+
+ The `@smart-doc/docs` package handles documentation processing. It includes a `build` script that, whenever a Markdown file is added or modified, rebuilds the documentation list in each language, generates embeddings, and stores them in an `embeddings.json` file.
+
+ For the frontend, we use a Next.js application that provides:
+
+ - Markdown to HTML rendering
+ - A search bar to find relevant documentation
+ - A chatbot interface for asking questions about the docs
+
+ To perform a documentation search, the Next.js application includes an API route that calls a function in the `@smart-doc/docs` package to retrieve doc chunks matching the query. Using these chunks, we can return a list of documentation pages relevant to the user's search.
+
+ For the chatbot functionality, we follow the same search process but additionally inject the retrieved doc chunks into the prompt sent to ChatGPT.
+
+ Here's an example of a prompt sent to ChatGPT:
+
+ System prompt:
+
+ ```txt
+ You are a helpful assistant that can answer questions about the Intlayer documentation.
+
+ Related chunks :
+
+ -----
+ docName: "getting-started"
+ docChunk: "1/3"
+ docUrl: "https://example.com/docs/en/getting-started"
+ ---
+
+ # How to get started
+
+ ...
+
+ -----
+ docName: "another-doc"
+ docChunk: "1/5"
+ docUrl: "https://example.com/docs/en/another-doc"
+ ---
+
+ # Another doc
+
+ ...
+ ```
+
+ User query:
+
+ ```txt
+ How to get started?
+ ```
+
+ We use SSE (Server-Sent Events) to stream the response from the API route.
+
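+ As a sketch, a Next.js App Router route can forward the OpenAI stream as SSE like this (the file path and model name are illustrative, not the boilerplate’s exact code):
+
+ ```ts
+ // app/api/chat/route.ts — stream the model's reply as Server-Sent Events.
+ import OpenAI from "openai";
+
+ const openai = new OpenAI();
+
+ export async function POST(request: Request) {
+   const { prompt, query } = await request.json();
+
+   const stream = await openai.chat.completions.create({
+     model: "gpt-4o", // stand-in model name
+     stream: true,
+     messages: [
+       { role: "system", content: prompt },
+       { role: "user", content: query },
+     ],
+   });
+
+   const encoder = new TextEncoder();
+   const body = new ReadableStream({
+     async start(controller) {
+       // Forward each token as one SSE "data:" event.
+       for await (const part of stream) {
+         const token = part.choices[0]?.delta?.content ?? "";
+         controller.enqueue(encoder.encode(`data: ${JSON.stringify(token)}\n\n`));
+       }
+       controller.close();
+     },
+   });
+
+   return new Response(body, {
+     headers: { "Content-Type": "text/event-stream" },
+   });
+ }
+ ```
+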
+ As mentioned, we use gpt-4o-latest without "thinking" mode. Responses are relevant, and latency is low.
+ We experimented with gpt-5, but latency was too high (sometimes up to 15 seconds for a reply). We’ll revisit it in the future.
+
+ 👉 [Try the demo here](https://intlayer.org/doc/why) 👉 [Check the code template on GitHub](https://github.com/aymericzip/smart_doc_RAG)
+
+ ## Going Further
+
+ This project is a minimal implementation. But you can extend it in many ways:
+
+ - MCP server → expose the doc retrieval function through an MCP server to connect the documentation to any AI assistant
+ - Vector DBs → scale to millions of doc chunks
+ - LangChain / LlamaIndex → ready-made frameworks for RAG pipelines
+ - Analytics dashboards → visualize user queries and pain points
+ - Multi-source retrieval → pull not just docs, but database entries, blog posts, tickets, etc.
+ - Improved prompting → reranking, filtering, and hybrid search (keyword + semantic)
+
+ ## Limitations We Hit
+
+ - Chunking and overlap are empirical. The right balance (chunk size, overlap percentage, number of retrieved chunks) requires iteration and testing.
+ - Embeddings are not auto-regenerated when docs change. Our system resets embeddings for a file only if the number of chunks differs from what’s stored.
+ - In this prototype, embeddings are stored in JSON. This works for demos but pollutes Git. In production, a database or dedicated vector store is better.
+
+ ## Why This Matters Beyond Docs
+
+ The interesting part is not just the chatbot. It’s the **feedback loop**.
+
+ With RAG, you don’t just answer:
+
+ - You learn what confuses users.
+ - You discover which features they expect.
+ - You adapt your product strategy based on real queries.
+
+ **Example:**
+
+ Imagine launching a new feature and instantly seeing:
+
+ - 50% of questions are about the same unclear setup step
+ - Users repeatedly ask for an integration you don’t support yet
+ - People search for terms that reveal a new use case
+
+ That’s **product intelligence** straight from your users.
+
+ ## Conclusion
+
+ RAG is one of the simplest, most powerful ways to make LLMs practical. By combining **retrieval + generation**, you can turn static docs into a **smart assistant** and, at the same time, gain a continuous stream of product insights.
+
+ For me, this project showed that RAG isn’t just a technical trick. It’s a way to transform documentation into:
+
+ - an interactive support system
+ - a feedback channel
+ - a product strategy tool
+
+ 👉 [Try the demo here](https://intlayer.org/doc/why) 👉 [Check the code template on GitHub](https://github.com/aymericzip/smart_doc_RAG)
+
+ And if you’re experimenting with RAG too, I’d love to hear how you’re using it.