logdetective 2.2.1.tar.gz → 2.3.0.tar.gz

This diff shows the changes between publicly released package versions as they appear in their respective public registries. It is provided for informational purposes only.
Files changed (40)
  1. {logdetective-2.2.1 → logdetective-2.3.0}/PKG-INFO +14 -15
  2. {logdetective-2.2.1 → logdetective-2.3.0}/README.md +10 -13
  3. {logdetective-2.2.1 → logdetective-2.3.0}/logdetective/extractors.py +3 -0
  4. {logdetective-2.2.1 → logdetective-2.3.0}/logdetective/server/gitlab.py +12 -5
  5. {logdetective-2.2.1 → logdetective-2.3.0}/logdetective/server/models.py +1 -0
  6. logdetective-2.3.0/logdetective.1.asciidoc +96 -0
  7. {logdetective-2.2.1 → logdetective-2.3.0}/pyproject.toml +1 -1
  8. logdetective-2.2.1/logdetective.1.asciidoc +0 -85
  9. {logdetective-2.2.1 → logdetective-2.3.0}/LICENSE +0 -0
  10. {logdetective-2.2.1 → logdetective-2.3.0}/logdetective/__init__.py +0 -0
  11. {logdetective-2.2.1 → logdetective-2.3.0}/logdetective/constants.py +0 -0
  12. {logdetective-2.2.1 → logdetective-2.3.0}/logdetective/drain3.ini +0 -0
  13. {logdetective-2.2.1 → logdetective-2.3.0}/logdetective/logdetective.py +0 -0
  14. {logdetective-2.2.1 → logdetective-2.3.0}/logdetective/models.py +0 -0
  15. {logdetective-2.2.1 → logdetective-2.3.0}/logdetective/prompts-summary-first.yml +0 -0
  16. {logdetective-2.2.1 → logdetective-2.3.0}/logdetective/prompts-summary-only.yml +0 -0
  17. {logdetective-2.2.1 → logdetective-2.3.0}/logdetective/prompts.yml +0 -0
  18. {logdetective-2.2.1 → logdetective-2.3.0}/logdetective/remote_log.py +0 -0
  19. {logdetective-2.2.1 → logdetective-2.3.0}/logdetective/server/__init__.py +0 -0
  20. {logdetective-2.2.1 → logdetective-2.3.0}/logdetective/server/compressors.py +0 -0
  21. {logdetective-2.2.1 → logdetective-2.3.0}/logdetective/server/config.py +0 -0
  22. {logdetective-2.2.1 → logdetective-2.3.0}/logdetective/server/database/__init__.py +0 -0
  23. {logdetective-2.2.1 → logdetective-2.3.0}/logdetective/server/database/base.py +0 -0
  24. {logdetective-2.2.1 → logdetective-2.3.0}/logdetective/server/database/models/__init__.py +0 -0
  25. {logdetective-2.2.1 → logdetective-2.3.0}/logdetective/server/database/models/exceptions.py +0 -0
  26. {logdetective-2.2.1 → logdetective-2.3.0}/logdetective/server/database/models/koji.py +0 -0
  27. {logdetective-2.2.1 → logdetective-2.3.0}/logdetective/server/database/models/merge_request_jobs.py +0 -0
  28. {logdetective-2.2.1 → logdetective-2.3.0}/logdetective/server/database/models/metrics.py +0 -0
  29. {logdetective-2.2.1 → logdetective-2.3.0}/logdetective/server/emoji.py +0 -0
  30. {logdetective-2.2.1 → logdetective-2.3.0}/logdetective/server/exceptions.py +0 -0
  31. {logdetective-2.2.1 → logdetective-2.3.0}/logdetective/server/koji.py +0 -0
  32. {logdetective-2.2.1 → logdetective-2.3.0}/logdetective/server/llm.py +0 -0
  33. {logdetective-2.2.1 → logdetective-2.3.0}/logdetective/server/metric.py +0 -0
  34. {logdetective-2.2.1 → logdetective-2.3.0}/logdetective/server/plot.py +0 -0
  35. {logdetective-2.2.1 → logdetective-2.3.0}/logdetective/server/server.py +0 -0
  36. {logdetective-2.2.1 → logdetective-2.3.0}/logdetective/server/templates/gitlab_full_comment.md.j2 +0 -0
  37. {logdetective-2.2.1 → logdetective-2.3.0}/logdetective/server/templates/gitlab_short_comment.md.j2 +0 -0
  38. {logdetective-2.2.1 → logdetective-2.3.0}/logdetective/server/utils.py +0 -0
  39. {logdetective-2.2.1 → logdetective-2.3.0}/logdetective/skip_snippets.yml +0 -0
  40. {logdetective-2.2.1 → logdetective-2.3.0}/logdetective/utils.py +0 -0
@@ -1,8 +1,9 @@
- Metadata-Version: 2.3
+ Metadata-Version: 2.4
  Name: logdetective
- Version: 2.2.1
+ Version: 2.3.0
  Summary: Log using LLM AI to search for build/test failures and provide ideas for fixing these.
  License: Apache-2.0
+ License-File: LICENSE
  Author: Jiri Podivin
  Author-email: jpodivin@gmail.com
  Requires-Python: >=3.11,<4.0
@@ -15,6 +16,7 @@ Classifier: Programming Language :: Python :: 3
  Classifier: Programming Language :: Python :: 3.11
  Classifier: Programming Language :: Python :: 3.12
  Classifier: Programming Language :: Python :: 3.13
+ Classifier: Programming Language :: Python :: 3.14
  Classifier: Topic :: Internet :: Log Analysis
  Classifier: Topic :: Scientific/Engineering :: Artificial Intelligence
  Classifier: Topic :: Software Development :: Debuggers
@@ -93,12 +95,13 @@ Usage
  -----

  To analyze a log file, run the script with the following command line arguments:
- - `url` (required): The URL of the log file to be analyzed.
- - `--model` (optional, default: "Mistral-7B-Instruct-v0.2-GGUF"): The path or URL of the language model for analysis. As we are using LLama.cpp we want this to be in the `gguf` format. You can include the download link to the model here. If the model is already on your machine it will skip the download.
+ - `file` (required): The path or URL of the log file to be analyzed.
+ - `--model` (optional, default: "Mistral-7B-Instruct-v0.3-GGUF"): The path or Hugging space name of the language model for analysis. For models from Hugging Face, write them as `namespace/repo_name`. As we are using LLama.cpp we want this to be in the `gguf` format. If the model is already on your machine it will skip the download.
+ - `--filename_suffix` (optional, default "Q4_K.gguf"): You can specify which suffix of the file to use. This option is applied when specifying model using the Hugging Face repository.
  - `--summarizer` DISABLED: LLM summarization option was removed. Argument is kept for backward compatibility only.(optional, default: "drain"): Choose between LLM and Drain template miner as the log summarizer. You can also provide the path to an existing language model file instead of using a URL.
  - `--n_lines` DISABLED: LLM summarization option was removed. Argument is kept for backward compatibility only. (optional, default: 8): The number of lines per chunk for LLM analysis. This only makes sense when you are summarizing with LLM.
- - `--n_clusters` (optional, default 8): Number of clusters for Drain to organize log chunks into. This only makes sense when you are summarizing with Drain
- - `--skip_snippets` Path to patterns for skipping snippets.
+ - `--n_clusters` (optional, default 8): Number of clusters for Drain to organize log chunks into. This only makes sense when you are summarizing with Drain.
+ - `--skip_snippets` Path to patterns for skipping snippets (in YAML).

  Example usage:

@@ -108,14 +111,10 @@ Or if the log file is stored locally:

  logdetective ./data/logs.txt

- Example you want to use a different model:
+ Examples of using different models. Note the use of `--filename_suffix` (or `-F`) option, useful for models that were quantized:

- logdetective https://example.com/logs.txt --model https://huggingface.co/QuantFactory/Meta-Llama-3-8B-Instruct-GGUF/resolve/main/Meta-Llama-3-8B-Instruct.Q5_K_S.gguf?download=true
- logdetective https://example.com/logs.txt --model QuantFactory/Meta-Llama-3-8B-Instruct-GGUF
-
- Example of different suffix (useful for models that were quantized)
-
- logdetective https://kojipkgs.fedoraproject.org//work/tasks/3367/131313367/build.log --model 'fedora-copr/granite-3.2-8b-instruct-GGUF' -F Q4_K.gguf
+ logdetective https://example.com/logs.txt --model QuantFactory/Meta-Llama-3-8B-Instruct-GGUF --filename_suffix Q5_K_S.gguf
+ logdetective https://kojipkgs.fedoraproject.org//work/tasks/3367/131313367/build.log --model 'fedora-copr/granite-3.2-8b-instruct-GGUF' -F Q4_K_M.gguf

  Example of altered prompts:

@@ -124,9 +123,9 @@ Example of altered prompts:
  logdetective https://kojipkgs.fedoraproject.org//work/tasks/3367/131313367/build.log --prompts ~/my-prompts.yml


- Note that streaming with some models (notably Meta-Llama-3 is broken) is broken and can be worked around by `no-stream` option:
+ Note that streaming with some models (notably Meta-Llama-3) is broken and can be worked around by `no-stream` option:

- logdetective https://example.com/logs.txt --model QuantFactory/Meta-Llama-3-8B-Instruct-GGUF --no-stream
+ logdetective https://example.com/logs.txt --model QuantFactory/Meta-Llama-3-8B-Instruct-GGUF --filename_suffix Q5_K_M.gguf --no-stream


  Real Example
@@ -41,12 +41,13 @@ Usage
  -----

  To analyze a log file, run the script with the following command line arguments:
- - `url` (required): The URL of the log file to be analyzed.
- - `--model` (optional, default: "Mistral-7B-Instruct-v0.2-GGUF"): The path or URL of the language model for analysis. As we are using LLama.cpp we want this to be in the `gguf` format. You can include the download link to the model here. If the model is already on your machine it will skip the download.
+ - `file` (required): The path or URL of the log file to be analyzed.
+ - `--model` (optional, default: "Mistral-7B-Instruct-v0.3-GGUF"): The path or Hugging space name of the language model for analysis. For models from Hugging Face, write them as `namespace/repo_name`. As we are using LLama.cpp we want this to be in the `gguf` format. If the model is already on your machine it will skip the download.
+ - `--filename_suffix` (optional, default "Q4_K.gguf"): You can specify which suffix of the file to use. This option is applied when specifying model using the Hugging Face repository.
  - `--summarizer` DISABLED: LLM summarization option was removed. Argument is kept for backward compatibility only.(optional, default: "drain"): Choose between LLM and Drain template miner as the log summarizer. You can also provide the path to an existing language model file instead of using a URL.
  - `--n_lines` DISABLED: LLM summarization option was removed. Argument is kept for backward compatibility only. (optional, default: 8): The number of lines per chunk for LLM analysis. This only makes sense when you are summarizing with LLM.
- - `--n_clusters` (optional, default 8): Number of clusters for Drain to organize log chunks into. This only makes sense when you are summarizing with Drain
- - `--skip_snippets` Path to patterns for skipping snippets.
+ - `--n_clusters` (optional, default 8): Number of clusters for Drain to organize log chunks into. This only makes sense when you are summarizing with Drain.
+ - `--skip_snippets` Path to patterns for skipping snippets (in YAML).

  Example usage:

@@ -56,14 +57,10 @@ Or if the log file is stored locally:

  logdetective ./data/logs.txt

- Example you want to use a different model:
+ Examples of using different models. Note the use of `--filename_suffix` (or `-F`) option, useful for models that were quantized:

- logdetective https://example.com/logs.txt --model https://huggingface.co/QuantFactory/Meta-Llama-3-8B-Instruct-GGUF/resolve/main/Meta-Llama-3-8B-Instruct.Q5_K_S.gguf?download=true
- logdetective https://example.com/logs.txt --model QuantFactory/Meta-Llama-3-8B-Instruct-GGUF
-
- Example of different suffix (useful for models that were quantized)
-
- logdetective https://kojipkgs.fedoraproject.org//work/tasks/3367/131313367/build.log --model 'fedora-copr/granite-3.2-8b-instruct-GGUF' -F Q4_K.gguf
+ logdetective https://example.com/logs.txt --model QuantFactory/Meta-Llama-3-8B-Instruct-GGUF --filename_suffix Q5_K_S.gguf
+ logdetective https://kojipkgs.fedoraproject.org//work/tasks/3367/131313367/build.log --model 'fedora-copr/granite-3.2-8b-instruct-GGUF' -F Q4_K_M.gguf

  Example of altered prompts:

@@ -72,9 +69,9 @@ Example of altered prompts:
  logdetective https://kojipkgs.fedoraproject.org//work/tasks/3367/131313367/build.log --prompts ~/my-prompts.yml


- Note that streaming with some models (notably Meta-Llama-3 is broken) is broken and can be worked around by `no-stream` option:
+ Note that streaming with some models (notably Meta-Llama-3) is broken and can be worked around by `no-stream` option:

- logdetective https://example.com/logs.txt --model QuantFactory/Meta-Llama-3-8B-Instruct-GGUF --no-stream
+ logdetective https://example.com/logs.txt --model QuantFactory/Meta-Llama-3-8B-Instruct-GGUF --filename_suffix Q5_K_M.gguf --no-stream


  Real Example
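
Both copies of the usage text above describe the new `namespace/repo_name` plus `--filename_suffix` scheme for fetching models. Since logdetective runs on llama.cpp, this plausibly maps onto llama-cpp-python's Hugging Face helper; a minimal sketch under that assumption (the `from_pretrained` call is real llama-cpp-python API, but its use inside logdetective is not shown in this diff):

    from llama_cpp import Llama

    # Hypothetical illustration: pick the quantized file in the repo whose
    # name matches the requested suffix; files already in the local Hugging
    # Face cache are reused instead of being downloaded again.
    llm = Llama.from_pretrained(
        repo_id="QuantFactory/Meta-Llama-3-8B-Instruct-GGUF",
        filename="*Q5_K_S.gguf",  # glob built from --filename_suffix
    )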
@@ -26,6 +26,9 @@ class Extractor:
          self.skip_snippets = skip_snippets
          self.max_snippet_len = max_snippet_len

+         if self.verbose:
+             LOG.setLevel(logging.DEBUG)
+
      def __call__(self, log: str) -> list[Tuple[int, str]]:
          raise NotImplementedError

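The `extractors.py` change raises the module logger to DEBUG whenever an extractor is constructed with `verbose` set. The new man page below mentions repeated `-v` flags (`-vv`, `-vvv`); a sketch of the conventional count-to-level mapping, illustrative only and not taken from this diff:

    import logging

    LOG = logging.getLogger("logdetective")

    def apply_verbosity(verbose: int) -> None:
        # 0 keeps the default threshold; each extra -v lowers it.
        levels = {0: logging.WARNING, 1: logging.INFO}
        LOG.setLevel(levels.get(verbose, logging.DEBUG))
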
@@ -11,9 +11,13 @@ import gitlab.v4
  import gitlab.v4.objects
  import jinja2
  import aiohttp
+ import backoff

  from logdetective.server.config import SERVER_CONFIG, LOG
- from logdetective.server.exceptions import LogsTooLargeError
+ from logdetective.server.exceptions import (
+     LogsTooLargeError,
+     LogDetectiveConnectionError,
+ )
  from logdetective.server.llm import perform_staged_analysis
  from logdetective.server.metric import add_new_metrics, update_metrics
  from logdetective.server.models import (
@@ -29,6 +33,7 @@ from logdetective.server.database.models import (
      GitlabMergeRequestJobs,
  )
  from logdetective.server.compressors import RemoteLogCompressor
+ from logdetective.server.utils import connection_error_giveup

  MR_REGEX = re.compile(r"refs/merge-requests/(\d+)/.*$")
  FAILURE_LOG_REGEX = re.compile(r"(\w*\.log)")
@@ -91,8 +96,8 @@ async def process_gitlab_job_event(
          log_url, preprocessed_log = await retrieve_and_preprocess_koji_logs(
              gitlab_cfg, job
          )
-     except LogsTooLargeError:
-         LOG.error("Could not retrieve logs. Too large.")
+     except (LogsTooLargeError, LogDetectiveConnectionError) as ex:
+         LOG.error("Could not retrieve logs due to %s", ex)
          raise

      # Submit log to Log Detective and await the results.
@@ -151,6 +156,9 @@ def is_eligible_package(project_name: str):
      return True


+ @backoff.on_exception(
+     backoff.expo, ConnectionResetError, max_time=60, on_giveup=connection_error_giveup
+ )
  async def retrieve_and_preprocess_koji_logs(
      gitlab_cfg: GitLabInstanceConfig,
      job: gitlab.v4.objects.ProjectJob,
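
The new decorator retries `retrieve_and_preprocess_koji_logs` with exponential backoff whenever GitLab resets the connection, giving up after 60 seconds. The `connection_error_giveup` handler is imported from `logdetective/server/utils.py` and its body is not part of this diff; a plausible sketch of the pattern, with the handler implementation as an assumption:

    import backoff

    class LogDetectiveConnectionError(ConnectionError):
        """Raised once retries against GitLab are exhausted."""

    def connection_error_giveup(details) -> None:
        # backoff invokes this after max_time elapses; `details` carries
        # bookkeeping such as the try count and elapsed seconds.
        raise LogDetectiveConnectionError(
            f"connection kept resetting after {details['tries']} tries"
        )

    @backoff.on_exception(
        backoff.expo, ConnectionResetError, max_time=60, on_giveup=connection_error_giveup
    )
    async def fetch_job_artifacts(url: str) -> bytes:
        ...  # the decorated coroutine is retried transparently

Raising from the giveup handler is what would let the caller in `process_gitlab_job_event` catch `LogDetectiveConnectionError` alongside `LogsTooLargeError`, as shown in the hunk above.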
@@ -256,8 +264,7 @@ async def retrieve_and_preprocess_koji_logs(
      LOG.debug("Failed architecture: %s", failed_arch)

      log_path = failed_arches[failed_arch].as_posix()
-
-     log_url = f"{gitlab_cfg.api_path}/projects/{job.project_id}/jobs/{job.id}/artifacts/{log_path}"  # pylint: disable=line-too-long
+     log_url = f"{gitlab_cfg.url}/{gitlab_cfg.api_path}/projects/{job.project_id}/jobs/{job.id}/artifacts/{log_path}"  # pylint: disable=line-too-long
      LOG.debug("Returning contents of %s%s", gitlab_cfg.url, log_url)

      # Return the log as a file-like object with .read() function
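
The one-line fix prepends `gitlab_cfg.url`, turning the previously host-relative artifact path into an absolute URL. With made-up values for illustration:

    # Assumed example values, not taken from the diff:
    gitlab_url = "https://gitlab.example.com"  # gitlab_cfg.url
    api_path = "api/v4"                        # gitlab_cfg.api_path
    project_id, job_id = 123, 456
    log_path = "kojilogs/noarch/build.log"

    # 2.2.1 built "api/v4/projects/..." with no host; 2.3.0 yields a full URL:
    log_url = f"{gitlab_url}/{api_path}/projects/{project_id}/jobs/{job_id}/artifacts/{log_path}"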
@@ -307,6 +307,7 @@ class GitLabInstanceConfig(BaseModel):  # pylint: disable=too-many-instance-attr

      name: str = None
      url: str = None
+     # Path to API of the gitlab instance, assuming `url` as prefix.
      api_path: str = None
      api_token: str = None

@@ -0,0 +1,96 @@
+ = logdetective(1)
+ :doctype: manpage
+ :man source: logdetective 1.0
+ :man manual: User Commands
+
+ == NAME
+
+ logdetective - Analyze and summarize log files using LLM and Drain templates
+
+ == SYNOPSIS
+
+ *logdetective* [_OPTIONS_] *file*
+
+ == DESCRIPTION
+
+ *logdetective* is a tool that analyzes log files with a LLM using the Drain log template miner. It can consume logs from a local path or a URL, summarize them, and cluster them for easier inspection.
+
+ == POSITIONAL ARGUMENTS
+
+ *file*::
+ The URL or path to the log file to be analyzed.
+
+ == OPTIONS
+
+ *-h*, *--help*::
+ Show usage description and exit.
+
+ *-M* *MODEL*, *--model* *MODEL*::
+ The path to the language model for analysis (if stored locally). You can also specify the model by name based on the repo on Hugging face (see Examples). Repo id must be in the form `'namespace/repo_name'`. As we are using `LLama.cpp` we want this to be in the gguf format. If the model is already on your machine it will skip the download. (optional, default: "fedora-copr/Mistral-7B-Instruct-v0.3-GGUF")
+
+ *-F* *FILENAME_SUFFIX*, *--filename_suffix* *FILENAME_SUFFIX*::
+ Define the suffix of the model file name to retrieve from Hugging Face. This option only applies when the model is specified by its Hugging face repo name, and not its path. (default `Q4_K.gguf`)
+
+ *-n*, *--no-stream*::
+ Disable streaming output of analysis results.
+
+ *-C* *N_CLUSTERS*, *--n_clusters* *N_CLUSTERS*::
+ Number of clusters to use with the Drain summarizer. Ignored if `LLM` summarizer is selected. (optional, default 8)
+
+ *-v*, *--verbose*::
+ Enable verbose output during processing (use `-vv` or `-vvv` for higher levels of verbosity).
+
+ *-q*, *--quiet*::
+ Suppress non-essential output.
+
+ *--prompts* *PROMPTS_FILE*::
+ Path to prompt configuration file where you can customize (override default) prompts sent to the LLM. See https://github.com/fedora-copr/logdetective/blob/main/logdetective/prompts.yml for reference.
+ +
+ Prompts need to have a form compatible with Python format string syntax (see https://docs.python.org/3/library/string.html#format-string-syntax) with spaces, or replacement fields marked with curly braces, `{}` left for insertion of snippets. Number of replacement fields in new prompts must be the same as in original, although their position may be different.
+
+ *--temperature* *TEMPERATURE*::
+ Temperature for inference. Higher temperature leads to more creative and variable outputs. (default 0.8)
+
+ *--csgrep*::
+ Use `csgrep` to process the log before analysis. When working with logs containing messages from GCC, it can be beneficial to employ additional extractor based on `csgrep` tool, to ensure that the messages are kept intact. Since `csgrep` is not available as a Python package, it must be installed separately, with a package manager or from https://github.com/csutils/csdiff.
+
+ *--skip_snippets* *SNIPPETS_FILE*::
+ Path to patterns for skipping snippets. User can specify regular expressions matching log chunks (which may not contribute to the analysis of the problem), along with simple description. Patterns to be skipped must be defined in a `yaml` file as a dictionary, where key is a description and value is a regex.
+ +
+ Examples:
+
+ contains_capital_a: "^.*A.*"
+
+ starts_with_numeric: "^[0-9].*"
+
+ child_exit_code_zero: "Child return code was: 0"
+
+ == EXAMPLES
+
+ Example usage:
+
+ $ logdetective https://example.com/logs.txt
+
+ Or if the log file is stored locally:
+
+ $ logdetective ./data/logs.txt
+
+ Analyze a local log file using a LLM found locally:
+
+ $ logdetective -M /path/to/llm /var/log/syslog
+
+ With specific model from HuggingFace (`namespace/repo_name`, note that `--filename_suffix` is also needed):
+
+ $ logdetective https://example.com/logs.txt --model QuantFactory/Meta-Llama-3-8B-Instruct-GGUF --filename_suffix Q5_K_S.gguf
+
+ Cluster logs from a URL (using Drain):
+
+ $ logdetective -C 10 https://example.com/logs.txt
+
+ == SEE ALSO
+
+ https://logdetective.com
+
+ == ADDITIONAL NOTES
+
+ Note that *logdetective* works as intended with instruction tuned text generation models in GGUF format.
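
The `--skip_snippets` format documented above is a flat YAML mapping of description to regular expression. A minimal sketch of how such a file might be loaded and applied; the helper names are illustrative, not logdetective's own:

    import re
    import yaml

    def load_skip_patterns(path: str) -> dict[str, re.Pattern]:
        # The YAML maps a human-readable description to a regex, e.g.
        # {"child_exit_code_zero": "Child return code was: 0"}.
        with open(path, encoding="utf-8") as fp:
            raw = yaml.safe_load(fp) or {}
        return {desc: re.compile(pattern) for desc, pattern in raw.items()}

    def keep_snippet(snippet: str, patterns: dict[str, re.Pattern]) -> bool:
        # Drop a snippet as soon as any skip pattern matches anywhere in it.
        return not any(p.search(snippet) for p in patterns.values())

Likewise, the `--prompts` override requires custom prompts to keep the same number of `{}` replacement fields as the defaults; a quick illustration of that format-string contract (prompt text made up):

    DEFAULT_PROMPT = "Explain the following log snippets:\n{}"
    custom_prompt = "You are a build-log expert. The snippets follow:\n{}"

    # Both carry exactly one replacement field, so either is accepted:
    print(custom_prompt.format("error: linker command failed"))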
@@ -1,6 +1,6 @@
  [tool.poetry]
  name = "logdetective"
- version = "2.2.1"
+ version = "2.3.0"
  description = "Log using LLM AI to search for build/test failures and provide ideas for fixing these."
  authors = ["Jiri Podivin <jpodivin@gmail.com>"]
  license = "Apache-2.0"
@@ -1,85 +0,0 @@
- = logdetective(1)
- :doctype: manpage
- :man source: logdetective 1.0
- :man manual: User Commands
-
- == NAME
-
- logdetective - Analyze and summarize log files using LLM or Drain templates
-
- == SYNOPSIS
-
- *logdetective* [_OPTIONS_] *file*
-
- == DESCRIPTION
-
- *logdetective* is a tool that analyzes log files using either a large language
- model (LLM) or the Drain log template miner. It can consume logs from a local
- path or a URL, summarize them, and cluster them for easier inspection.
-
- == POSITIONAL ARGUMENTS
-
- *file*::
- The URL or path to the log file to be analyzed.
-
- == OPTIONS
-
- *-h*, *--help*::
- Show this help message and exit.
-
- *-M* *MODEL*, *--model* *MODEL*::
- The path or URL of the language model for analysis. As we are using LLama.cpp we want this to be in the gguf format. You can include the download link to the model here. If the model is already on your machine it will skip the download. (optional, default: "Mistral-7B-Instruct-v0.2-GGUF")
-
- *-F* *FILENAME_SUFFIX*, *--filename_suffix* *FILENAME_SUFFIX*::
- Define the suffix of the model file name to retrieve from Hugging Face. This option only applies when the model is specified by name (not a path).
-
- *-n*, *--no-stream*::
- Disable streaming output of analysis results.
-
- *-S* *SUMMARIZER*, *--summarizer* *SUMMARIZER*::
- Choose between LLM and Drain template miner as the log summarizer. You can also provide the path to an existing language model file instead of using a URL. (optional, default: "drain")
-
- *-N* *N_LINES*, *--n_lines* *N_LINES*::
- Number of lines per chunk for LLM analysis. Only applicable when `LLM` is used as the summarizer. (optional, default: 8)
-
- *-C* *N_CLUSTERS*, *--n_clusters* *N_CLUSTERS*::
- Number of clusters to use with the Drain summarizer. Ignored if `LLM` summarizer is selected. (optional, default 8)
-
- *-v*, *--verbose*::
- Enable verbose output during processing.
-
- *-q*, *--quiet*::
- Suppress non-essential output.
-
- *--prompts* *PROMPTS*::
- Path to prompt configuration file where you can customize prompts sent to `LLM`.
-
- *--temperature* *TEMPERATURE*::
- Temperature for inference.
-
- == EXAMPLES
-
- Example usage:
-
- $ logdetective https://example.com/logs.txt
-
- Or if the log file is stored locally:
-
- $ logdetective ./data/logs.txt
-
- Analyze a local log file using an LLM model:
-
- $ logdetective -M /path/to/llm-model -S LLM -N 100 /var/log/syslog
-
- With specific models:
-
- $ logdetective https://example.com/logs.txt --model https://huggingface.co/QuantFactory/Meta-Llama-3-8B-Instruct-GGUF/resolve/main/Meta-Llama-3-8B-Instruct.Q5_K_S.gguf?download=true
- $ logdetective https://example.com/logs.txt --model QuantFactory/Meta-Llama-3-8B-Instruct-GGUF
-
- Cluster logs from a URL using Drain:
-
- $ logdetective -S Drain -C 10 https://example.com/logs.txt
-
- == SEE ALSO
-
- https://logdetective.com