pedicab 0.3.1 → 0.3.3

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (43)
  1. checksums.yaml +4 -4
  2. data/API.md +401 -0
  3. data/EXAMPLES.md +884 -0
  4. data/Gemfile.lock +10 -24
  5. data/INSTALLATION.md +652 -0
  6. data/README.md +329 -10
  7. data/lib/pedicab/#city.rb# +27 -0
  8. data/lib/pedicab/ride.rb +60 -81
  9. data/lib/pedicab/version.rb +1 -1
  10. data/lib/pedicab.py +3 -8
  11. data/lib/pedicab.rb +141 -133
  12. metadata +6 -89
  13. data/#README.md# +0 -51
  14. data/books/Arnold_Bennett-How_to_Live_on_24_Hours_a_Day.txt +0 -1247
  15. data/books/Edward_L_Bernays-crystallizing_public_opinion.txt +0 -4422
  16. data/books/Emma_Goldman-Anarchism_and_Other_Essays.txt +0 -7654
  17. data/books/Office_of_Strategic_Services-Simple_Sabotage_Field_Manual.txt +0 -1057
  18. data/books/Sigmund_Freud-Group_Psychology_and_The_Analysis_of_The_Ego.txt +0 -2360
  19. data/books/Steve_Hassan-The_Bite_Model.txt +0 -130
  20. data/books/Steve_Hassan-The_Bite_Model.txt~ +0 -132
  21. data/books/Sun_Tzu-Art_of_War.txt +0 -159
  22. data/books/Sun_Tzu-Art_of_War.txt~ +0 -166
  23. data/books/US-Constitution.txt +0 -502
  24. data/books/US-Constitution.txt~ +0 -502
  25. data/books/cia-kubark.txt +0 -4637
  26. data/books/machiavelli-the_prince.txt +0 -4599
  27. data/books/sun_tzu-art_of_war.txt +0 -1017
  28. data/books/us_army-bayonette.txt +0 -843
  29. data/lib/pedicab/calc.rb~ +0 -8
  30. data/lib/pedicab/link.rb +0 -38
  31. data/lib/pedicab/link.rb~ +0 -14
  32. data/lib/pedicab/mark.rb +0 -9
  33. data/lib/pedicab/mark.rb~ +0 -5
  34. data/lib/pedicab/on.rb +0 -6
  35. data/lib/pedicab/on.rb~ +0 -6
  36. data/lib/pedicab/poke.rb +0 -14
  37. data/lib/pedicab/poke.rb~ +0 -15
  38. data/lib/pedicab/query.rb +0 -92
  39. data/lib/pedicab/query.rb~ +0 -93
  40. data/lib/pedicab/rank.rb +0 -92
  41. data/lib/pedicab/rank.rb~ +0 -89
  42. data/lib/pedicab/ride.rb~ +0 -101
  43. data/lib/pedicab.sh~ +0 -3
data/Gemfile.lock CHANGED
```diff
@@ -1,42 +1,28 @@
 PATH
   remote: .
   specs:
-    pedicab (0.3.0)
+    pedicab (0.3.2)
       benchmark
-      eqn
-      httparty
+      carpet
       json
-      nokogiri
       open3
-      redcarpet

 GEM
   remote: https://rubygems.org/
   specs:
     benchmark (0.5.0)
-    bigdecimal (3.3.1)
-    csv (3.3.5)
-    eqn (1.6.5)
-      treetop (>= 1.2.0)
-    httparty (0.23.2)
-      csv
-      mini_mime (>= 1.0.0)
-      multi_xml (>= 0.5.2)
+    carpet (0.1.0)
+      pry
+      redcarpet
+    coderay (1.1.3)
     json (2.16.0)
-    mini_mime (1.1.5)
-    mini_portile2 (2.8.9)
-    multi_xml (0.7.1)
-      bigdecimal (~> 3.1)
-    nokogiri (1.18.10)
-      mini_portile2 (~> 2.8.2)
-      racc (~> 1.4)
+    method_source (1.1.0)
     open3 (0.2.1)
-    polyglot (0.3.5)
-    racc (1.8.1)
+    pry (0.15.2)
+      coderay (~> 1.1)
+      method_source (~> 1.0)
     rake (13.0.6)
     redcarpet (3.6.1)
-    treetop (1.6.18)
-      polyglot (~> 0.3)

 PLATFORMS
   x86_64-linux
```
data/INSTALLATION.md ADDED
@@ -0,0 +1,652 @@
# Installation and Setup Guide

## Table of Contents

1. [System Requirements](#system-requirements)
2. [Installation](#installation)
3. [Model Setup](#model-setup)
4. [Configuration](#configuration)
5. [Verification](#verification)
6. [Troubleshooting](#troubleshooting)
7. [Platform-Specific Notes](#platform-specific-notes)

---

## System Requirements

### Minimum Requirements

- **Ruby**: >= 2.6.0
- **Python**: >= 3.7
- **RAM**: 4GB minimum (8GB+ recommended for larger models)
- **Storage**: 2GB free space for models + application
- **OS**: Linux, macOS, or Windows (with WSL2)

### Recommended Requirements

- **Ruby**: >= 3.0.0
- **Python**: >= 3.9
- **RAM**: 16GB+ for optimal performance
- **GPU**: CUDA-compatible GPU (optional, for acceleration)
- **Storage**: 10GB+ free space

### Software Dependencies

#### Ruby Gems

- `benchmark` (built-in)
- `open3` (built-in)
- `json` (built-in)
- `erb` (built-in)
- `carpet`

#### Python Packages

- `llama-cpp-python`

---

## Installation

### Option 1: Install from RubyGems (Recommended)

```bash
# Install the gem
gem install pedicab

# Install Python dependency
pip install llama-cpp-python
```

### Option 2: Install from Source

```bash
# Clone the repository
git clone https://github.com/xorgnak/pedicab.git
cd pedicab

# Install the gem
bundle install
rake install

# Install Python dependencies
pip install llama-cpp-python
```

### Option 3: Development Setup

```bash
# Clone for development
git clone https://github.com/xorgnak/pedicab.git
cd pedicab

# Install development dependencies
bundle install

# Install Python dependencies
pip install llama-cpp-python

# Run tests (if available)
rake test
```

---

## Model Setup

### Model Format Requirements

Pedicab uses **GGUF** (GPT-Generated Unified Format) models. These are:

- Quantized models optimized for CPU inference
- Compatible with llama.cpp and llama-cpp-python
- Single file containing the complete model
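A quick way to verify a downloaded model is intact: GGUF files begin with the 4-byte ASCII magic `GGUF`. A minimal Ruby sketch (the `gguf_file?` helper is illustrative, not part of the pedicab API):

```ruby
# Returns true if the file starts with the GGUF magic bytes ("GGUF").
# Illustrative helper -- not part of the pedicab API.
def gguf_file?(path)
  File.open(path, 'rb') { |f| f.read(4) } == 'GGUF'
end
```

A truncated download, or an HTML error page saved as `.gguf`, fails this check immediately, long before the backend tries to load the model.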

### Recommended Models

#### Small Models (1-3GB)

- **Qwen2.5-1.5B-Instruct-Q4_K_M.gguf** - Good balance of speed and quality
- **Phi-3-mini-4k-instruct-q4.gguf** - Excellent for general tasks
- **Gemma-2B-it-Q4_K_M.gguf** - Google's efficient model

#### Medium Models (4-7GB)

- **Qwen2.5-7B-Instruct-Q4_K_M.gguf** - High quality, reasonably fast
- **Llama-3.1-8B-Instruct-Q4_K_M.gguf** - Very capable model
- **Mistral-7B-Instruct-v0.2-Q4_K_M.gguf** - Popular and reliable

#### Large Models (8GB+)

- **Qwen2.5-14B-Instruct-Q4_K_M.gguf** - Excellent reasoning
- **Llama-3.1-70B-Instruct-Q4_K_M.gguf** - Top-tier performance (requires significant RAM)

### Downloading Models

#### Method 1: Hugging Face

```bash
# Example: Download Qwen 1.5B model
pip install huggingface_hub
python -c "
from huggingface_hub import hf_hub_download
hf_hub_download(
    repo_id='Qwen/Qwen2.5-1.5B-Instruct-GGUF',
    filename='qwen2.5-1.5b-instruct-q4_k_m.gguf',
    local_dir='/models'
)
"
```

#### Method 2: Direct Download

```bash
# Create models directory
sudo mkdir -p /models
sudo chmod 755 /models

# Download model (example URLs - replace with actual URLs)
wget -O /models/qwen.gguf https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct-GGUF/resolve/main/qwen2.5-1.5b-instruct-q4_k_m.gguf
```

#### Method 3: Using Model Managers

```bash
# Using ollama (if installed)
ollama pull qwen2.5:1.5b
# Then locate the GGUF file in ollama's model directory and copy it to /models/
```

### Model Organization

```
/models/
├── qwen.gguf          # Default model (named for easy reference)
├── llama2-7b.gguf     # Larger model for complex tasks
├── mistral-7b.gguf    # Alternative model
└── gemma-2b.gguf      # Small fast model
```

---

## Configuration

### Environment Variables

Create or edit your shell configuration file (`~/.bashrc`, `~/.zshrc`, etc.):

```bash
# Default model to use
export MODEL=qwen

# Debug level (0=off, 1=basic, 2=verbose)
export DEBUG=0

# Optional: Custom model directory
export PEDICAB_MODELS_DIR="/models"

# Optional: Python backend path (usually not needed)
export PEDICAB_PYTHON_PATH="/usr/bin/python3"
```

Apply the changes:

```bash
source ~/.bashrc  # or ~/.zshrc
```

### System-wide Installation

#### Install Python Backend System-wide

```bash
# Copy Python backend to system location
sudo cp lib/pedicab.py /usr/lib/
sudo cp lib/pedicab.sh /usr/bin/pedicab
sudo chmod +x /usr/bin/pedicab

# Make models directory system-wide
sudo mkdir -p /models
sudo chmod 755 /models
```

#### Create Wrapper Script (Optional)

Create `/usr/local/bin/pedicab-ruby`:

```bash
#!/bin/bash
export MODEL=${MODEL:-qwen}
export DEBUG=${DEBUG:-0}
ruby -r pedicab -e "$*"
```

Make it executable:

```bash
sudo chmod +x /usr/local/bin/pedicab-ruby
```

### Development Configuration

For development, you might want to set up:

```bash
# Development environment
export PEDICAB_DEV=1
export PEDICAB_MODELS_DIR="./models"  # Local models directory
export DEBUG=2                        # Verbose debugging
```

---

## Verification

### Basic Installation Test

Create a test script `test_pedicab.rb`:

```ruby
#!/usr/bin/env ruby
require 'pedicab'

puts "Testing Pedicab installation..."

# Test 1: Check models
puts "Available models: #{Pedicab.models.inspect}"

# Test 2: Basic conversation
begin
  ai = Pedicab['test']
  response = ai["Say 'Hello, World!'"]
  puts "Response: #{response.out}"
  puts "Time: #{response.took}s"

  if response.out.include?("Hello")
    puts "✓ Basic conversation test passed"
  else
    puts "✗ Unexpected response"
  end
rescue => e
  puts "✗ Error: #{e.message}"
end

# Test 3: Conditional logic
begin
  if ai.ride.if?("1+1 equals 2")
    puts "✓ Conditional logic test passed"
  else
    puts "✗ Conditional logic test failed"
  end
rescue => e
  puts "✗ Error in conditional test: #{e.message}"
end

puts "Testing complete!"
```

Run the test:

```bash
ruby test_pedicab.rb
```

### Performance Benchmark

Create `benchmark.rb`:

```ruby
require 'pedicab'
require 'benchmark'

puts "Running Pedicab performance benchmark..."

ai = Pedicab['benchmark']
prompts = [
  "What is Ruby?",
  "Explain variables",
  "What are loops?",
  "Define functions",
  "How does OOP work?"
]

total_time = Benchmark.realtime do
  prompts.each_with_index do |prompt, i|
    puts "Test #{i + 1}: #{prompt}"
    response = ai[prompt]
    puts "  Response time: #{response.took.round(3)}s"
    puts "  Response length: #{response.out.length} chars"
    puts
  end
end

puts "Total time: #{total_time.round(3)}s"
puts "Average per request: #{(total_time / prompts.length).round(3)}s"
puts "Total conversation time: #{ai.life.round(3)}s"
```

Run the benchmark:

```bash
ruby benchmark.rb
```

---

## Troubleshooting

### Common Issues

#### 1. "No such file or directory - pedicab"

**Problem**: The Python backend script isn't found.

**Solutions**:
```bash
# Check if the pedicab command exists
which pedicab

# Install system-wide
sudo cp lib/pedicab.py /usr/lib/
sudo cp lib/pedicab.sh /usr/bin/pedicab
sudo chmod +x /usr/bin/pedicab

# Or update PATH
export PATH="$PATH:$(pwd)/lib"
```

#### 2. "Model file not found"

**Problem**: The specified GGUF model doesn't exist.

**Solutions**:
```bash
# Check models directory
ls -la /models/

# Check model naming
export MODEL="qwen"  # Don't include the .gguf extension

# Download a model
wget -O /models/qwen.gguf [model-url]
```
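The naming rule above implies a simple mapping from `MODEL` to a file path; a sketch of that convention (the `model_path` helper is hypothetical, shown only to make the `/models/<name>.gguf` rule concrete):

```ruby
# Map a bare model name (e.g. MODEL=qwen) to its GGUF file, following
# the /models/<name>.gguf convention. Hypothetical helper for illustration.
def model_path(name, dir = ENV['PEDICAB_MODELS_DIR'] || '/models')
  File.join(dir, "#{name}.gguf")
end

model_path('qwen')  # => "/models/qwen.gguf" (with PEDICAB_MODELS_DIR unset)
```

If this resolved path doesn't point at an existing file, a "Model file not found" error is the expected result.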

#### 3. "llama-cpp-python not found"

**Problem**: Python dependency missing.

**Solutions**:
```bash
# Install llama-cpp-python
pip install llama-cpp-python

# With hardware acceleration (if supported)
pip install llama-cpp-python --prefer-binary --extra-index-url=https://jllllll.github.io/llama-cpp-python-cuBLAS-wheels/AVX2/cu118

# Verify installation
python -c "import llama_cpp; print('llama-cpp-python installed')"
```

#### 4. "Permission denied" errors

**Problem**: File permission issues.

**Solutions**:
```bash
# Fix models directory permissions
sudo chmod 755 /models
sudo chown $USER:$USER /models/*.gguf

# Fix script permissions
chmod +x lib/pedicab.sh
```

#### 5. "Out of memory" errors

**Problem**: Model too large for available RAM.

**Solutions**:
```bash
# Use a smaller model
export MODEL="gemma"  # Instead of large models

# Or quantize more aggressively (download Q2 or Q3 versions)

# Monitor memory usage
free -h
```

#### 6. Slow responses

**Problem**: Model inference is slow.

**Solutions**:
```bash
# Check system resources
top
htop

# Use a smaller/faster model
export MODEL="phi-3-mini"

# Enable GPU acceleration (if available)
pip install llama-cpp-python --prefer-binary --extra-index-url=https://jllllll.github.io/llama-cpp-python-cuBLAS-wheels/AVX2/cu118
```

### Debug Mode

Enable verbose debugging to diagnose issues:

```bash
export DEBUG=2
ruby your_script.rb
```

Debug output includes:

- Action descriptions
- Processing times
- Full state information
- Error stack traces
- Communication protocol details

### Log Collection

For bug reports, collect this information:

```bash
# System info
ruby -v
python -c "import llama_cpp; print(llama_cpp.__version__)"

# Environment
env | grep -E "(MODEL|DEBUG|PEDICAB)"

# Models
ls -la /models/

# Test with debugging
DEBUG=2 ruby -e "require 'pedicab'; puts Pedicab['test']['Hello']"
```

---

## Platform-Specific Notes

### Linux

#### Ubuntu/Debian
```bash
# Install Ruby
sudo apt update
sudo apt install ruby ruby-dev build-essential

# Install Python
sudo apt install python3 python3-pip

# Install dependencies
sudo gem install pedicab
pip3 install llama-cpp-python
```

#### CentOS/RHEL/Fedora
```bash
# Ruby
sudo dnf install ruby ruby-devel

# Python
sudo dnf install python3 python3-pip

# Development tools
sudo dnf groupinstall "Development Tools"
```

### macOS

#### Using Homebrew
```bash
# Install Ruby (if the system version is too old)
brew install ruby

# Install Python
brew install python3

# Install gems
gem install pedicab

# Install Python packages
pip3 install llama-cpp-python
```

#### Using MacPorts
```bash
# Install Ruby
sudo port install ruby27

# Install Python
sudo port install python39

# Install packages
sudo gem install pedicab
sudo pip3.9 install llama-cpp-python
```

### Windows (WSL2)

#### Setup WSL2
```powershell
# Enable WSL2
wsl --install

# Install Ubuntu (recommended)
wsl --install -d Ubuntu
```

#### Inside WSL2 (Ubuntu)
```bash
# Update system
sudo apt update && sudo apt upgrade -y

# Install required packages
sudo apt install ruby ruby-dev build-essential python3 python3-pip -y

# Install gems and Python packages
gem install pedicab
pip3 install llama-cpp-python
```

### Docker Setup

#### Dockerfile
```dockerfile
FROM ruby:3.1

# Install system dependencies
RUN apt-get update && apt-get install -y \
    python3 \
    python3-pip \
    build-essential \
    && rm -rf /var/lib/apt/lists/*

# Install Python dependencies
RUN pip3 install llama-cpp-python

# Install Ruby gem
RUN gem install pedicab

# Create models directory
RUN mkdir -p /models

# Copy models (ADD your models here)
# COPY models/ /models/

# Set environment
ENV MODEL=qwen
ENV DEBUG=0

# Add script
COPY your_script.rb /app/
WORKDIR /app

CMD ["ruby", "your_script.rb"]
```

#### Docker Compose
```yaml
version: '3.8'
services:
  pedicab-app:
    build: .
    volumes:
      - ./models:/models:ro
      - ./logs:/app/logs
    environment:
      - MODEL=qwen
      - DEBUG=0
    command: ruby app.rb
```

---

## Advanced Configuration

### Custom Model Directory

```ruby
# In your Ruby code
ENV['PEDICAB_MODELS_DIR'] = '/path/to/my/models'
```

```bash
# Or in the environment
export PEDICAB_MODELS_DIR="/path/to/my/models"
```

### Custom Python Backend

```ruby
# Override Python backend path
ENV['PEDICAB_PYTHON_PATH'] = '/usr/local/bin/python3'
```

### Performance Tuning

#### For CPU-only systems:
```bash
# Optimize for CPU
export OMP_NUM_THREADS=4  # Adjust based on your CPU cores
```
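Rather than hardcoding the thread count, you can size it to the machine at runtime using Ruby's stdlib; a sketch, assuming (as the setting above does) that the inference backend honors `OMP_NUM_THREADS`:

```ruby
require 'etc'

# Set OMP_NUM_THREADS to the machine's core count unless the caller
# already chose a value. Assumes the backend reads OMP_NUM_THREADS.
ENV['OMP_NUM_THREADS'] ||= Etc.nprocessors.to_s
puts "OMP_NUM_THREADS=#{ENV['OMP_NUM_THREADS']}"
```

Using `||=` keeps an explicitly exported value from the shell intact.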

#### For GPU systems:
```bash
# Install CUDA-enabled llama-cpp-python
pip install llama-cpp-python --prefer-binary --extra-index-url=https://jllllll.github.io/llama-cpp-python-cuBLAS-wheels/AVX2/cu118
```

---

## Next Steps

After successful installation:

1. **Read the main README** for basic usage
2. **Check the API documentation** for detailed method reference
3. **Try the examples** to see practical implementations
4. **Join the community** for support and discussions

If you encounter any issues during installation, please check the troubleshooting section or create an issue on the GitHub repository.

---

*Having trouble? Check our [GitHub Issues](https://github.com/xorgnak/pedicab/issues) for common solutions.*