groq 0.2.0 → 0.3.1

checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
  SHA256:
- metadata.gz: 8d1461971dedb839a98ceba16edeec695a3fbc48216295314e6c319e5976f621
- data.tar.gz: ac0437a0a14d79c9faab3c88054100928970606e90997187c6e908e67a67dc8c
+ metadata.gz: a27c810eca4d98436dc29bad4d48332a0f00440b3da642895d15739e43f9d9b0
+ data.tar.gz: 021b1ca81acc6d07d059b49b9e4a4771997fbcf2f5f8b026872a28a17052b936
  SHA512:
- metadata.gz: 422b5c160196127928397e568aa15e76dc4f63d1388391bce9cae4ad4d6d0b0fb4063fb52126f04a5b32667179532ea62af96f64acb785ffffed875d4c0646cb
- data.tar.gz: a537f489dedaa533e9fdb444c6e6d3007dab7164c8306260b0409cd5d2b8bf8802373320d7a79016f04a8e71295845f2504162aad8d7f4997db30c2cad7e32f5
+ metadata.gz: beafd45e9e8d1fa716f7fbe409b700d5759d55969fc35fef905aff1e02a340b969170ebc4c19ae267755df74325aa88ad9076cb799877de6497b9c2bd393b6df
+ data.tar.gz: f99f6a845c58eec585508f7e90bb88717152d72a5e535462e2d2a470fd1742974968dac0c901626d03acc39a4cbef5a3fe43a00c49c1a87d66ad0ea966c5bd3d
data/README.md CHANGED
@@ -69,7 +69,6 @@ If bundler is not being used to manage dependencies, install the gem by executin
  ```plain
  gem install groq
  ```
-
  ## Usage
 
  - Get your API key from [console.groq.com/keys](https://console.groq.com/keys)
@@ -80,13 +79,19 @@ gem install groq
  client = Groq::Client.new # uses ENV["GROQ_API_KEY"] and "llama3-8b-8192"
  client = Groq::Client.new(api_key: "...", model_id: "llama3-8b-8192")
 
- Groq.configuration do |config|
+ Groq.configure do |config|
    config.api_key = "..."
    config.model_id = "llama3-70b-8192"
  end
  client = Groq::Client.new
  ```
 
+ In a Rails application, you can generate a `config/initializers/groq.rb` file with:
+
+ ```plain
+ rails g groq:install
+ ```
+
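+ The generated initializer (see `lib/generators/groq/install_generator.rb` below) sets the API key from the environment and picks a default model:
+
+ ```ruby
+ # frozen_string_literal: true
+
+ Groq.configure do |config|
+   config.api_key = ENV["GROQ_API_KEY"]
+   config.model_id = "llama3-70b-8192"
+ end
+ ```
+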
  There is a simple chat function to send messages to a model:
 
  ```ruby
@@ -166,7 +171,7 @@ As above, you can specify the default model to use for all `chat()` calls:
  ```ruby
  client = Groq::Client.new(model_id: "llama3-70b-8192")
  # or
- Groq.configuration do |config|
+ Groq.configure do |config|
    config.model_id = "llama3-70b-8192"
  end
  ```
@@ -190,9 +195,8 @@ end
  The output might look similar to:
 
  ```plain
- User message: Hello, world!
+ > User message: Hello, world!
  Assistant reply with model llama3-8b-8192:
- {"role"=>"assistant", "content"=>"Hello, world! It's great to meet you! Is there something I can help you with, or would you like to chat?"}
  Assistant reply with model llama3-70b-8192:
  {"role"=>"assistant", "content"=>"The classic \"Hello, world!\" It's great to see you here! Is there something I can help you with, or would you like to just chat?"}
  Assistant reply with model llama2-70b-4096:
@@ -227,6 +231,33 @@ JSON.parse(response["content"])
  # => {"number"=>7}
  ```
 
+ ### Using dry-schema with JSON mode
+
+ As a bonus, the `S` or `System` helper can take a `json_schema:` argument, and the system message will include the `JSON` keyword and the formatted schema in its content.
+
+ For example, if you're using [dry-schema](https://dry-rb.org/gems/dry-schema/1.13/extensions/json_schema/) with its `:json_schema` extension, you can use Ruby to describe a JSON schema:
+
+ ```ruby
+ require "dry-schema"
+ Dry::Schema.load_extensions(:json_schema)
+
+ person_schema_defn = Dry::Schema.JSON do
+   required(:name).filled(:string)
+   optional(:age).filled(:integer)
+   optional(:email).filled(:string)
+ end
+ person_schema = person_schema_defn.json_schema
+
+ response = @client.chat([
+   S("You're excellent at extracting personal information", json_schema: person_schema),
+   U("I'm Dr Nic and I'm almost 50.")
+ ], json: true)
+ JSON.parse(response["content"])
+ # => {"name"=>"Dr Nic", "age"=>49}
+ ```
+
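+ Under the hood (see the `helpers.rb` diff below), the helper simply appends the schema to the system message content, roughly:
+
+ ```ruby
+ S("You're excellent at extracting personal information", json_schema: person_schema)
+ # => {role: "system", content: "You're excellent at extracting personal information\nJSON must use schema: {...}"}
+ ```
+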
+ NOTE: `bin/console` already loads the `dry-schema` library and the `json_schema` extension because it's handy.
+
  ### Tools/Functions
 
  LLMs increasingly support deferring to tools or functions to fetch data, perform calculations, or store structured data. Groq Cloud in turn supports their tool implementations through its API.
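+
+ This release also adds a `tool_choice:` keyword argument to `chat()` (see the `client.rb` diff below). A minimal sketch, assuming the OpenAI-compatible `"auto"` value and a hypothetical `get_weather` tool:
+
+ ```ruby
+ # Hypothetical tool definition, for illustration only
+ tools = [{
+   type: "function",
+   function: {
+     name: "get_weather",
+     description: "Get the current weather for a city",
+     parameters: {type: "object", properties: {city: {type: "string"}}}
+   }
+ }]
+ response = @client.chat([U("What's the weather in Brisbane?")], tools: tools, tool_choice: "auto")
+ ```
+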
@@ -298,10 +329,10 @@ The defaults are:
  => 1
  ```
 
- You can override them in the `Groq.configuration` block, or with each `chat()` call:
+ You can override them in the `Groq.configure` block, or with each `chat()` call:
 
  ```ruby
- Groq.configuration do |config|
+ Groq.configure do |config|
    config.max_tokens = 512
    config.temperature = 0.5
  end
@@ -309,6 +340,273 @@ end
  @client.chat("Hello, world!", max_tokens: 512, temperature: 0.5)
  ```
 
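+ This release also adds a `request_timeout` configuration key (in seconds). A minimal sketch, mirroring the example apps in `examples/`:
+
+ ```ruby
+ @client = Groq::Client.new(model_id: "llama3-70b-8192", request_timeout: 20)
+ ```
+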
+ ### Debugging API calls
+
+ The underlying HTTP library is Faraday, and you can enable debugging, or configure other Faraday internals, by passing a block to the `Groq::Client.new` constructor.
+
+ ```ruby
+ require 'logger'
+
+ # Create a logger instance
+ logger = Logger.new(STDOUT)
+ logger.level = Logger::DEBUG
+
+ @client = Groq::Client.new do |faraday|
+   # Log request and response bodies
+   faraday.response :logger, logger, bodies: true
+ end
+ ```
+
+ If you pass `--debug` to `bin/console` you will have this logger set up for you.
+
+ ```plain
+ bin/console --debug
+ ```
+
+ ### Streaming
+
+ If your AI assistant's responses are being telecast live to a human, then that human might want some progressive responses. The Groq API supports streaming responses.
+
+ Pass a block to `chat()` with either one or two arguments.
+
+ 1. The first argument is the string content chunk of the response.
+ 2. The optional second argument is the full response object from the API containing extra metadata.
+
+ The final block call signals the end of the response:
+
+ 1. The first argument will be `nil`.
+ 2. The optional second argument, the full response object, contains a summary of the Groq API usage, such as prompt tokens, prompt time, etc.
+
+ ```ruby
+ puts "🍕 "
+ messages = [
+   S("You are a pizza sales person."),
+   U("What do you sell?")
+ ]
+ @client.chat(messages) do |content|
+   print content
+ end
+ puts
+ ```
+
+ Each chunk of the response will be printed to the console as it is received. It will look pretty.
+
+ The default `llama3-8b-8192` model is very, very fast and you might not see any streaming. Try a slower model like `llama3-70b-8192` or `mixtral-8x7b-32768`.
+
+ ```ruby
+ @client = Groq::Client.new(model_id: "llama3-70b-8192")
+ @client.chat("Write a long poem about patience") do |content|
+   print content
+ end
+ puts
+ ```
+
+ Your block can take a second argument to receive the full response JSON object:
+
+ ```ruby
+ @client.chat("Write a long poem about patience") do |content, response|
+   pp content
+   pp response
+ end
+ ```
+
+ Alternatively, you can pass a `Proc`, or any object that responds to `call`, via a `stream:` keyword argument:
+
+ ```ruby
+ @client.chat("Write a long poem about patience", stream: ->(content) { print content })
+ ```
+
+ You can also use a class with a `call` method that takes either one or two arguments, as with the `Proc` discussion above.
+
+ ```ruby
+ class MessageBits
+   def initialize(emoji)
+     print "#{emoji} "
+     @bits = []
+   end
+
+   def call(content)
+     if content.nil?
+       puts
+     else
+       print(content)
+       @bits << content
+     end
+   end
+
+   def to_s
+     @bits.join("")
+   end
+
+   def to_assistant_message
+     Assistant(to_s)
+   end
+ end
+
+ bits = MessageBits.new("🍕")
+ @client.chat("Write a long poem about pizza", stream: bits)
+ ```
+
+ ## Examples
+
+ Here are some example uses of Groq via the `groq` gem and its syntax.
+
+ Also, see the [`examples/`](examples/) folder for more example apps.
+
+ ### Pizzeria agent
+
+ Talking with a pizzeria.
+
+ Our pizzeria agent can be as simple as a function that combines a system message and the current messages array:
+
+ ```ruby
+ @agent_message = <<~EOS
+   You are an employee at a pizza store.
+
+   You sell hawaiian, and pepperoni pizzas; in small and large sizes for $10, and $20 respectively.
+
+   Pickup only. Ready in 10 mins. Cash on pickup.
+ EOS
+
+ def chat_pizza_agent(messages)
+   @client.chat([
+     System(@agent_message),
+     *messages
+   ])
+ end
+ ```
+
+ Now for our first interaction:
+
+ ```ruby
+ messages = [U("Is this the pizza shop? Do you sell hawaiian?")]
+
+ response = chat_pizza_agent(messages)
+ puts response["content"]
+ ```
+
+ The output might be:
+
+ > Yeah! This is the place! Yes, we sell Hawaiian pizzas here! We've got both small and large sizes available for you. The small Hawaiian pizza is $10, and the large one is $20. Plus, because we're all about getting you your pizza fast, our pick-up time is only 10 minutes! So, what can I get for you today? Would you like to order a small or large Hawaiian pizza?
+
+ Continue with the user's reply.
+
+ Note that we build the `messages` array from the previous user and assistant messages plus the new user message:
+
+ ```ruby
+ messages << response << U("Yep, give me a large.")
+ response = chat_pizza_agent(messages)
+ puts response["content"]
+ ```
+
+ Response:
+
+ > I'll get that ready for you. So, to confirm, you'd like to order a large Hawaiian pizza for $20, and I'll have it ready for you in 10 minutes. When you come to pick it up, please have the cash ready as we're a cash-only transaction. See you in 10!
+
+ Making a change:
+
+ ```ruby
+ messages << response << U("Actually, make it two smalls.")
+ response = chat_pizza_agent(messages)
+ puts response["content"]
+ ```
+
+ Response:
+
+ > I've got it! Two small Hawaiian pizzas on the way! That'll be $20 for two small pizzas. Same deal, come back in 10 minutes to pick them up, and bring cash for the payment. See you soon!
+
+ ### Pizza customer agent
+
+ Oh my. Let's also have an agent that represents the customer.
+
+ ```ruby
+ @customer_message = <<~EOS
+   You are a customer at a pizza store.
+
+   You want to order a pizza. You can ask about the menu, prices, sizes, and pickup times.
+
+   You'll agree with the price and terms of the pizza order.
+
+   You'll make a choice of the available options.
+
+   If you're first in the conversation, you'll say hello and ask about the menu.
+ EOS
+
+ def chat_pizza_customer(messages)
+   @client.chat([
+     System(@customer_message),
+     *messages
+   ])
+ end
+ ```
+
+ The first interaction starts with no user or assistant messages; we're generating the customer's first message:
+
+ ```ruby
+ customer_messages = []
+ response = chat_pizza_customer(customer_messages)
+ puts response["content"]
+ ```
+
+ The customer's first message:
+
+ > Hello! I'd like to order a pizza. Could you tell me more about the menu and prices? What kind of pizzas do you have available?
+
+ Now we need to pass this to the pizzeria agent:
+
+ ```ruby
+ customer_message = response["content"]
+ pizzeria_messages = [U(customer_message)]
+ response = chat_pizza_agent(pizzeria_messages)
+ puts response["content"]
+ ```
+
+ Pizzeria agent response:
+
+ > Hi there! Yeah, sure thing! We've got two delicious options to choose from: Hawaiian and Pepperoni. Both come in small and large sizes. The small pizzas are $10 and the large pizzas are $20.
+ >
+ > Our Hawaiian pizza features fresh ham and pineapple on a bed of melted mozzarella. And if you're in the mood for something classic, our Pepperoni pizza is loaded with plenty of sliced pepperoni and melted mozzarella cheese.
+
+ Now let's add this response to the customer agent's message array, and generate the customer's next response to the pizzeria:
+
+ ```ruby
+ customer_messages << U(response["content"])
+ response = chat_pizza_customer(customer_messages)
+ puts response["content"]
+ ```
+
+ Customer agent response:
+
+ > Wow, those both sound delicious! I'm intrigued by the Hawaiian combo, I never thought of putting ham and pineapple on a pizza before. How would you recommend I customize it? Can I add any extra toppings or keep it as is? And do you have any recommendations for the size? Small or large?
+
+ Add this to the pizzeria agent's message array, and generate the pizzeria's response:
+
+ ```ruby
+ pizzeria_messages << U(response["content"])
+ response = chat_pizza_agent(pizzeria_messages)
+ puts response["content"]
+ ```
+
+ Pizzeria agent response:
+
+ > The Hawaiian pizza is definitely a unique twist on traditional toppings! You can definitely customize it to your liking. We allow two extra toppings of your choice for an additional $1 each. If you want to add any other toppings beyond that, it's $2 per topping.
+ >
+ > As for recommends, I'd say the small size is a great starting point, especially if you're trying something new like the Hawaiian pizza. The small size is $10 and it's a great bite-sized portion. But if you're looking for a bigger pie, the large size is $20 and would be a great option if you're feeding a crowd or want leftovers.
+ >
+ > Keep in mind that our pizzas are cooked fresh in 10 minutes, so it's ready when it's ready! Would you like to place an order now?
+
+ Will the customer actually buy anything now?
+
+ > I think I'd like to go with the Hawaiian pizza in the small size, so the total would be $10. And I'll take advantage of the extra topping option. I think I'll add some mushrooms to it. So, that's an extra $1 for the mushroom topping. Would that be $11 total? And do you have a pickup time available soon?
+
+ OMG, the customer bought something.
+
+ Pizzeria agent response:
+
+ > That sounds like a great choice! Yeah, the total would be $11, the small Hawaiian pizza with mushrooms. And yes, we do have pickup available shortly. It'll be ready in about 10 minutes. Cash on pickup, okay? Would you like to pay when you pick up your pizza?
+
+ Maybe these two do not know how to stop talking. The Halting Problem exists in pizza shops too.
+
  ## Development
 
  After checking out the repo, run `bin/setup` to install dependencies. Then, run `rake test` to run the tests. You can also run `bin/console` for an interactive prompt that will allow you to experiment.
data/examples/README.md ADDED
@@ -0,0 +1,120 @@
+ # Examples
+
+ ## User Chat
+
+ Chat with a pre-defined agent using the following command:
+
+ ```bash
+ bundle exec examples/user-chat.rb
+ # or
+ bundle exec examples/user-chat.rb --agent-prompt examples/agent-prompts/helloworld.yml
+ ```
+
+ There are two example agent prompts available:
+
+ - `examples/agent-prompts/helloworld.yml` (the default)
+ - `examples/agent-prompts/pizzeria-sales.yml`
+
+ At the prompt, either talk to the AI agent, or use one of these special commands:
+
+ - `exit` to exit the conversation
+ - `summary` to get a summary of the conversation so far
+
+ ### Streaming text chunks
+
+ There is also an example of streaming the conversation to the terminal as it is received from the Groq API.
+
+ It defaults to the slower `llama3-70b-8192` model so that the streaming is more noticeable.
+
+ ```bash
+ bundle exec examples/user-chat-streaming.rb --agent-prompt examples/agent-prompts/pizzeria-sales.yml
+ ```
+
+ ### Streaming useful chunks (e.g. JSON)
+
+ If the response returns a list of objects, such as a sequence of JSON objects, you can stream the chunks that make up each JSON object and process each object as soon as it is complete.
+
+ ```bash
+ bundle exec examples/streaming-to-json-objects.rb
+ ```
+
+ This will produce JSON for each planet in the solar system, one at a time. The API does not return each JSON object as a single chunk; rather, it returns tiny chunks like `{`, `"`, and `name`. But the example code [`examples/streaming-to-json-objects.rb`](examples/streaming-to-json-objects.rb) shows how you might build up JSON objects from chunks, and process each one (e.g. store it to a DB) as soon as it is complete.
+
+ The system prompt used is:
+
+ ```plain
+ Write out the names of the planets of our solar system, and a brief description of each one.
+
+ Return JSON object for each one:
+
+ { "name": "Mercury", "position": 1, "description": "Mercury is ..." }
+
+ Between each response, say "NEXT" to clearly delineate each JSON response.
+
+ Don't say anything else except the JSON objects above.
+ ```
+
+ The code in the repo uses the `NEXT` token to know when to process each JSON object.
+
+ The output will look like the following, with each JSON object printed (or saved to a DB) only when it has been completely built from chunks.
+
+ ```json
+ {"name":"Mercury","position":1,"description":"Mercury is the smallest planet in our solar system, with a highly elliptical orbit that takes it extremely close to the sun."}
+ {"name":"Venus","position":2,"description":"Venus is often called Earth's twin due to their similar size and mass, but it has a thick atmosphere that traps heat, making it the hottest planet."}
+ ```
+
+ ### Pizzeria
+
+ Run the pizzeria example with the following command:
+
+ ```bash
+ bundle exec examples/user-chat.rb --agent-prompt examples/agent-prompts/pizzeria-sales.yml
+ ```
+
+ > 🍕 Hello! Thank you for calling our pizzeria. I'm happy to help you with your inquiry. Do you have a question about our menu or would you like to place an order?
+ >
+ > 😋 What's your cheapest?
+ >
+ > 🍕 Our cheapest menu item is the Garlic Knots (6 pieces), which is served with marinara sauce and priced at $5. Would you like to add it to your order or is there something else that catches your attention?
+ >
+ > 😋 Yes. And a cheap pizza.
+ >
+ > 🍕 Our cheapest pizza option is the Veggie Pizza. It comes with tomato sauce, mozzarella, and a mix of bell peppers, onions, and mushrooms, all for $13. Would you like to order the Veggie Pizza and the Garlic Knots together?
+ >
+ > 😋 Yep
+ >
+ > 🍕 I've taken note of your order. You're ordering a Veggie Pizza and 6 Garlic Knots. Your total comes out to be $18. Would you like to pay by credit card, cash, or take any other payment method?
+ >
+ > 😋 summary
+ >
+ > Here is a summary of the current conversation:
+ >
+ > The conversation started with a customer calling the pizzeria and speaking with an AI assistant. The assistant offered to help with menu inquiries or taking an order. The customer was considering ordering from the menu. The assistant presented the cheapest menu item, Garlic Knots, and asked if the customer wanted to add it to their order. The customer may have been interested in other options as well. The assistant then presented the cheapest pizza option, the Veggie Pizza, and asked if the customer wanted to order it along with the Garlic Knots. The customer agreed, and the assistant took note of the order, which consisted of a Veggie Pizza and 6 Garlic Knots for a total of $18. The assistant asked how the customer would like to pay for their order.
+ >
+ > 😋 exit
+
+ ## Two Agents
+
+ Here is an example of two agents talking to each other, without any user input.
+
+ ### Buying Pizza
+
+ ```bash
+ bundle exec examples/groq-two-agents-chatting.rb --agent-prompt examples/agent-prompts/pizzeria-sales.yml --agent-prompt examples/agent-prompts/food-customer.yml
+ ```
+
+ This will proceed for 10 turns, with the agents taking turns to speak. Pass the `-i 15` option to increase the number of turns.
+
+ > 🍕 Hello! Welcome to Pizza Palace. Thank you for reaching out to us. How can I assist you today?
+ >
+ > 😋 Hi! Thanks for having me. I'm actually pretty hungry, so I'm looking to order some food. Can you tell me a bit about your menu? What are some of your popular items?
+ >
+ > 🍕 I'd be happy to tell you more about our menu. We have a variety of delicious options to choose from. Our most popular items include our Margherita Pizza, Pepperoni Pizza, and BBQ Chicken Pizza. The Margherita is a classic with tomato sauce, mozzarella, and fresh basil. The Pepperoni Pizza is a crowd-pleaser with a generous layer of pepperoni on top. And our BBQ Chicken Pizza has a sweet and tangy BBQ sauce, topped with chicken, onions, and cilantro.
+ >
+ > We also have some great non-pizza options, such as our Garlic Knots, which are a favorite among our customers. And for dessert, our Cannoli are a must-try - they're filled with creamy ricotta cheese and chocolate chips.
+ >
+ > What sounds good to you? Would you like me to walk you through our entire menu or is there something specific you're in the mood for?
+ >
+ > 😋 Mmm, everything sounds delicious! I think I'll go for something a bit hearty. Can you tell me more about the BBQ Chicken Pizza? What kind of chicken is used? And is the pepperoni on the Pepperoni Pizza thick-cut or thin-cut?
+ >
+ > Also, how would you recommend ordering the Garlic Knots? Are they a side dish or can I get them as part of a combo?
data/examples/agent-prompts/food-customer.yml ADDED
@@ -0,0 +1,12 @@
+ ---
+ name: "Food Customer"
+ system_prompt: |-
+   You are a hungry customer looking to order some food.
+
+   You can ask about the menu, place an order, or inquire about delivery options.
+
+   When asked about delivery, you say you'll pick up.
+   When asked about payment, you confirm you'll pay when you pick up.
+   You have $25 to spend.
+ agent_emoji: "😋"
+ can_go_first: true
data/examples/agent-prompts/helloworld.yml ADDED
@@ -0,0 +1,7 @@
+ ---
+ name: "Hello World"
+ system_prompt: |-
+   I am a friendly agent who always replies to any prompt
+   with a pleasant "Hello" and wishing them well.
+ agent_emoji: "🤖"
+ user_emoji: "👤"
data/examples/agent-prompts/pizzeria-sales.yml ADDED
@@ -0,0 +1,20 @@
+ ---
+ name: "Pizzeria Sales"
+ system_prompt: |-
+   You are a phone operator at a busy pizzeria. Your responsibilities include answering calls and online chats from customers who may ask about the menu, wish to place or change orders, or inquire about opening hours.
+
+   Here are some of our popular menu items:
+
+   <menu>
+   Margherita Pizza: Classic with tomato sauce, mozzarella, and basil - $12
+   Pepperoni Pizza: Tomato sauce, mozzarella, and a generous layer of pepperoni - $14
+   Veggie Pizza: Tomato sauce, mozzarella, and a mix of bell peppers, onions, and mushrooms - $13
+   BBQ Chicken Pizza: BBQ sauce, chicken, onions, and cilantro - $15
+   Garlic Knots (6 pieces): Served with marinara sauce - $5
+   Cannoli: Classic Sicilian dessert filled with sweet ricotta cream - $4 each
+   </menu>
+
+   Your goal is to provide accurate information, confirm order details, and ensure a pleasant customer experience. Please maintain a polite and professional tone, be prompt in your responses, and ensure accuracy in order transmission.
+ agent_emoji: "🍕"
+ user_emoji: "😋"
+ can_go_first: true
data/examples/groq-two-agents-chatting.rb ADDED
@@ -0,0 +1,124 @@
+ #!/usr/bin/env ruby
+ #
+ # This is a variation of groq-user-chat.rb but without any user prompting.
+ # Just two agents chatting with each other.
+
+ require "optparse"
+ require "groq"
+ require "yaml"
+
+ include Groq::Helpers
+
+ @options = {
+   model: "llama3-8b-8192",
+   # model: "llama3-70b-8192",
+   agent_prompt_paths: [],
+   timeout: 20,
+   interaction_count: 10 # total count of interactions between agents
+ }
+ OptionParser.new do |opts|
+   opts.banner = "Usage: ruby script.rb [options]"
+
+   opts.on("-m", "--model MODEL", "Model name") do |v|
+     @options[:model] = v
+   end
+
+   opts.on("-a", "--agent-prompt PATH", "Path to an agent prompt file") do |v|
+     @options[:agent_prompt_paths] << v
+   end
+
+   opts.on("-t", "--timeout TIMEOUT", "Timeout in seconds") do |v|
+     @options[:timeout] = v.to_i
+   end
+
+   opts.on("-d", "--debug", "Enable debug mode") do |v|
+     @options[:debug] = v
+   end
+
+   opts.on("-i", "--interaction-count COUNT", "Total count of interactions between agents") do |v|
+     @options[:interaction_count] = v.to_i
+   end
+ end.parse!
+
+ raise "Need two --agent-prompt paths" if @options[:agent_prompt_paths]&.length&.to_i != 2
+
+ def debug?
+   @options[:debug]
+ end
+
+ # Will be instantiated from the agent prompt file
+ class Agent
+   def initialize(args = {})
+     args.each do |k, v|
+       instance_variable_set(:"@#{k}", v)
+     end
+     @messages = [S(@system_prompt)]
+   end
+   attr_reader :messages
+   attr_reader :name, :can_go_first, :user_emoji, :agent_emoji, :system_prompt
+   def can_go_first?
+     @can_go_first
+   end
+
+   def self.load_from_file(path)
+     new(YAML.load_file(path))
+   end
+ end
+
+ # Read the agent prompt from the file
+ agents = @options[:agent_prompt_paths].map do |agent_prompt_path|
+   Agent.load_from_file(agent_prompt_path)
+ end
+ go_first = agents.find { |agent| agent.can_go_first? } || agents.first
+
+ # check that each agent contains a system prompt
+ agents.each do |agent|
+   raise "Agent #{agent.name} is missing a system prompt" if agent.system_prompt.nil?
+ end
+
+ # Initialize the Groq client
+ @client = Groq::Client.new(model_id: @options[:model], request_timeout: @options[:timeout]) do |f|
+   if debug?
+     require "logger"
+
+     # Create a logger instance
+     logger = Logger.new($stdout)
+     logger.level = Logger::DEBUG
+
+     f.response :logger, logger, bodies: true # Log request and response bodies
+   end
+ end
+
+ puts "Welcome to a conversation between #{agents.map(&:name).join(", ")}. Our first speaker will be #{go_first.name}."
+ puts "You can quit by typing 'exit'."
+
+ agent_speaking_index = agents.index(go_first)
+ loop_count = 0
+
+ loop do
+   speaking_agent = agents[agent_speaking_index]
+   # Show speaking agent emoji immediately to indicate request going to Groq API
+   print("#{speaking_agent.agent_emoji} ")
+
+   # Use Groq to generate a response
+   response = @client.chat(speaking_agent.messages)
+
+   # Finish the speaking agent line on screen with message response
+   puts(message = response.dig("content"))
+
+   # speaking agent tracks its own message as the Assistant
+   speaking_agent.messages << A(message)
+
+   # other agent tracks the message as the User
+   other_agents = agents.reject { |agent| agent == speaking_agent }
+   other_agents.each do |agent|
+     agent.messages << U(message)
+   end
+
+   agent_speaking_index = (agent_speaking_index + 1) % agents.length
+   loop_count += 1
+   break if loop_count > @options[:interaction_count]
+ rescue Faraday::TooManyRequestsError
+   warn "...\n\nGroq API error: too many requests. Exiting."
+   exit 1
+ end
data/examples/streaming-to-json-objects.rb ADDED
@@ -0,0 +1,87 @@
+ #!/usr/bin/env ruby
+
+ require "optparse"
+ require "groq"
+ require "yaml"
+
+ include Groq::Helpers
+
+ @options = {
+   model: "llama3-70b-8192",
+   timeout: 20
+ }
+ OptionParser.new do |opts|
+   opts.banner = "Usage: ruby script.rb [options]"
+
+   opts.on("-m", "--model MODEL", "Model name") do |v|
+     @options[:model] = v
+   end
+
+   opts.on("-t", "--timeout TIMEOUT", "Timeout in seconds") do |v|
+     @options[:timeout] = v.to_i
+   end
+
+   opts.on("-d", "--debug", "Enable debug mode") do |v|
+     @options[:debug] = v
+   end
+ end.parse!
+
+ raise "Missing --model option" if @options[:model].nil?
+
+ # Initialize the Groq client
+ @client = Groq::Client.new(model_id: @options[:model], request_timeout: @options[:timeout]) do |f|
+   if @options[:debug]
+     require "logger"
+
+     # Create a logger instance
+     logger = Logger.new($stdout)
+     logger.level = Logger::DEBUG
+
+     f.response :logger, logger, bodies: true # Log request and response bodies
+   end
+ end
+
+ prompt = <<~TEXT
+   Write out the names of the planets of our solar system, and a brief description of each one.
+
+   Return JSON object for each one:
+
+   { "name": "Mercury", "position": 1, "description": "Mercury is ..." }
+
+   Between each response, say "NEXT" to clearly delineate each JSON response.
+
+   Don't say anything else except the JSON objects above.
+ TEXT
+
+ # Handle each JSON object once it has been fully streamed
+
+ class PlanetStreamer
+   def initialize
+     @buffer = ""
+   end
+
+   def call(content)
+     if !content || content.include?("NEXT")
+       json = JSON.parse(@buffer)
+
+       # do something with JSON, e.g. save to database
+       puts json.to_json
+
+       # reset buffer
+       @buffer = ""
+       return
+     end
+     # if @buffer is empty and content is not the JSON start {, then ignore and return
+     if @buffer.empty? && !content.start_with?("{")
+       return
+     end
+
+     # build JSON
+     @buffer << content
+   end
+ end
+
+ streamer = PlanetStreamer.new
+
+ @client.chat([S(prompt)], stream: streamer)
+ puts
data/examples/user-chat-streaming.rb ADDED
@@ -0,0 +1,128 @@
+ #!/usr/bin/env ruby
+
+ require "optparse"
+ require "groq"
+ require "yaml"
+
+ include Groq::Helpers
+
+ @options = {
+   model: "llama3-70b-8192",
+   agent_prompt_path: File.join(File.dirname(__FILE__), "agent-prompts/helloworld.yml"),
+   timeout: 20
+ }
+ OptionParser.new do |opts|
+   opts.banner = "Usage: ruby script.rb [options]"
+
+   opts.on("-m", "--model MODEL", "Model name") do |v|
+     @options[:model] = v
+   end
+
+   opts.on("-a", "--agent-prompt PATH", "Path to agent prompt file") do |v|
+     @options[:agent_prompt_path] = v
+   end
+
+   opts.on("-t", "--timeout TIMEOUT", "Timeout in seconds") do |v|
+     @options[:timeout] = v.to_i
+   end
+
+   opts.on("-d", "--debug", "Enable debug mode") do |v|
+     @options[:debug] = v
+   end
+ end.parse!
+
+ raise "Missing --model option" if @options[:model].nil?
+ raise "Missing --agent-prompt option" if @options[:agent_prompt_path].nil?
+
+ # Read the agent prompt from the file
+ agent_prompt = YAML.load_file(@options[:agent_prompt_path])
+ user_emoji = agent_prompt["user_emoji"]
+ agent_emoji = agent_prompt["agent_emoji"]
+ system_prompt = agent_prompt["system_prompt"] || agent_prompt["system"]
+ can_go_first = agent_prompt["can_go_first"]
+
+ # Initialize the Groq client
+ @client = Groq::Client.new(model_id: @options[:model], request_timeout: @options[:timeout]) do |f|
+   if @options[:debug]
+     require "logger"
+
+     # Create a logger instance
+     logger = Logger.new($stdout)
+     logger.level = Logger::DEBUG
+
+     f.response :logger, logger, bodies: true # Log request and response bodies
+   end
+ end
+
+ puts "Welcome to the AI assistant! I'll respond to your queries."
+ puts "You can quit by typing 'exit'."
+
+ def produce_summary(messages)
+   combined = messages.map do |message|
+     if message["role"] == "user"
+       "User: #{message["content"]}"
+     else
+       "Assistant: #{message["content"]}"
+     end
+   end.join("\n")
+   response = @client.chat([
+     S("You are excellent at reading a discourse between a human and an AI assistant and summarising the current conversation."),
+     U("Here is the current conversation:\n\n------\n\n#{combined}")
+   ])
+   puts response["content"]
+ end
+
+ messages = [S(system_prompt)]
+
+ if can_go_first
+   print "#{agent_emoji} "
+   message_bits = []
+   response = @client.chat(messages) do |content|
+     # content == nil on last message; and "" on first message
+     next unless content
+     print(content)
+     message_bits << content
+   end
+   puts
+   messages << A(message_bits.join(""))
+ end
+
+ class MessageBits
+   def initialize(emoji)
+     print "#{emoji} "
+     @bits = []
+   end
+
+   def call(content)
+     if content.nil?
+       puts
+     else
+       print(content)
+       @bits << content
+     end
+   end
+
+   def to_assistant_message
+     Assistant(@bits.join(""))
+   end
+ end
+
+ loop do
+   print "#{user_emoji} "
+   user_input = gets.chomp
+
+   break if user_input.downcase == "exit"
+
+   # produce summary
+   if user_input.downcase == "summary"
+     produce_summary(messages)
+     next
+   end
+
+   messages << U(user_input)
+
+   # Use Groq to generate a response
+   message_bits = MessageBits.new(agent_emoji)
+   @client.chat(messages, stream: message_bits)
+   messages << message_bits.to_assistant_message
+ end
data/examples/user-chat.rb ADDED
@@ -0,0 +1,105 @@
+ #!/usr/bin/env ruby
+
+ require "optparse"
+ require "groq"
+ require "yaml"
+
+ include Groq::Helpers
+
+ @options = {
+   model: "llama3-8b-8192",
+   # model: "llama3-70b-8192",
+   agent_prompt_path: File.join(File.dirname(__FILE__), "agent-prompts/helloworld.yml"),
+   timeout: 20
+ }
+ OptionParser.new do |opts|
+   opts.banner = "Usage: ruby script.rb [options]"
+
+   opts.on("-m", "--model MODEL", "Model name") do |v|
+     @options[:model] = v
+   end
+
+   opts.on("-a", "--agent-prompt PATH", "Path to agent prompt file") do |v|
+     @options[:agent_prompt_path] = v
+   end
+
+   opts.on("-t", "--timeout TIMEOUT", "Timeout in seconds") do |v|
+     @options[:timeout] = v.to_i
+   end
+
+   opts.on("-d", "--debug", "Enable debug mode") do |v|
+     @options[:debug] = v
+   end
+ end.parse!
+
+ raise "Missing --model option" if @options[:model].nil?
+ raise "Missing --agent-prompt option" if @options[:agent_prompt_path].nil?
+
+ # Read the agent prompt from the file
+ agent_prompt = YAML.load_file(@options[:agent_prompt_path])
+ user_emoji = agent_prompt["user_emoji"]
+ agent_emoji = agent_prompt["agent_emoji"]
+ system_prompt = agent_prompt["system_prompt"] || agent_prompt["system"]
+ can_go_first = agent_prompt["can_go_first"]
+
+ # Initialize the Groq client
+ @client = Groq::Client.new(model_id: @options[:model], request_timeout: @options[:timeout]) do |f|
+   if @options[:debug]
+     require "logger"
+
+     # Create a logger instance
+     logger = Logger.new($stdout)
+     logger.level = Logger::DEBUG
+
+     f.response :logger, logger, bodies: true # Log request and response bodies
+   end
+ end
+
+ puts "Welcome to the AI assistant! I'll respond to your queries."
+ puts "You can quit by typing 'exit'."
+
+ def produce_summary(messages)
+   combined = messages.map do |message|
+     if message["role"] == "user"
+       "User: #{message["content"]}"
+     else
+       "Assistant: #{message["content"]}"
+     end
+   end.join("\n")
+   response = @client.chat([
+     S("You are excellent at reading a discourse between a human and an AI assistant and summarising the current conversation."),
+     U("Here is the current conversation:\n\n------\n\n#{combined}")
+   ])
+   puts response["content"]
+ end
+
+ messages = [S(system_prompt)]
+
+ if can_go_first
+   response = @client.chat(messages)
+   puts "#{agent_emoji} #{response["content"]}"
+   messages << response
+ end
+
+ loop do
+   print "#{user_emoji} "
+   user_input = gets.chomp
+
+   break if user_input.downcase == "exit"
+
+   # produce summary
+   if user_input.downcase == "summary"
+     produce_summary(messages)
+     next
+   end
+
+   messages << U(user_input)
+
+   # Use Groq to generate a response
+   response = @client.chat(messages)
+
+   message = response.dig("content")
+   puts "#{agent_emoji} #{message}"
+
+   messages << response
+ end
data/lib/generators/groq/install_generator.rb ADDED
@@ -0,0 +1,20 @@
+ require "rails/generators/base"
+
+ module Groq
+   module Generators
+     class InstallGenerator < Rails::Generators::Base
+       source_root File.expand_path("templates", __dir__)
+
+       def create_groq_init_file
+         create_file "config/initializers/groq.rb", <<~RUBY
+           # frozen_string_literal: true
+
+           Groq.configure do |config|
+             config.api_key = ENV["GROQ_API_KEY"]
+             config.model_id = "llama3-70b-8192"
+           end
+         RUBY
+       end
+     end
+   end
+ end
data/lib/groq/client.rb CHANGED
@@ -7,6 +7,7 @@ class Groq::Client
    model_id
    max_tokens
    temperature
+   request_timeout
  ].freeze
  attr_reader(*CONFIG_KEYS, :faraday_middleware)
 
@@ -21,8 +22,7 @@ class Groq::Client
    @faraday_middleware = faraday_middleware
  end
 
- # TODO: support stream: true; or &stream block
- def chat(messages, model_id: nil, tools: nil, max_tokens: nil, temperature: nil, json: false)
+ def chat(messages, model_id: nil, tools: nil, tool_choice: nil, max_tokens: nil, temperature: nil, json: false, stream: nil, &stream_chunk)
    unless messages.is_a?(Array) || messages.is_a?(String)
      raise ArgumentError, "require messages to be an Array or String"
    end
@@ -33,45 +33,128 @@ class Groq::Client
 
    model_id ||= @model_id
 
+   if stream_chunk ||= stream
+     require "event_stream_parser"
+   end
+
    body = {
      model: model_id,
      messages: messages,
      tools: tools,
+     tool_choice: tool_choice,
      max_tokens: max_tokens || @max_tokens,
      temperature: temperature || @temperature,
-     response_format: json ? {type: "json_object"} : nil
+     response_format: json ? {type: "json_object"} : nil,
+     stream_chunk: stream_chunk
    }.compact
    response = post(path: "/openai/v1/chat/completions", body: body)
-   if response.status == 200
-     response.body.dig("choices", 0, "message")
-   else
-     # TODO: send the response.body back in Error object
-     puts "Error: #{response.status}"
-     pp response.body
-     raise Error, "Request failed with status #{response.status}: #{response.body}"
+   # Configured to raise exceptions on 4xx/5xx responses
+   if response.body.is_a?(Hash)
+     return response.body.dig("choices", 0, "message")
    end
+   response.body
  end
 
  def get(path:)
-   client.get do |req|
-     req.url path
-     req.headers["Authorization"] = "Bearer #{@api_key}"
+   client.get(path) do |req|
+     req.headers = headers
    end
  end
 
  def post(path:, body:)
-   client.post do |req|
-     req.url path
-     req.headers["Authorization"] = "Bearer #{@api_key}"
-     req.body = body
+   client.post(path) do |req|
+     configure_json_post_request(req, body)
    end
  end
 
  def client
-   @client ||= Faraday.new(url: @api_url) do |f|
-     f.request :json # automatically encode the request body as JSON
-     f.response :json # automatically decode JSON responses
-     f.adapter Faraday.default_adapter
+   @client ||= begin
+     connection = Faraday.new(url: @api_url) do |f|
+       f.request :json # automatically encode the request body as JSON
+       f.response :json # automatically decode JSON responses
+       f.response :raise_error # raise exceptions on 4xx/5xx responses
+
+       f.adapter Faraday.default_adapter
+       f.options[:timeout] = request_timeout
+     end
+     @faraday_middleware&.call(connection)
+
+     connection
+   end
+ end
+
+ private
+
+ def headers
+   {
+     "Authorization" => "Bearer #{@api_key}",
+     "User-Agent" => "groq-ruby/#{Groq::VERSION}"
+   }
+ end
+
+ #
+ # Code/ideas borrowed from lib/openai/http.rb in https://github.com/alexrudall/ruby-openai/
+ #
+
+ def configure_json_post_request(req, body)
+   req_body = body.dup
+
+   if body[:stream_chunk].respond_to?(:call)
+     req.options.on_data = to_json_stream(user_proc: body[:stream_chunk])
+     req_body[:stream] = true # Tell Groq to stream
+     req_body.delete(:stream_chunk)
+   elsif body[:stream_chunk]
+     raise ArgumentError, "The stream_chunk parameter must be a Proc or have a #call method"
+   end
+
+   req.headers = headers
+   req.body = req_body
+ end
+
+ # Given a proc, returns an outer proc that can be used to iterate over a JSON stream of chunks.
+ # For each chunk, the inner user_proc is called giving it the JSON object. The JSON object could
+ # be a data object or an error object as described in the OpenAI API documentation.
+ #
+ # @param user_proc [Proc] The inner proc to call for each JSON object in the chunk.
+ # @return [Proc] An outer proc that iterates over a raw stream, converting it to JSON.
+ def to_json_stream(user_proc:)
+   parser = EventStreamParser::Parser.new
+
+   proc do |chunk, _bytes, env|
+     if env && env.status != 200
+       raise_error = Faraday::Response::RaiseError.new
+       raise_error.on_complete(env.merge(body: try_parse_json(chunk)))
+     end
+
+     parser.feed(chunk) do |_type, data|
+       next if data == "[DONE]"
+       chunk = JSON.parse(data)
+       delta = chunk.dig("choices", 0, "delta")
+       content = delta.dig("content")
+       if user_proc.is_a?(Proc)
+         # if user_proc takes one argument, pass the content
+         if user_proc.arity == 1
+           user_proc.call(content)
+         else
+           user_proc.call(content, chunk)
+         end
+       elsif user_proc.respond_to?(:call)
+         # if call method takes one argument, pass the content
+         if user_proc.method(:call).arity == 1
+           user_proc.call(content)
+         else
+           user_proc.call(content, chunk)
+         end
+       else
+         raise ArgumentError, "The stream_chunk parameter must be a Proc or have a #call method"
+       end
+     end
+   end
    end
  end
+
+ def try_parse_json(maybe_json)
+   JSON.parse(maybe_json)
+ rescue JSON::ParserError
+   maybe_json
+ end
  end
data/lib/groq/helpers.rb CHANGED
@@ -13,7 +13,10 @@ module Groq::Helpers
  end
  alias_method :Assistant, :A
 
- def S(content)
+ def S(content, json_schema: nil)
+   if json_schema
+     content += "\nJSON must use schema: #{json_schema}"
+   end
    {role: "system", content: content}
  end
  alias_method :System, :S
data/lib/groq/version.rb CHANGED
@@ -1,5 +1,5 @@
  # frozen_string_literal: true
 
  module Groq
-   VERSION = "0.2.0"
+   VERSION = "0.3.1"
  end
metadata CHANGED
@@ -1,14 +1,14 @@
  --- !ruby/object:Gem::Specification
  name: groq
  version: !ruby/object:Gem::Version
-   version: 0.2.0
+   version: 0.3.1
  platform: ruby
  authors:
  - Dr Nic Williams
  autorequire:
  bindir: exe
  cert_chain: []
- date: 2024-04-20 00:00:00.000000000 Z
+ date: 2024-05-05 00:00:00.000000000 Z
  dependencies:
  - !ruby/object:Gem::Dependency
    name: faraday
@@ -52,6 +52,20 @@ dependencies:
    - - ">"
      - !ruby/object:Gem::Version
        version: '5'
+ - !ruby/object:Gem::Dependency
+   name: event_stream_parser
+   requirement: !ruby/object:Gem::Requirement
+     requirements:
+     - - "~>"
+       - !ruby/object:Gem::Version
+         version: '1.0'
+   type: :runtime
+   prerelease: false
+   version_requirements: !ruby/object:Gem::Requirement
+     requirements:
+     - - "~>"
+       - !ruby/object:Gem::Version
+         version: '1.0'
  - !ruby/object:Gem::Dependency
    name: vcr
    requirement: !ruby/object:Gem::Requirement
@@ -80,6 +94,20 @@ dependencies:
    - - "~>"
      - !ruby/object:Gem::Version
        version: '3.0'
+ - !ruby/object:Gem::Dependency
+   name: dry-schema
+   requirement: !ruby/object:Gem::Requirement
+     requirements:
+     - - "~>"
+       - !ruby/object:Gem::Version
+         version: '1.13'
+   type: :development
+   prerelease: false
+   version_requirements: !ruby/object:Gem::Requirement
+     requirements:
+     - - "~>"
+       - !ruby/object:Gem::Version
+         version: '1.13'
  description: Client library for Groq API for fast LLM inference.
  email:
  - drnicwilliams@gmail.com
@@ -94,6 +122,15 @@ files:
  - README.md
  - Rakefile
  - docs/images/groq-speed-price-20240421.png
+ - examples/README.md
+ - examples/agent-prompts/food-customer.yml
+ - examples/agent-prompts/helloworld.yml
+ - examples/agent-prompts/pizzeria-sales.yml
+ - examples/groq-two-agents-chatting.rb
+ - examples/streaming-to-json-objects.rb
+ - examples/user-chat-streaming.rb
+ - examples/user-chat.rb
+ - lib/generators/groq/install_generator.rb
  - lib/groq-ruby.rb
  - lib/groq.rb
  - lib/groq/client.rb