fh-pydantic-form 0.2.5__py3-none-any.whl → 0.3.0__py3-none-any.whl

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.


@@ -1,6 +1,6 @@
  Metadata-Version: 2.4
  Name: fh-pydantic-form
- Version: 0.2.5
+ Version: 0.3.0
  Summary: a library to turn any pydantic BaseModel object into a fasthtml/monsterui input form
  Project-URL: Homepage, https://github.com/Marcura/fh-pydantic-form
  Project-URL: Repository, https://github.com/Marcura/fh-pydantic-form
@@ -55,10 +55,12 @@ Description-Content-Type: text/markdown
  10. [Disabling & Excluding Fields](#disabling--excluding-fields)
  11. [Refreshing & Resetting](#refreshing--resetting)
  12. [Label Colors](#label-colors)
- 13. [Schema Drift Resilience](#schema-drift-resilience)
- 14. [Custom Renderers](#custom-renderers)
- 15. [API Reference](#api-reference)
- 16. [Contributing](#contributing)
+ 13. [Metrics & Highlighting](#metrics--highlighting)
+ 14. [ComparisonForm](#comparisonform)
+ 15. [Schema Drift Resilience](#schema-drift-resilience)
+ 16. [Custom Renderers](#custom-renderers)
+ 17. [API Reference](#api-reference)
+ 18. [Contributing](#contributing)

  ## Purpose

@@ -522,6 +524,333 @@ form_renderer = PydanticForm(

  This can be useful, for example, for highlighting the values of different fields in a PDF with highlight colors that match the form input label colors.

+ ## Metrics & Highlighting
+
+ `fh-pydantic-form` provides a powerful metrics system for visual highlighting of form fields based on extraction quality scores and confidence assessments. This is particularly useful for evaluating LLM structured output extraction, comparing generated data against ground truth, and building quality assessment interfaces.
+
+ <img width="796" alt="image" src="https://github.com/user-attachments/assets/df2f8623-991d-45b1-80d8-5c7239187a74" />
+
+
+ ### Basic Metrics Usage
+
+ ```python
+ from fh_pydantic_form import PydanticForm
+
+ # Define metrics for your form fields
+ metrics_dict = {
+     "title": {
+         "metric": 0.95,
+         "comment": "Excellent title quality - clear and engaging"
+     },
+     "rating": {
+         "metric": 0.3,
+         "comment": "Low rating needs attention"
+     },
+     "status": {
+         "metric": 0.0,
+         "comment": "Critical status issue - requires immediate review"
+     }
+ }
+
+ # Create form with metrics
+ form_renderer = PydanticForm(
+     "my_form",
+     MyModel,
+     metrics_dict=metrics_dict
+ )
+ ```
+
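The example above assumes a model with the three scored fields; a minimal hypothetical `MyModel` (not part of the library) could look like this:

```python
from pydantic import BaseModel

class MyModel(BaseModel):
    title: str = ""
    rating: int = 0
    status: str = "draft"
```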
+ ### Metrics Dictionary Structure
+
+ Each field can have the following metrics properties:
+
+ | Property | Type | Description |
+ |----------|------|-------------|
+ | `metric` | `float` or `str` | Numeric score (0.0-1.0) or string assessment |
+ | `color` | `str` | Custom color (overrides automatic color-coding) |
+ | `comment` | `str` | Tooltip text shown on hover |
+
+ ### Automatic Color Coding
+
+ Numeric metrics are automatically color-coded:
+
+ - **1.0**: Bright green (perfect score)
+ - **0.5-1.0**: Medium green (good range)
+ - **0.0-0.5**: Dark red (needs attention)
+ - **0.0**: Bright red (critical issue)
+
+ ```python
+ metrics_dict = {
+     "field1": {"metric": 1.0, "comment": "Perfect!"},   # Bright green
+     "field2": {"metric": 0.8, "comment": "Very good"},  # Medium green
+     "field3": {"metric": 0.3, "comment": "Needs work"}, # Dark red
+     "field4": {"metric": 0.0, "comment": "Critical"},   # Bright red
+ }
+ ```
+
+ ### Custom Colors
+
+ Override automatic colors with custom values:
+
+ ```python
+ metrics_dict = {
+     "status": {
+         "metric": 0.0,
+         "color": "purple",  # Custom color overrides red
+         "comment": "Status requires special attention"
+     },
+     "priority": {
+         "metric": 1.0,
+         "color": "#FF6B35",  # Custom hex color
+         "comment": "High priority with custom highlight"
+     }
+ }
+ ```
+
+ ### String Metrics
+
+ Use string values for qualitative assessments:
+
+ ```python
+ metrics_dict = {
+     "validation_status": {
+         "metric": "NEEDS_REVIEW",
+         "color": "#F59E0B",  # Amber color
+         "comment": "Requires human review"
+     },
+     "data_quality": {
+         "metric": "EXCELLENT",
+         "color": "#10B981",  # Green color
+         "comment": "Data quality exceeds standards"
+     }
+ }
+ ```
+
+ ### Nested Field Metrics
+
+ Metrics are also supported for nested objects and list items, addressed with dotted paths and list indices:
+
+ ```python
+ metrics_dict = {
+     # Nested object fields
+     "author.name": {
+         "metric": 0.95,
+         "comment": "Author name perfectly formatted"
+     },
+     "author.email": {
+         "metric": 0.9,
+         "comment": "Email format excellent"
+     },
+
+     # List item metrics
+     "tags[0]": {
+         "metric": 1.0,
+         "comment": "First tag is perfect"
+     },
+     "tags[1]": {
+         "metric": 0.8,
+         "comment": "Second tag very good"
+     },
+
+     # Complex nested paths
+     "author.addresses[0].street": {
+         "metric": 1.0,
+         "comment": "Street address perfectly formatted"
+     },
+     "author.addresses[1].city": {
+         "metric": 0.1,
+         "color": "teal",
+         "comment": "City has verification problems"
+     }
+ }
+ ```
+
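For orientation, the paths above correspond to a model shaped roughly like this (hypothetical models, not shipped with the package):

```python
from typing import List
from pydantic import BaseModel, Field

class Address(BaseModel):
    street: str = ""
    city: str = ""

class Author(BaseModel):
    name: str = ""
    email: str = ""
    addresses: List[Address] = Field(default_factory=list)

class Article(BaseModel):
    author: Author = Field(default_factory=Author)
    tags: List[str] = Field(default_factory=list)
```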
+ ### Practical Use Cases
+
+ **LLM Structured Output Evaluation:**
+ ```python
+ # Evaluate LLM extraction quality against ground truth
+ extraction_metrics = {
+     "product.name": {
+         "metric": 0.9,
+         "comment": "Name extracted with minor formatting issue: missing space"
+     },
+     "product.category": {
+         "metric": 0.0,
+         "comment": "Critical error: LLM misclassified category as Electronics instead of Sports"
+     },
+     "key_features": {
+         "metric": 0.6,
+         "comment": "LLM missed 2 of 5 key features from source text"
+     },
+     "extraction_confidence": {
+         "metric": 1.0,
+         "comment": "LLM confidence score accurately reflects actual performance"
+     }
+ }
+ ```
+
+ **Document Processing Quality:**
+ ```python
+ # Highlight extraction quality from documents
+ doc_extraction_metrics = {
+     "invoice_number": {
+         "metric": 1.0,
+         "comment": "Invoice number perfectly extracted from PDF"
+     },
+     "line_items": {
+         "metric": 0.75,
+         "comment": "3/4 line items extracted correctly"
+     },
+     "total_amount": {
+         "metric": 0.0,
+         "comment": "Amount extraction failed - currency symbol confusion"
+     }
+ }
+ ```
+
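Metrics dictionaries like these can also be generated programmatically by scoring an extracted model instance against ground truth. A minimal sketch, assuming Pydantic v2 and a simple exact-match rule (the helper is illustrative, not part of the library):

```python
from pydantic import BaseModel

def exact_match_metrics(predicted: BaseModel, truth: BaseModel) -> dict:
    """Score each top-level field: 1.0 on exact match, 0.0 otherwise."""
    metrics = {}
    for field_name in type(truth).model_fields:
        pred_value = getattr(predicted, field_name)
        true_value = getattr(truth, field_name)
        if pred_value == true_value:
            metrics[field_name] = {"metric": 1.0, "comment": "Matches ground truth"}
        else:
            metrics[field_name] = {
                "metric": 0.0,
                "comment": f"Expected {true_value!r}, got {pred_value!r}",
            }
    return metrics
```

The resulting dictionary can be passed directly as `metrics_dict` when constructing a `PydanticForm`.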
+ See `examples/metrics_example.py` for a comprehensive demonstration of all metrics features.
+
+ ## ComparisonForm
+
+ The `ComparisonForm` component provides a side-by-side comparison of two related forms. It is ideal for evaluating LLM structured output against ground truth, correcting annotations, and comparing extraction results.
+
+
+ <img width="1177" alt="image" src="https://github.com/user-attachments/assets/75020059-0d4d-4519-9c71-70a082d3242e" />
+
+ ### Basic Usage
+
+ ```python
+ from fh_pydantic_form import PydanticForm, ComparisonForm
+
+ # Create two forms to compare
+ left_form = PydanticForm(
+     "ground_truth",
+     ProductModel,
+     initial_values=annotated_ground_truth,
+     disabled=False  # Editable for annotation correction
+ )
+
+ right_form = PydanticForm(
+     "llm_output",
+     ProductModel,
+     initial_values=llm_extracted_data,
+     disabled=True,  # Read-only LLM output
+     metrics_dict=extraction_quality_metrics
+ )
+
+ # Create comparison form
+ comparison_form = ComparisonForm(
+     name="extraction_evaluation",
+     left_form=left_form,
+     right_form=right_form,
+     left_label="📝 Ground Truth (Editable)",
+     right_label="🤖 LLM Output (with Quality Scores)"
+ )
+ ```
+
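For concreteness, the names used above could be filled in along these lines (hypothetical model and sample data, mirroring the extraction example from the metrics section):

```python
from typing import List
from pydantic import BaseModel, Field

class ProductModel(BaseModel):
    name: str = ""
    category: str = ""
    key_features: List[str] = Field(default_factory=list)

annotated_ground_truth = ProductModel(
    name="Trail Running Shoe",
    category="Sports",
    key_features=["lightweight", "waterproof"],
)
llm_extracted_data = ProductModel(
    name="TrailRunning Shoe",
    category="Electronics",
    key_features=["lightweight"],
)
extraction_quality_metrics = {
    "name": {"metric": 0.9, "comment": "Minor formatting issue: missing space"},
    "category": {"metric": 0.0, "comment": "Misclassified as Electronics instead of Sports"},
    "key_features": {"metric": 0.5, "comment": "Only 1 of 2 features extracted"},
}
```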
+ ### Required JavaScript
+
+ Include the comparison form JavaScript in your app headers:
+
+ ```python
+ from fh_pydantic_form import comparison_form_js
+
+ app, rt = fh.fast_app(
+     hdrs=[
+         mui.Theme.blue.headers(),
+         comparison_form_js(),  # Required for comparison forms
+     ],
+     pico=False,
+     live=True,
+ )
+ ```
+
+ ### Complete Example
+
+ ```python
+ @rt("/")
+ def get():
+     return fh.Div(
+         mui.Container(
+             mui.Card(
+                 mui.CardHeader(
+                     fh.H1("LLM Extraction Evaluation")
+                 ),
+                 mui.CardBody(
+                     # Render the comparison form
+                     comparison_form.form_wrapper(
+                         fh.Div(
+                             comparison_form.render_inputs(),
+
+                             # Action buttons
+                             fh.Div(
+                                 mui.Button(
+                                     "Update Ground Truth",
+                                     type="submit",
+                                     hx_post="/update_ground_truth",
+                                     hx_target="#result"
+                                 ),
+                                 comparison_form.left_reset_button("Reset Left"),
+                                 comparison_form.left_refresh_button("Refresh Left"),
+                                 cls="mt-4 flex gap-2"
+                             ),
+
+                             fh.Div(id="result", cls="mt-4")
+                         )
+                     )
+                 )
+             )
+         )
+     )
+
+ @rt("/update_ground_truth")
+ async def post_update_ground_truth(req):
+     # Validate left form (ground truth side)
+     validated = await comparison_form.left_form.model_validate_request(req)
+
+     # Process the ground truth update
+     return success_response(validated)
+
+ # Register routes for both forms
+ comparison_form.register_routes(app)
+ ```
+
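`success_response` above is left to the application; a minimal hypothetical version could simply confirm the update and echo the validated model:

```python
def success_response(validated):
    # Hypothetical helper: confirm the update and show the validated ground truth
    return fh.Div(
        fh.P("Ground truth updated."),
        fh.Pre(validated.model_dump_json(indent=2)),
    )
```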
+ ### Key Features
+ - **Aligned Fields**: Input fields are horizontally aligned for easy comparison
+ - **Synchronized Accordions**: Expanding/collapsing sections syncs between both forms
+ - **Independent Controls**: Separate refresh and reset buttons for each side
+ - **Metrics Integration**: Right side typically shows LLM output quality scores
+ - **Flexible Layout**: Responsive design works on desktop and mobile
+ - **Form Validation**: Standard validation works with either form
+
+ ### Common Patterns
+
+ **LLM Output Evaluation:**
+ ```python
+ # Left: Editable ground truth
+ # Right: Read-only LLM output with extraction quality metrics
+ truth_form = PydanticForm(..., disabled=False, metrics_dict={})
+ llm_form = PydanticForm(..., disabled=True, metrics_dict=extraction_metrics)
+ ```
+
+ **Document Extraction Comparison:**
+ ```python
+ # Left: Manual annotation
+ # Right: Automated LLM extraction
+ manual_form = PydanticForm(..., initial_values=manual_annotation)
+ auto_form = PydanticForm(..., initial_values=llm_extraction, metrics_dict=quality_scores)
+ ```
+
+ **Annotation Correction Workflow:**
+ ```python
+ # Left: Correctable ground truth
+ # Right: LLM output with confidence scores
+ ground_truth_form = PydanticForm(..., disabled=False)
+ llm_output_form = PydanticForm(..., disabled=True, metrics_dict=confidence_scores)
+ ```
+
+ See `examples/comparison_example.py` for a complete LLM extraction evaluation interface demonstration.

  ## Setting Initial Values

@@ -650,8 +979,19 @@ form_renderer = PydanticForm(
  | `label_colors` | `Optional[Dict[str, str]]` | `None` | Mapping of field names to CSS colors or Tailwind classes |
  | `exclude_fields` | `Optional[List[str]]` | `None` | List of field names to exclude from rendering (auto-injected on submission) |
  | `spacing` | `SpacingValue` | `"normal"` | Spacing theme: `"normal"`, `"compact"`, or `SpacingTheme` enum |
+ | `metrics_dict` | `Optional[Dict[str, Dict]]` | `None` | Field metrics for highlighting and tooltips |

- ### Key Methods
+ ### ComparisonForm Constructor
+
+ | Parameter | Type | Default | Description |
+ |-----------|------|---------|-------------|
+ | `name` | `str` | Required | Unique identifier for the comparison form |
+ | `left_form` | `PydanticForm` | Required | Form to display on the left side |
+ | `right_form` | `PydanticForm` | Required | Form to display on the right side |
+ | `left_label` | `str` | `"Left"` | Label for the left form |
+ | `right_label` | `str` | `"Right"` | Label for the right form |
+
+ ### PydanticForm Methods

  | Method | Purpose |
  |--------|---------|
@@ -662,11 +1002,24 @@ form_renderer = PydanticForm(
  | `parse(form_dict)` | Parse raw form data into model-compatible dictionary |
  | `model_validate_request(req)` | Extract, parse, and validate form data from request |

+ ### ComparisonForm Methods
+
+ | Method | Purpose |
+ |--------|---------|
+ | `render_inputs()` | Generate side-by-side form inputs |
+ | `form_wrapper(content)` | Wrap content with comparison form structure |
+ | `left_reset_button(text=None, **kwargs)` | Reset button for left form |
+ | `right_reset_button(text=None, **kwargs)` | Reset button for right form |
+ | `left_refresh_button(text=None, **kwargs)` | Refresh button for left form |
+ | `right_refresh_button(text=None, **kwargs)` | Refresh button for right form |
+ | `register_routes(app)` | Register HTMX endpoints for both forms |
+
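The right-hand buttons mirror the left-hand ones used in the Complete Example above; a minimal sketch of wiring them up (button labels are illustrative):

```python
fh.Div(
    comparison_form.right_reset_button("Reset LLM Output"),
    comparison_form.right_refresh_button("Refresh LLM Output"),
    cls="mt-4 flex gap-2",
)
```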
  ### Utility Functions

  | Function | Purpose |
  |----------|---------|
  | `list_manipulation_js()` | JavaScript for list reordering and toggle functionality |
+ | `comparison_form_js()` | JavaScript for comparison form accordion synchronization |
  | `default_dict_for_model(model_class)` | Generate default values for all fields in a model |
  | `default_for_annotation(annotation)` | Get sensible default for a type annotation |
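As a quick illustration of the default helpers, `default_dict_for_model` can seed a form's initial values (a sketch; the top-level import path and passing a plain dict to `initial_values` are assumptions):

```python
from fh_pydantic_form import PydanticForm, default_dict_for_model  # import path assumed

defaults = default_dict_for_model(ProductModel)  # {"field_name": default_value, ...}
draft_form = PydanticForm("draft", ProductModel, initial_values=defaults)
```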
 
@@ -0,0 +1,17 @@
+ fh_pydantic_form/__init__.py,sha256=uNDN6UXIM25U7NazFi0Y9ivAeA8plERrRBk7TOd6P6M,4313
+ fh_pydantic_form/color_utils.py,sha256=M0HSXX0i-lSHkcsgesxw7d3PEAnLsZ46i_STymZAM_k,18271
+ fh_pydantic_form/comparison_form.py,sha256=TwlOCVmaY535zVzz01ZNSTpk5RkufQZEjAJuURwh5gQ,20986
+ fh_pydantic_form/constants.py,sha256=-N9wzkibFNn-V6cO8iWTQ7_xBvwSr2hBdq-m3apmW4M,169
+ fh_pydantic_form/defaults.py,sha256=Pwv46v7e43cykx4Pt01e4nw-6FBkHmPvTZK36ZTZqgA,6068
+ fh_pydantic_form/field_renderers.py,sha256=iCCt8hKjfJedPwad5ASUzI6KpOhNW_4oxZRSShCoRtc,77581
+ fh_pydantic_form/form_parser.py,sha256=mAsE5pE7A27k7zgHg4UD-P3HHQj9FmZneK_z68jLHbo,26117
+ fh_pydantic_form/form_renderer.py,sha256=IHuO8cxbzuv0y5htiLp1csQ30ZRAnVP4v3qIMOvaZBM,36898
+ fh_pydantic_form/list_path.py,sha256=AA8bmDmaYy4rlGIvQOOZ0fP2tgcimNUB2Re5aVGnYc8,5182
+ fh_pydantic_form/py.typed,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0
+ fh_pydantic_form/registry.py,sha256=sufK-85ST3rc3Vu0XmjjjdTqTAqgHr_ZbMGU0xRgTK8,4996
+ fh_pydantic_form/type_helpers.py,sha256=JUzHT8YrWj2_g7f_Wr2GL9i3BgP1zZftFrrO8xDPeis,7409
+ fh_pydantic_form/ui_style.py,sha256=UPK5OBwUVVTLnfvQ-yKukz2vbKZaT_GauaNB7OGc-Uw,3848
+ fh_pydantic_form-0.3.0.dist-info/METADATA,sha256=Ze11LbQeBJu43_sG3_8009PSRNwamWdeohk7bxe5EBY,37067
+ fh_pydantic_form-0.3.0.dist-info/WHEEL,sha256=qtCwoSJWgHk21S1Kb4ihdzI2rlJ1ZKaIurTj_ngOhyQ,87
+ fh_pydantic_form-0.3.0.dist-info/licenses/LICENSE,sha256=AOi2eNK3D2aDycRHfPRiuACZ7WPBsKHTV2tTYNl7cls,577
+ fh_pydantic_form-0.3.0.dist-info/RECORD,,
@@ -1,15 +0,0 @@
- fh_pydantic_form/__init__.py,sha256=luxohu6NgZDC0nhSIyw5lJGP2A8JQ51Ge1Ga7DYDkF8,4048
- fh_pydantic_form/constants.py,sha256=-N9wzkibFNn-V6cO8iWTQ7_xBvwSr2hBdq-m3apmW4M,169
- fh_pydantic_form/defaults.py,sha256=Pwv46v7e43cykx4Pt01e4nw-6FBkHmPvTZK36ZTZqgA,6068
- fh_pydantic_form/field_renderers.py,sha256=VYvAmLsLhQttlg97g2KGg-VNlS4ohxrPN1O906EJM6I,54984
- fh_pydantic_form/form_parser.py,sha256=mAsE5pE7A27k7zgHg4UD-P3HHQj9FmZneK_z68jLHbo,26117
- fh_pydantic_form/form_renderer.py,sha256=cPd7NbaPOZC8cTvhEZOsy8sf5fH6FomrsR_r6KAFF54,34573
- fh_pydantic_form/list_path.py,sha256=AA8bmDmaYy4rlGIvQOOZ0fP2tgcimNUB2Re5aVGnYc8,5182
- fh_pydantic_form/py.typed,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0
- fh_pydantic_form/registry.py,sha256=sufK-85ST3rc3Vu0XmjjjdTqTAqgHr_ZbMGU0xRgTK8,4996
- fh_pydantic_form/type_helpers.py,sha256=FH4yl5FW1KNKvfHzs8TKQinFTC-MUgqDvRTVfPHs1LM,6815
- fh_pydantic_form/ui_style.py,sha256=aIWDWbPBUAQ73nPC5AHZi5cnqA0SIp9ISWwsxFdXXdE,3776
- fh_pydantic_form-0.2.5.dist-info/METADATA,sha256=0xo0GN6Vj50q9Qb4vQufOaqAs8Fn1w3a3h58elpBbqA,26356
- fh_pydantic_form-0.2.5.dist-info/WHEEL,sha256=qtCwoSJWgHk21S1Kb4ihdzI2rlJ1ZKaIurTj_ngOhyQ,87
- fh_pydantic_form-0.2.5.dist-info/licenses/LICENSE,sha256=AOi2eNK3D2aDycRHfPRiuACZ7WPBsKHTV2tTYNl7cls,577
- fh_pydantic_form-0.2.5.dist-info/RECORD,,