Usability issues #30

@jlhood

Description

Downloaded this for the first time today and gave it a try. Love the concept and am already getting good value out of the stateful schema checks! Creating this issue to discuss some usability issues I encountered while using the tool for the first time.

  1. file:// values for parameters only work with absolute paths. It would be nice if they also worked with relative paths (a possible approach is sketched after this list).
  2. --format (human-readable output) should be the default behavior. Rather than printing raw Python objects, which are difficult to read or pass to other tools, add a --json flag that outputs the result as JSON for machine-readable consumption (see the flag sketch after this list).
  3. For stateful checks, it's useful that the deep diff output is displayed (it helps in understanding failed checks), but it's very difficult to read/parse right now. It's printed as a raw Python dict (I converted it to JSON as a quick fix in fix: Print diff as JSON string #28), but I still have to pipe it through an external tool like jq to pretty-print it. It's worth playing around with the experience here. Some ideas: pretty-print it to stderr so it's easy to separate from the other output; turn the diff output off by default and add a --verbose flag to include it (although discoverability then becomes an issue); or keep the compact JSON output and add a --pretty flag to pretty-print it (also covered in the flag sketch after this list).
  4. When running stateful checks against two identical schemas, the output shows all rules under [SKIPPED] and none under [PASSED], [WARNING], or [FAILED]. This confused me: I initially thought the checks had not been performed because they were disabled, when it really meant that the deep diff was an empty object, so none of the rules applied at all. My intuitive expectation was to see all checks under [PASSED], since none of the stateful checks were violated. I understand the desire to distinguish between "the check ran and passed" and "the check was not applicable", so maybe it's just a wording issue; perhaps replace "skipped" with "not applicable"?
  5. While the error messages are helpful, it's pretty difficult to know what caused a check to fail. There's no line number or other pointer to what specifically triggered the failure, so I essentially have to work it out from the message and the diff output.
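
For item 1, here's a minimal sketch of how relative file:// values could be resolved against the current working directory. The parse_file_uri name and overall shape are assumptions on my part, not the tool's actual code:

```python
from pathlib import Path
from urllib.parse import unquote, urlparse


def parse_file_uri(value: str) -> Path:
    """Resolve a file:// parameter value, allowing relative paths.

    Hypothetical helper for illustration; the real tool may structure this differently.
    """
    parsed = urlparse(value)
    if parsed.scheme != "file":
        raise ValueError(f"not a file:// URI: {value}")
    # For a relative URI like file://./schemas/old.json, urlparse splits the
    # path across netloc ('.') and path ('/schemas/old.json'), so stitch them
    # back together before resolving against the current working directory.
    raw_path = unquote(parsed.netloc + parsed.path)
    return Path(raw_path).expanduser().resolve()


# Assumed behavior:
#   file:///tmp/schema.json    -> /tmp/schema.json
#   file://./schemas/old.json  -> <cwd>/schemas/old.json
```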
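
For items 2 and 3, a rough sketch of how the output flags could fit together. The --json and --pretty names are suggestions, not existing options (only --format is mentioned above), and the result structure is a placeholder:

```python
import argparse
import json
import sys

# Hypothetical flag wiring; --json and --pretty are suggested names.
parser = argparse.ArgumentParser()
parser.add_argument("--json", action="store_true",
                    help="emit machine-readable JSON instead of the human-readable report")
parser.add_argument("--pretty", action="store_true",
                    help="pretty-print JSON output instead of compact output")
args = parser.parse_args()

# Placeholder result; the real tool would populate this from its checks.
result = {"passed": ["example_rule"], "failed": [], "diff": {}}

if args.json:
    indent = 2 if args.pretty else None
    # Report on stdout, diff on stderr, so the two streams are easy to separate
    # (e.g. pipe stdout to jq while discarding or capturing the diff).
    print(json.dumps({k: v for k, v in result.items() if k != "diff"}, indent=indent))
    print(json.dumps(result["diff"], indent=indent), file=sys.stderr)
else:
    # ... existing human-readable (--format) output path ...
    pass
```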

Hope this feedback is helpful! Thanks again for this great tool!

Labels

documentation (Improvements or additions to documentation), enhancement (New feature or request), good first issue (Good for newcomers)