docs: add manual test case documentation for all MDL command areas #386

retran wants to merge 1 commit into mendixlabs:main from
Conversation
AI Code Review

What Looks Good
Recommendation

Approve the PR. The documentation is valuable for manual testing and QA processes, adheres to contribution guidelines, and introduces no risks since it contains no code changes.

Automated review via OpenRouter (Nemotron Super 120B) — workflow source
Force-pushed from c361ed9 to 1b98697 (Compare)
AI Code Review

What Looks Good

This PR adds comprehensive manual test case documentation for all MDL command areas, which is a valuable contribution for quality assurance and testing. The documentation is well-organized by feature area (agents, business events, catalog, CLI commands, etc.) and includes:
The test cases appear to align with features marked as "Implemented" in CLAUDE.md (e.g., AI agent documents, business event services), and the documentation correctly references module requirements (like AgentEditorCommons for agents) and version constraints. The structure follows the project's documentation conventions, placing test documentation in the appropriate location.

Recommendation

Approve this PR. The documentation adds significant value for manual testing and QA processes without introducing any code changes or violating project guidelines. It properly covers implemented features and provides actionable test cases for both AI agents and human testers. No changes are needed.

Automated review via OpenRouter (Nemotron Super 120B) — workflow source
Force-pushed from bbbab80 to 1625d02 (Compare)
AI Code Review

Review Summary

This PR adds comprehensive manual test case documentation for all MDL command areas. It consists entirely of new documentation files (no code changes) covering:
The documentation includes prerequisites, execution methods, verification patterns, and detailed test case templates.

Checklist Analysis

Overlap & duplication:
MDL syntax design:
Full-stack consistency:
Test coverage:
Security & robustness:
Scope & atomicity:
Code quality:
Bugs & correctness:
Minor Issues
What Looks Good
Recommendation

Approve. This documentation will significantly improve the quality and consistency of manual testing for the mxcli tool. While it would have been ideal to reference an associated issue per contribution guidelines, the value of this comprehensive testing documentation outweighs this minor process concern, especially since it supports existing implementation work (referencing PR #301). The documentation is accurate, well-structured, and directly addresses a need for standardized manual testing procedures.

Automated review via OpenRouter (Nemotron Super 120B) — workflow source
Force-pushed from 26b9a38 to 5448cb5 (Compare)
AI Code Review

This PR adds manual test case documentation for all MDL command areas: 18 test documents with 663 test cases total, placed under docs/15-testing/ (AGENT-TESTING.md, agent-editor-test-cases.md, business-event-test-cases.md, catalog-test-cases.md, and others). Reviewed against the checklist in CLAUDE.md, the main point of interest is MDL syntax design.

MDL syntax design

The PR adds test cases rather than modifying MDL syntax, but the test cases embed MDL examples, so those examples are worth checking against the design guidelines: "Property lists use ( Key: value, ... ) format with colon separators, one per line" and "Colon for property definitions (Key: value), AS for name-to-name mappings".

- The agent examples follow the property-list format correctly. A statement such as create agent MyModule.ChatAgent with UsageType: chat, Description: 'A helpful chat agent' uses colons for property definitions and reads as English, satisfying the "business analyst understands on first reading" guideline. No AS usages appear in the agent examples, so there is nothing to flag there ("do not flag AS in mapping contexts as a syntax violation" would apply only to old AS new renames).
- The same agent statements, however, also use a nested block syntax for complex properties (tools, MCP services, knowledge bases) rather than the parenthesized ( Key: value, ... ) list. The guidelines mention no exception for block syntax, so mixing both forms within one statement may conflict with the "consistent property format" guideline. Diff-friendliness is preserved either way: adding a property is a one-line diff in both forms.
- The business event examples are more problematic: top-level properties appear as Key value, without colons or parentheses (e.g., event_name_prefix and the message definition in the describe output), which does not match the documented property-list format at all. There is also a terminator inconsistency: the describe output for a business event ends with a trailing /, while the CREATE BUSINESS EVENT SERVICE examples (section 4) end with a semicolon.

Two mitigating considerations apply. First, the checklist item covers diffs that add or modify MDL syntax (grammar rules, keywords, statement types); this diff is documentation only, so the item arguably does not apply directly. Second, the PR notes that "All expectations corrected to match actual mxcli behavior observed during testing", so the test cases reflect what mxcli actually accepts. If mxcli's implemented syntax diverges from the design guidelines, that is a concern for the implementation rather than a defect in these test documents, though the inconsistencies above are worth confirming against docs/01-project/MDL_QUICK_REFERENCE.md.

Automated review via OpenRouter (Nemotron Super 120B) — workflow source
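To make the contrast concrete, the two statement shapes discussed above can be sketched as follows. This is hypothetical MDL reconstructed from the fragments quoted in the review, not verbatim content from the test documents; the block keywords in particular are assumed:

```mdl
-- Property-list form per the guidelines: ( Key: value, ... ), one per line
create agent MyModule.ChatAgent (
    UsageType: chat,
    Description: 'A helpful chat agent'
);

-- Block form observed for complex properties (tools, MCP services,
-- knowledge bases): a nested block rather than a parenthesized
-- ( Key: value, ... ) list; exact keywords here are assumptions
tool MyTool
    Description: 'Example tool'
end tool
```

The review's concern is that both shapes appear within a single create agent statement, while the guidelines describe only the parenthesized property-list form.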
AI Code Review

Recommendation: Approve

This PR adds comprehensive manual test case documentation for all MDL command areas (19 documents, 663 test cases) without modifying any code or MDL syntax. It provides valuable testing resources for contributors and maintainers.

Why this is acceptable:
What looks good:
No issues found:
This documentation will significantly improve testing effectiveness for both AI agents and human contributors. The PR adds value without risks.

Approved - no changes requested.

Automated review via OpenRouter (Nemotron Super 120B) — workflow source
Force-pushed from 5448cb5 to 0aefb76 (Compare)
AI Code Review

Review Summary

This PR adds comprehensive manual test case documentation for all MDL command areas (18 documents, 663 test cases). It's purely documentation with no code changes.

What Looks Good
Recommendation

Approve - This documentation significantly improves the project's testability and contributor experience. It follows the project's documentation standards and provides valuable manual testing guidance without overlapping with existing automated test coverage. Note: As a documentation-only PR, it doesn't trigger code-related checklist items (grammar, AST, executor, backend, LSP, etc.). The manual test cases appropriately complement existing automated tests rather than duplicating them.

Automated review via OpenRouter (Nemotron Super 120B) — workflow source
Force-pushed from 0aefb76 to 5a7d92f (Compare)
AI Code Review

This PR adds manual test case documentation for all MDL command areas: 18 test documents, 663 test cases, with a coverage table spanning areas such as Entities & Associations, Enumerations, and Microflows. The reported execution results (v0.7.0-257-ga07e7e0) show a pass rate of 534/663 (80.5%) with 16 confirmed bugs filed, which indicates the test cases were actually run and are surfacing real issues. Applying the CLAUDE.md checklist:

Overlap & duplication: All files are new (mode 100644), so the PR does not duplicate existing test documentation.

MDL syntax design and full-stack consistency: Not directly applicable; the diff is documentation only and adds or modifies no grammar rules, keywords, or statement types. Where the embedded MDL examples can be checked, they follow the guidelines: property definitions use colons (Key: value), including the dollar-quoted prompt example in agent-editor-test-cases.md section 2.6 and the message definitions in business-event-test-cases.md, and AS never appears where a property definition belongs. Qualified names (Module.Name) are used throughout.

Test coverage: The checklist item about MDL examples in mdl-examples/doctype-tests/ does not apply because no new executor commands are added; the manual test cases complement, rather than replace, automated coverage.

Security & robustness: The documentation suggests no unsafe practices. AGENT-TESTING.md correctly advises working on copies of the test fixture and not running write operations against the original test project.

Scope & atomicity: The PR is scoped to a single concern (test documentation) and does not mix in feature work.

Accuracy: The PR notes that "All expectations corrected to match actual mxcli behavior observed during testing". Spot checks look correct: agent-editor-test-cases.md section 3.1 ("Minimal agent") and 3.12 ("Duplicate agent", which rightly expects an error), and failure-mode case 19.8 ("Drop model referenced by agent"), which expects a warning about agent references with the model dropped and a possibly dangling reference. The documents are also candid about current limitations, e.g. the note in business-event-test-cases.md section 4.1 that some service features may not be fully implemented. The REPL / terminator convention and the mxcli invocation examples match what CLAUDE.md describes.

Documentation placement: The files land in docs/15-testing/, which fits the project's numbered docs/ structure and is an appropriate home for test documentation; nothing comparable exists there today.

Automated review via OpenRouter (Nemotron Super 120B) — workflow source
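The colon-versus-AS distinction checked above can be illustrated with a minimal sketch. This is hypothetical MDL; the guideline being illustrated is "Colon for property definitions (Key: value), AS for name-to-name mappings", and the rename statement form is an assumption:

```mdl
-- Colon introduces a property definition (Key: value)
create agent MyModule.ChatAgent (
    UsageType: chat,
    Description: 'A helpful chat agent'
);

-- AS maps one name to another (old AS new); exact statement form assumed
rename entity MyModule.Customer AS MyModule.Client;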
Force-pushed from 5a7d92f to ebb9965 (Compare)
AI Code Review

This PR adds manual test case documentation for all MDL command areas under docs/15-testing/: 18 test documents, 663 test cases, with reported execution results of an 80.5% pass rate and 16 confirmed bugs filed. Checked against the CLAUDE.md checklist:

- Overlap & duplication: all changed files are new documentation; no duplication found.
- MDL syntax design / full-stack consistency: not applicable; no MDL syntax or code is added or modified.
- Test coverage: the PR adds manual test documentation, not automated tests or new executor commands, so the mdl-examples/doctype-tests/ requirement does not apply.
- Security & robustness, scope & atomicity, code quality, bugs & correctness: no concerns for a documentation-only change.

The PR states it is "Comprehensive manual test case documentation covering all MDL command areas", and AGENT-TESTING.md, the shared execution guide, is comprehensive and well-structured. Following the project's "Documentation" guidance, docs/15-testing/ is a reasonable home for this material.

One possible inaccuracy: the setup instructions around project creation, e.g. obtaining a .mpk from the App Gallery and extracting it with mxcli new --from-mpk EnquiriesManagement.mpk, or mxcli new MyApp --version 11.8.0 and mxcli new MyApp --version 10.24.0 --output-dir ./projects/my-app, may not match the current CLI flags exactly. This is likely a typo or outdated information rather than a fundamental flaw; the core testing methodology is sound, and for documentation such minor inaccuracies can be addressed in review comments rather than blocking the PR.

Critical Issues: None - this is documentation, not code that could introduce bugs or security vulnerabilities.

Moderate Issues: possible minor documentation inaccuracies around the mxcli new invocation, as noted above.

Automated review via OpenRouter (Nemotron Super 120B) — workflow source
Force-pushed from ce91da0 to fe96d0a (Compare)
Comprehensive test case docs covering all 19 MDL command areas: entity, enumeration, microflow, nanoflow, page, integration, security, navigation, organization, workflow, catalog, tooling, CLI commands, SQL, session, mapping, business event, image collection, and agent/model/KB/MCP. Each file specifies inputs, expected outputs, and edge cases. AGENT-TESTING.md provides shared instructions for AI agents executing these tests. Validated all 1007 test cases against MDLParser.g4 grammar - removed 102 hallucinated tests, fixed 16 syntax errors.
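A validation pass like the one described in the commit message might start by extracting the MDL snippets from each test document before feeding them to the parser. The following is a hypothetical sketch of that extraction step only; the actual tooling used against MDLParser.g4 is not shown in this PR:

```python
import re

# Build the literal ``` fence marker without embedding raw backtick fences
# inside this example's own code block.
FENCE = "`" * 3

def extract_mdl_snippets(markdown: str) -> list[str]:
    # Pull fenced mdl code blocks out of a test-case markdown file so each
    # snippet can be handed to the grammar for validation.
    pattern = FENCE + r"mdl\n(.*?)" + FENCE
    return re.findall(pattern, markdown, flags=re.DOTALL)

# Minimal document shaped like a test-case file (contents hypothetical).
doc = (
    "# 3.1 Minimal agent\n\n"
    + FENCE + "mdl\n"
    + "create agent MyModule.ChatAgent (\n    UsageType: chat\n);\n"
    + FENCE + "\n"
)

snippets = extract_mdl_snippets(doc)
print(len(snippets))  # 1
```

Each extracted snippet could then be run through the ANTLR-generated parser, counting syntax errors per file, which is consistent with how the 16 syntax errors and 102 hallucinated tests could have been identified.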
Force-pushed from fe96d0a to 27f045a (Compare)
Summary
Adds comprehensive manual test case documentation covering all 19 MDL command areas. Each file specifies inputs, expected outputs, and edge cases — no result annotations or bug references (those belong in session reports).
Also adds AGENT-TESTING.md — shared instructions for AI agents executing these tests (REPL `/` terminator convention, `-c` flag usage, tmpdir workflow, result reporting format).

Files (20)
docs/15-testing/AGENT-TESTING.md — agent execution guide

19 per-area test case files (entity, enumeration, microflow, nanoflow, page, integration, security, navigation-settings, organization, workflow, catalog, tooling, cli-commands, sql-integration, session, mapping, business-event, image-collection, agent-editor)

Test Coverage
1007 test cases total across all areas. Doc cleanup removed 37 result annotations and fixed syntax errors discovered during execution.
Execution Report (2026-04-30 + 2026-05-01)
Build: v0.7.0-368-g5a109a0e (rebased on main 59ab2bb)
Project: EnquiriesManagement (Mendix 11.8.0, 281 entities)
Aggregate Results
Runnable pass rate: 706/779 = 91%
Blocked/skipped: 309 (Docker 133 + SQL 110 + prereqs 66)
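The aggregate figures above are internally consistent, as a quick check shows:

```python
# Sanity-check the reported aggregates from the execution report.
passed, runnable = 706, 779
print(f"runnable pass rate: {passed / runnable * 100:.1f}%")  # 90.6%, reported as 91%

blocked = 133 + 110 + 66  # Docker + SQL + prereqs
print(f"blocked/skipped: {blocked}")  # 309
```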
Issues Filed (62 total)
Critical:

- `end resource`

High:

- `mxcli new` exits 0 on download failure #422
- XPath GRANT produces empty rules
- `GRANT ... ON entity` accepts non-existent module roles #399
- Non-existent role creates phantom access rule

Medium (18): #388, #391, #395, #401, #402, #414, #416, #417, #418, #419, #470, #472, #473, #476, #477, #478, #479
Enhancement (5): #464, #465, #466, #467, #471
Closed (5): #423 (invalid), #426, #439, #440, #442 (duplicates)
Failed Tests (91)

- `source` keyword conflict

Skipped Tests (309)
Key Findings

- `/` terminator — write operations use `/` on its own line (SQL*Plus convention). All "REPL-blocked" tests resolved.
- `set -e` and CI.
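The terminator convention can be sketched as a REPL interaction. The statement below is hypothetical; the part the findings confirm is the `/` on its own line:

```mdl
create agent MyModule.ChatAgent (
    UsageType: chat,
    Description: 'A helpful chat agent'
)
/
```

As in SQL*Plus, the `/` on its own line executes the buffered statement.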
/terminator — write operations use/on own line (SQL*Plus convention). All "REPL-blocked" tests resolved.set -eand CI.Remaining