LCORE-2072: Add SkillsConfiguration model to config file #1736
Conversation
Walkthrough
This PR adds a skills configuration feature to the Lightspeed Core Service.
Changes
Skills Configuration Feature
Estimated code review effort: 🎯 2 (Simple) | ⏱️ ~8 minutes
🚥 Pre-merge checks: ✅ 5 passed
```python
def test_mixed_absolute_and_relative_paths(self) -> None:
    """Test that both absolute and relative paths can be mixed."""
    config = SkillsConfiguration(
        paths=["/var/skills", "./local-skills", "/opt/skills"]
    )
    assert len(config.paths) == 3
    assert "/var/skills" in config.paths
    assert "./local-skills" in config.paths
    assert "/opt/skills" in config.paths
```
@radofuchs what do you think about this test? From the code's perspective, absolute vs. relative doesn't matter; it's just a `str` at the end of the day. But I added this because it signals that we'll work with both absolute and relative paths, i.e. this is my attempt at BDD :)
Actionable comments posted: 1
🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
Inline comments:
In `@src/models/config.py`:
- Around line 1932-1936: Add validation for the paths field by adding Pydantic
validators for the paths attribute: implement a `@field_validator`("paths",
mode="before", each_item=True) that strips each string and raises ValueError for
blank/whitespace entries, and implement a second `@field_validator`("paths",
mode="after") that deduplicates the list while preserving order (e.g., using an
ordered set pattern) and returns the normalized list; reference the existing
paths: list[str] Field declaration and ensure all validators are annotated and
use Pydantic v2 style `@field_validator` for the "paths" field.
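One caveat with the prompt above: `each_item=True` belongs to Pydantic v1's `validator`; v2's `@field_validator` has no such parameter. A minimal sketch following the prompt's intent (class and validator names are illustrative, not from the PR) would iterate over the list explicitly:

```python
from pydantic import BaseModel, Field, field_validator


class SkillsConfiguration(BaseModel):
    """Sketch of the configuration model with the two suggested validators."""

    paths: list[str] = Field(default_factory=list)

    @field_validator("paths", mode="before")
    @classmethod
    def strip_entries(cls, value: list[str]) -> list[str]:
        # Pydantic v2 has no each_item=True, so strip each entry ourselves.
        stripped = [path.strip() for path in value]
        if any(not path for path in stripped):
            raise ValueError("Skill paths must not contain blank entries")
        return stripped

    @field_validator("paths", mode="after")
    @classmethod
    def dedupe_entries(cls, value: list[str]) -> list[str]:
        # dict.fromkeys preserves insertion order, acting as an ordered set.
        return list(dict.fromkeys(value))
```

The "before" validator normalizes raw input; the "after" validator deduplicates while preserving order, matching the prompt's two-stage design.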
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: ASSERTIVE
Plan: Pro
Run ID: 1ec5373b-046b-4cad-bd80-096b14aafda0
📒 Files selected for processing (3)
- examples/lightspeed-stack-skills.yaml
- src/models/config.py
- tests/unit/models/config/test_skills_configuration.py
📜 Review details
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: Konflux kflux-prd-rh02 / lightspeed-stack-on-pull-request
🧰 Additional context used
📓 Path-based instructions (3)
tests/**/*.py
📄 CodeRabbit inference engine (AGENTS.md)
tests/**/*.py: Use pytest for all unit and integration tests; do not use unittest
Use `pytest.mark.asyncio` marker for async tests
Files:
tests/unit/models/config/test_skills_configuration.py
src/**/*.py
📄 CodeRabbit inference engine (AGENTS.md)
src/**/*.py: Use absolute imports for internal modules: `from authentication import get_auth_dependency`
Llama Stack imports: Use `from llama_stack_client import AsyncLlamaStackClient`
Check `constants.py` for shared constants before defining new ones
All modules must start with descriptive docstrings explaining purpose
Use `logger = get_logger(__name__)` from `log.py` for module logging
All functions must have complete type annotations for parameters and return types, use modern syntax (`str | int`), and include descriptive docstrings
Use snake_case with descriptive, action-oriented names for functions (`get_`, `validate_`, `check_`)
Avoid in-place parameter modification anti-patterns; return new data structures instead of modifying function parameters
Use `async def` for I/O operations and external API calls
Use standard log levels with clear purposes: `debug()` for diagnostic info, `info()` for program execution, `warning()` for unexpected events, `error()` for serious problems
All classes must have descriptive docstrings explaining purpose and use PascalCase with standard suffixes: `Configuration`, `Error`/`Exception`, `Resolver`, `Interface`
Abstract classes must use ABC with `@abstractmethod` decorators
Follow Google Python docstring conventions with required sections: Parameters, Returns, Raises, and Attributes for classes
Files:
src/models/config.py
src/models/**/*.py
📄 CodeRabbit inference engine (AGENTS.md)
Pydantic models must use `@model_validator` and `@field_validator` for validation and complete type annotations for all attributes, avoiding `Any` type
Files:
src/models/config.py
🧠 Learnings (2)
📚 Learning: 2026-01-12T10:58:40.230Z
Learnt from: blublinsky
Repo: lightspeed-core/lightspeed-stack PR: 972
File: src/models/config.py:459-513
Timestamp: 2026-01-12T10:58:40.230Z
Learning: In lightspeed-core/lightspeed-stack, for Python files under src/models, when a user claims a fix is done but the issue persists, verify the current code state before accepting the fix. Steps: review the diff, fetch the latest changes, run relevant tests, reproduce the issue, search the codebase for lingering references to the original problem, confirm the fix is applied and not undone by subsequent commits, and validate with local checks to ensure the issue is resolved.
Applied to files:
src/models/config.py
📚 Learning: 2026-02-25T07:46:33.545Z
Learnt from: asimurka
Repo: lightspeed-core/lightspeed-stack PR: 1211
File: src/models/responses.py:8-16
Timestamp: 2026-02-25T07:46:33.545Z
Learning: In the Python codebase, requests.py should use OpenAIResponseInputTool as Tool while responses.py uses OpenAIResponseTool as Tool. This difference is intentional due to differing schemas for input vs output tools in llama-stack-api. Apply this distinction consistently to other models under src/models (e.g., ensure request-related tools use the InputTool variant and response-related tools use the ResponseTool variant). If adding new tools, choose the corresponding InputTool or Tool class based on whether the tool represents input or output, and document the rationale in code comments.
Applied to files:
src/models/config.py
🔇 Additional comments (3)
src/models/config.py (1)
2099-2103: LGTM!
tests/unit/models/config/test_skills_configuration.py (1)
1-48: LGTM!
examples/lightspeed-stack-skills.yaml (1)
1-32: LGTM!
```python
paths: list[str] = Field(
    default_factory=list,
    title="Skill paths",
    description="Paths to skill directories or directories containing skill subdirectories.",
)
```
Add validation for individual skills.paths entries.
skills.paths currently accepts blank/whitespace and duplicate values, which can lead to ambiguous or invalid path handling later. Add a field validator to normalize and reject invalid entries.
Suggested patch

```diff
 class SkillsConfiguration(ConfigurationBase):
@@
     paths: list[str] = Field(
         default_factory=list,
         title="Skill paths",
         description="Paths to skill directories or directories containing skill subdirectories.",
     )
+
+    @field_validator("paths")
+    @classmethod
+    def validate_paths(cls, value: list[str]) -> list[str]:
+        """Normalize and validate configured skill paths."""
+        seen: set[str] = set()
+        normalized_paths: list[str] = []
+        for path in value:
+            normalized = path.strip()
+            if not normalized:
+                raise ValueError("Skill paths must not contain empty values")
+            if normalized in seen:
+                raise ValueError(f"Duplicate skill path: '{normalized}'")
+            seen.add(normalized)
+            normalized_paths.append(normalized)
+        return normalized_paths
```

As per coding guidelines: src/models/**/*.py: "Pydantic models must use `@model_validator` and `@field_validator` for validation and complete type annotations for all attributes, avoiding `Any` type".
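If the suggested validator is adopted, it could be exercised with a pytest-style test in line with the repo's testing guidelines. This is a sketch using a minimal stand-in model (a plain `BaseModel` rather than the real `ConfigurationBase` subclass; the test name is hypothetical):

```python
import pytest
from pydantic import BaseModel, Field, ValidationError, field_validator


class SkillsConfiguration(BaseModel):
    """Minimal stand-in carrying the validator from the suggested patch."""

    paths: list[str] = Field(default_factory=list)

    @field_validator("paths")
    @classmethod
    def validate_paths(cls, value: list[str]) -> list[str]:
        """Normalize and validate configured skill paths."""
        seen: set[str] = set()
        normalized_paths: list[str] = []
        for path in value:
            normalized = path.strip()
            if not normalized:
                raise ValueError("Skill paths must not contain empty values")
            if normalized in seen:
                raise ValueError(f"Duplicate skill path: '{normalized}'")
            seen.add(normalized)
            normalized_paths.append(normalized)
        return normalized_paths


def test_rejects_blank_and_duplicate_paths() -> None:
    """Blank entries and duplicates (after stripping) should fail validation."""
    with pytest.raises(ValidationError):
        SkillsConfiguration(paths=["/var/skills", "   "])
    with pytest.raises(ValidationError):
        SkillsConfiguration(paths=["/var/skills", " /var/skills "])
    # Surrounding whitespace is stripped on valid entries.
    assert SkillsConfiguration(paths=[" /var/skills "]).paths == ["/var/skills"]
```

Note this variant rejects duplicates outright, whereas the AI-agent prompt above suggests silently deduplicating; either behavior satisfies the guideline, but the choice should be deliberate.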
```yaml
# Skills provide domain-specific instructions and reference materials
# that the LLM can load on demand when relevant to the current task
skills:
  paths:
```
is there a reason that we need the paths? It would make sense if we had other data fields under skills but if paths is the only one then I think skills can just be a list. wdyt?
Yea I think that makes total sense. I was going off of what's here https://github.com/lightspeed-core/lightspeed-stack/blob/main/docs/design/agent-skills/agent-skills.md#configuration
but I do remember this discussion and AFAIR we'd decided we don't need path. I'll update the design doc too
Based on further discussions:
It's a little weird to look at now, but the current layout is the safest approach. I can't think of anything else that might be needed under the skills key (e.g., settings), but keeping it nested future-proofs it, so keeping it as is.
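To illustrate the trade-off being discussed, a sketch of the two layouts (paths are illustrative, not from the PR):

```yaml
# Current layout: nested mapping, leaves room for future keys under skills
skills:
  paths:
    - /var/skills
    - ./local-skills

# Discussed alternative: skills as a flat list (rejected above)
# skills:
#   - /var/skills
#   - ./local-skills
```

With the nested form, adding something like a future settings key would not be a breaking config change.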
Adds the `SkillsConfiguration` Pydantic model to enable configuring skill directory paths in `lightspeed-stack.yaml`. This is the first step in implementing Agent Skills support. For more info, refer to `docs/design/agent-skills/agent-skills.md`. **Scope**: This PR adds only the configuration model. Runtime skill loading (`load_skills()`, frontmatter parsing, tool registration) will be implemented in follow-up commits. Signed-off-by: Anik Bhattacharjee <anbhatta@redhat.com>
Force-pushed from ac7fa4d to 7203706
Description
Adds the `SkillsConfiguration` Pydantic model to enable configuring skill directory paths in `lightspeed-stack.yaml`. This is the first step in implementing Agent Skills support. For more info, refer to `docs/design/agent-skills/agent-skills.md`.
Scope: This PR adds only the configuration model. Runtime skill loading (`load_skills()`, frontmatter parsing, tool registration) will be implemented in follow-up commits.
Type of change
Tools used to create PR
Related Tickets & Documents
Checklist before requesting a review
Testing
Summary by CodeRabbit
New Features
Tests