feat: generate sample data CTA #8404
Conversation
Force-pushed 1c77908 to 0df155f
Concerns with
ericpgreen2
left a comment
UXQA:

- I'd expect `enter` to submit the form.
- After submitting the AI request, I pressed "Cancel" in the UI and nothing happened. The cancellation didn't seem to work: I waited, and the AI generated a valid model anyway.
- After creating sample data through the Welcome screen, I tried creating more sample data through the "Add > More > Sample Data" option with a slightly different prompt. My first file was overwritten. To be safe, we should always generate a new file, never overwrite an existing one.
- Upon returning to the "Add > More > Sample Data" dialog, I'd expect my prior prompt to have been cleared.
- Once the data is created, I see two notifications: the dialog and the toast. We should only have one; let's remove the toast notification.
- I deleted the `models` directory and tried to generate more sample data. Nothing happened: the generation errored, yet I saw no error notification. I see this error in the event stream: "open /models/hello_world.yaml no such file or directory"
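On the cancellation item above: a minimal TypeScript sketch of how a Cancel button could actually abort the in-flight AI request, assuming the request function accepts an `AbortSignal`. The names `startGeneration` and `request` are illustrative, not from this PR:

```typescript
// Hypothetical sketch: wire the Cancel button to an AbortController so that
// cancelling really stops the request instead of being a no-op.
export function startGeneration(
  request: (signal: AbortSignal) => Promise<string>,
): { done: Promise<string | null>; cancel: () => void } {
  const controller = new AbortController();
  const done = request(controller.signal).catch((err) => {
    // Swallow only cancellations; real failures should still surface an
    // error notification to the user.
    if (controller.signal.aborted) return null;
    throw err;
  });
  return { done, cancel: () => controller.abort() };
}
```

The UI's Cancel handler would call `cancel()`, and the request implementation must listen to the signal (e.g. pass it to `fetch`) for the abort to take effect.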
I'm trying to test this, but every time I kick off a sample data generation request, it leads to a "failed to generate sample data" toast notification.
Thanks @ericpgreen2. I have addressed your comments. Regarding the forms library, I am guessing you are talking about using

@ericokuma You need to have a valid Rill Cloud. Make sure to run
Yes! Apologies that part of the Claude response slipped by me.
I believe I did. I'll check again.
**Architectural consideration: Feature-specific prompt location**

**Existing Pattern**

Looking at how "explain anomaly" works:

```ts
// Prompt logic lives in the FEATURE directory
public startAnomalyExplanationChat(metricsView: string) {
  const prompt = `Explain what drives ${measureMention}, ${timeRangeMention}...`;
  sidebarActions.startChat(prompt); // Opens chat, user sees conversation
}
```

Key principle: feature-specific prompts live in their feature directories, not in the chat infrastructure.

**Current PR**

The sample data generation logic lives in the chat infrastructure.

**Recommendation**

Move it to a dedicated feature directory like `features/sample-data`. This keeps the chat layer free of feature-specific logic.

**On UX Approach**

I understand the current product requirements call for an overlay-based UX (the user doesn't see the conversation). That's fine for now. However, future work may align this with the "explain anomaly" pattern (visible conversations via the chat sidebar).

**For Future Prompts**

As more feature-specific prompts are added (metrics views, dashboards, etc.), they should follow the same principle. This keeps domain logic with its domain, and chat as pure infrastructure.

Developed in collaboration with Claude Code
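To make the recommendation concrete, here is a hypothetical sketch of a feature-owned prompt module under `features/sample-data`. The names `buildSampleDataPrompt`, `startSampleDataGeneration`, and `ChatStarter` are illustrative, not code from this PR:

```typescript
// Hypothetical: features/sample-data/generate-sample-data.ts
// The prompt lives with the feature; chat remains pure infrastructure and is
// injected as a simple function.
type ChatStarter = (prompt: string) => void;

export function buildSampleDataPrompt(description: string): string {
  return (
    `Generate sample data for: ${description}. ` +
    `Create a new model file; never overwrite existing files.`
  );
}

export function startSampleDataGeneration(
  description: string,
  startChat: ChatStarter,
): void {
  startChat(buildSampleDataPrompt(description));
}
```

With this shape, swapping the overlay UX for a visible sidebar conversation later only changes which `ChatStarter` is passed in, not the feature code.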
Event Bus:
ericpgreen2
left a comment
Force-pushed 083ad06 to f770d79
I'm good with the PR, as long as my above UXQA item 2 is addressed. Additionally, we discussed putting a "Beta" tag next to the feature CTAs.
I'll leave @nishantmonu51 to approve the PR tomorrow AM IST.
runtime/ai/develop_model.go (Outdated)
```go
if args.Create {
	entries, err := t.Runtime.ListFiles(ctx, session.InstanceID(), args.Path+"*")
	if err != nil {
		return nil, err
	}
	args.Path = fileutil.GetPath(args.Path, entries)
}
```
What is the use case for this?
The Path passed to DevelopModel should ideally already be a correct .yaml path. If the developer_agent doesn't set that correctly, we should just return a useful error so it can self-correct. And if it does set it correctly, then I don't think we need the Create flag – it's already implied in the description for Path:
```go
Path string `json:"path" jsonschema:"The path of a .yaml file in which to create or update a Rill model definition."`
```
Note: "in which to create or update".
Updating the prompt doesn't guarantee that the AI won't overwrite existing models, which is why I added this as a safeguard. Since we are tagging this as beta, is it OK not to worry about this too much? cc @nishantmonu51
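For illustration, here is a hedged Go sketch of the overwrite safeguard idea: derive a non-colliding path before writing. `uniquePath` is a hypothetical stand-in; the PR's actual helper is `fileutil.GetPath`, whose implementation may differ (and a later commit removes the numeric suffix in favor of unique names in the prompt):

```go
package main

import (
	"fmt"
	"strings"
)

// uniquePath returns a .yaml path that does not collide with any existing
// file, so the AI never silently overwrites an existing model. Collisions
// get a numeric suffix: models/a.yaml -> models/a_1.yaml, models/a_2.yaml, ...
func uniquePath(path string, existing []string) string {
	taken := make(map[string]bool, len(existing))
	for _, e := range existing {
		taken[e] = true
	}
	if !taken[path] {
		return path
	}
	base := strings.TrimSuffix(path, ".yaml")
	for i := 1; ; i++ {
		candidate := fmt.Sprintf("%s_%d.yaml", base, i)
		if !taken[candidate] {
			return candidate
		}
	}
}

func main() {
	existing := []string{"models/hello_world.yaml"}
	// A colliding path gets a numeric suffix instead of overwriting.
	fmt.Println(uniquePath("models/hello_world.yaml", existing))
}
```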
ericpgreen2
left a comment
LGTM!
Most of the backend changes have been reverted.
* feat: generate sample data CTA
* Add a button in add asset dropdown
* Cleanup
* PR comments
* Add safeguard for generating new models
* Use the correct form library
* Move to features/sample-data
* Improve UnpackEmpty behaviour
* Update prompt to have a unique name
* Wait until model is done compiling
* Remove the _number suffix
* Merge issues


Adds a CTA to generate sample data using AI. There is one in the welcome screen, and one in the add asset dropdown under `more` in the file explorer.

Closes APP-607
Checklist: