Need help fixing my Perchance AI story generator

My Perchance AI story isn’t generating the way I expected. The prompts I’m using used to create longer, more coherent stories, but now the output is short, repetitive, and sometimes off-topic. I’m not sure if I’m structuring my prompts wrong or if there’s a better way to format the template. Can anyone explain how to properly set up Perchance AI story prompts or share a working example so I can troubleshoot what I’m doing wrong?

This sounds like a mix of prompt drift and template issues, not Perchance being “broken”.

Here are concrete things to try.

  1. Lock your structure with a template
    Put your story format in the output, not only in your head.

Example:

[style]
Genre: dark fantasy
Tone: serious
POV: third person limited
Length: 1200–1500 words

[story requirements]

  1. One main character
  2. Clear beginning, middle, end
  3. No repetition of the same sentence
  4. Stay on topic: X

[story starts below]
Write a coherent story that follows the style and requirements above.

If your current prompt is vague, like “Write a story about…”, you often get short, repetitive output.

  2. Force length in a concrete way
    Avoid “long”, “detailed”, “more coherent”.
    Use things like:

• “At least 20 paragraphs.”
• “At least 80 sentences.”
• “Minimum 1500 words.”

Then add: “If you reach the word limit before the story ends, continue the story until the conflict is resolved.”

That reduces those abrupt cutoffs.

  3. Remove conflicting instructions
    Common problem: you ask for “short” somewhere and “long” somewhere else, or compressed style plus huge length.

Scan your Perchance code for:

• “short story”
• “quick story”
• “summary”
• “condensed”

Delete or rewrite those if you want length.
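If you'd rather not eyeball the whole template, a small script can flag length-reducing words for you. A minimal sketch in Python (the keyword list is just a starting point; extend it with whatever you find in your own generator):

```python
import re

# Words and phrases that tend to push models toward shorter output.
LENGTH_KILLERS = ["short story", "quick story", "summary", "condensed",
                  "brief", "recap", "compact"]

def flag_length_killers(prompt_text):
    """Return (line_number, line, matched_word) tuples for manual review."""
    hits = []
    for lineno, line in enumerate(prompt_text.splitlines(), start=1):
        for word in LENGTH_KILLERS:
            if re.search(re.escape(word), line, re.IGNORECASE):
                hits.append((lineno, line.strip(), word))
    return hits

template = """Write a short story about a haunted lighthouse.
Genre: horror
End with a brief recap."""

for lineno, line, word in flag_length_killers(template):
    print(f"line {lineno}: found {word!r} in {line!r}")
```

Paste your resolved Perchance output into `template` and review every hit; not all of them are bad, but each one is a candidate for the conflict described above.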

  4. Check your Perchance variables
    If you use stuff like:

[storyType]
short story
flash fiction
one paragraph

or:

[style]
random nonsense
stream of consciousness

You will get weird, short, or off-topic outputs.

Make your variables more strict. Example:

[storyType]
full length story
multi scene story

[style]
coherent narrative
normal story with clear plot

  5. Add anti-repetition rules
    At the end of your prompt, add something like:

• Do not repeat the same sentence.
• Do not restate the same idea more than two times.
• Do not loop the ending. End once.

This helps with those “stuck in a loop” endings.

  6. Fix off-topic wandering
    Explicitly tell it what to ignore.

Example:

Stay focused on the main plot: X.
Do not introduce new locations or time periods after the halfway point.
Do not change genre.

If you are randomizing themes in Perchance, watch for something like:

[genre]
horror
romance
comedy
nonsense

That “nonsense” or an odd genre often derails output.

  7. Version drift workaround
    If the model behind Perchance changed, old prompts sometimes behave worse.

Take your old prompt, paste it into the AI directly (outside Perchance), and test.
If the raw AI gives the same short, messy output, the model behavior changed.
If it works fine directly, the issue is in your Perchance code or random variables.

  8. Try a “prompt reset”
    Sometimes prompts get bloated over time.

Make a fresh minimal version:

You are a story generator.
Write a 1500 word, coherent story.
Genre: X.
POV: Y.
Beginning, middle, end.
No repetition.
Stay on topic: Z.

Then slowly add your old extras back, one by one.
When the quality drops, you found the part that breaks it.

  9. Example Perchance block
    Super rough example:

storyPrompt:
You are an AI that writes coherent stories.
Write a full-length story in the genre [genre].
POV: [pov].
Main character: [protagonist].
Central conflict: [conflict].

Requirements:

  1. At least 1500 words.
  2. Clear beginning, middle, and end.
  3. No repeated sentences.
  4. Stay focused on the central conflict.

[story starts below]


This kind of structure keeps things anchored.

If you want, paste your actual Perchance code in the thread. People here love poking at broken generators and finding the one line that ruins everything.

Yeah, what @kakeru said about template & variable issues is solid, but I’d look at a few different angles too, because Perchance + LLMs can go sideways in more subtle ways than “prompt too vague.”

1. Your randomization might be too random now
If your generator has grown over time, there’s a decent chance the problem is actually:

  • A new variable got added that occasionally wrecks the prompt
  • A rare branch that used to be harmless now fires more often

Stuff like:

[detailMode]
very short
super detailed
quick summary
normal detail

Even if “very short” only shows up 10% of the time, that’s enough to start noticing “why is everything tiny and rushed now?” The fix is boring but effective: comment out whole choice blocks and test.

[detailMode]
// very short
// quick summary
super detailed
normal detail

Run a few test generations. If quality jumps, you found a silent saboteur.
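To see why a rare branch matters more than it feels like it should: if a bad option fires with probability p on each generation, the chance that at least one of n generations is ruined is 1 - (1 - p)^n. A quick sanity check:

```python
# Chance that at least one generation hits a "bad" random branch.
# p = probability the branch fires on a single generation.
def chance_of_at_least_one_bad(p, n):
    return 1 - (1 - p) ** n

for n in (1, 5, 10, 20):
    print(f"{n:>2} generations: {chance_of_at_least_one_bad(0.10, n):.0%}")
```

At 10% per generation, ten test runs give roughly a 65% chance of seeing at least one ruined story, which is exactly the "why is everything suddenly tiny?" feeling.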

2. Hidden “compression” instructions in your own text
Sometimes the prompt itself teaches the model to be short without you realizing it. Phrases like:

  • “summarize the events of the story”
  • “write a compact but detailed story”
  • “short recap at the end”

Even if you think that applies only to one part, models love to generalize that and compress everything. I’d search your whole Perchance text for:
summary, recap, brief, compact, short.

Instead of:

End with a short recap of the story.

Try:

End with a final scene that shows the consequences of the story.
Do not summarize or recap. Continue writing in full narrative style.

That alone can stop the “ending as a tiny summary” behavior.

3. Story “framing” might be hijacking the tone & length
If you use meta stuff like:

You are an AI assistant that writes stories.
Write a response for the user.

I’d argue that’s part of your trouble. That invites chatty, short, “assistant-style” answers. I actually disagree a bit with heavy system-style wording. Instead, lean into pure in-world framing:

Write a full narrative story as if it is a complete piece of fiction for a reader, not a chat reply.
Do not include explanations, notes, or headers.

And remove mentions of “assistant,” “chat,” “response,” “reply,” “Q&A,” etc.

4. Check for Perchance concatenation issues
Easy to miss: if your final prompt is composed like:

storyPrompt:
[style][plot][extras]

and you accidentally removed a space or newline, you might be feeding a glob like:

“Write a long storyGenre: horrorPOV: third person…”

Models can handle some junk, but garbled formatting often leads to low-quality, repetitive output. Try forcing newlines:

storyPrompt:
[style]
[plot]
[extras]

Or add explicit line breaks:

storyPrompt:
[style]\n[plot]\n[extras]

Then test in the target AI directly to see how clean the final prompt looks.
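The difference is easy to demonstrate outside Perchance. A small Python sketch, with placeholder fragments standing in for your resolved variables:

```python
# How fragment joining changes the final prompt.
# These strings stand in for resolved Perchance variables like [style].
style = "Genre: horror"
plot = "POV: third person"
extras = "At least 1500 words."

glued = style + plot + extras               # no separators: words run together
joined = "\n".join([style, plot, extras])   # explicit newlines between parts

print(repr(glued))   # words fuse into junk like 'horrorPOV'
print(joined)        # each instruction on its own line
```

If your resolved prompt looks like `glued`, the model is parsing mush; make it look like `joined` before blaming the model.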

5. Your length requirement might be undercut by an earlier instruction
LLMs tend to obey the last relevant instruction most strongly. If you have:

Write a short story about [topic].

...

At least 1500 words.

The model might still latch onto “short story” as the genre label. I’d strip genre terms like “short story,” “flash,” “drabble” entirely and just say:

Write a complete story with a beginning, middle, and end.

And move your length constraints to the bottom, but also rewrite any earlier phrases like “short” or “quick” instead of just adding long requirements later.

6. Repetition can come from your own boilerplate
Not talking about sentence repetition inside the story. Look for repeated phrases in the prompt itself that might be echoing:

  • “Write a story about X” in 3 different places
  • “The story should be dark and mysterious” then “Keep the story dark and mysterious” etc.

LLMs sometimes mirror phrasing. I’d keep it more “one strong instruction, once” instead of stacked paraphrases.
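Exact duplicates are easy to find mechanically; paraphrases still need a manual read, but this catches the worst offenders. A rough sketch:

```python
from collections import Counter
import re

def repeated_sentences(prompt_text, min_count=2):
    """Count normalized sentences that appear min_count or more times."""
    sentences = re.split(r"[.!?]\s*", prompt_text)
    normalized = [s.strip().lower() for s in sentences if s.strip()]
    counts = Counter(normalized)
    return {s: c for s, c in counts.items() if c >= min_count}

prompt = ("Write a story about a lost city. The story should be dark. "
          "Write a story about a lost city. Keep it mysterious.")
print(repeated_sentences(prompt))
```

Anything this prints is a candidate for the “one strong instruction, once” cleanup.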

7. Check for late random injections that derail focus
If you have something like:

[twist]
In the final paragraph, reveal it was all a dream.
Introduce a sudden fourth-wall break.
Change the genre to comedy at the end.

That kind of thing often causes the model to shorten or rush the middle, because it is trying to sprint to the twist. Instead of “in the final paragraph,” try:

Near the end of the story, after the main conflict is resolved, include [twist].

And avoid “change the genre” entirely. That’s practically an invitation for off-topic drift and tonal whiplash.

8. Debug like a programmer, not like a writer
This is tedious, but it works way better than guessing:

  1. Copy your full Perchance-resolved prompt into the AI once. Save result.

  2. Then start removing sections of your prompt and test again:

    • Remove your twist block
    • Remove your flavor / meta comments
    • Remove your style randomizer
  3. As soon as the story becomes long, coherent, and on-topic again, you know which chunk was poisoning the well.
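The removal loop above can be sketched as a script if your prompt is built from named sections. Here `generate()` is a placeholder for however you actually call the model (an API, or pasting by hand); the section contents are made-up examples:

```python
# Ablation sketch: drop one named section at a time and compare outputs.
# generate() is a placeholder for however you call the model; wire it up
# to your own API, or paste each printed prompt in by hand.

SECTIONS = {
    "style": "Genre: horror. Tone: serious.",
    "twist": "In the final paragraph, reveal it was all a dream.",
    "length": "At least 1500 words.",
}

def build_prompt(sections):
    return "\n".join(sections.values())

def ablation_prompts(sections):
    """Yield (removed_name, prompt_without_that_section) pairs."""
    for name in sections:
        rest = {k: v for k, v in sections.items() if k != name}
        yield name, build_prompt(rest)

for removed, prompt in ablation_prompts(SECTIONS):
    print(f"--- without {removed!r} ---")
    print(prompt)
    # output = generate(prompt)   # compare length/coherence by hand
```

When one removal suddenly restores long, coherent output, that section is the poison.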

I’d honestly start with:

  • Remove any explicit “short/brief/summary” terms
  • Remove meta chat framing
  • Stop using format labels like “short story” or “micro-fiction” at all
  • Clean up random variables that reduce length or add nonsense

If you want, paste the final prompt that Perchance is actually sending (after variables are filled, not the template) and people can point at exact lines that are causing the weird behavior.