I spent one night testing a Codex + skill workflow for an action short film using the SeedDance 2.0 API. The goal was not to make a "moving picture" but a continuous short play: a 15s prelude + 15s main film + 15s follow-up, about 45s in total. The subject matter is close-quarters action, and the style references Hong Kong-style action movies plus the fast-and-furious rhythm of a chase.

There are several very practical conclusions from this run.

First, don't rely solely on prompts for character consistency. The most stable approach is:
1. First generate a design drawing of the first and last frames.
2. Cut out a clear first frame / end frame.
3. Generate the first video.
4. Then extract the real first frame and last frame from the finished film.
5. Use the "previous video + extracted frames" as a reference to continue generating subsequent videos. In other words, the real continuity anchor is not "I describe the same person in the prompt", but to continue to feed back the generated images. The effective path I used this time was in Topview, which cost me a $29 monthly membership: Then I installed their own skill in Codex to directly i2v transfer Standard, Fast, and SeedDance
The path that actually worked for me this time was Topview, which cost me a $29 monthly membership. I installed their own skill in Codex and tried calling i2v directly with Standard, Fast, and SeedDance 2.0; all of them were unstable, and the backend kept reporting "unsupported model". What finally worked was to use the current clip as the reference video, use the first and last keyframes as Image 1 / Image 2, and write explicitly in the prompt: "Start from Image 1 and end with Image 2." This is very critical.
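I don't know Topview's actual API schema (the Codex skill hides it), so the field names below are purely hypothetical; the point is only the shape of the request that finally behaved: previous clip as reference video, two keyframes, and an explicit start/end instruction in the prompt.

```python
# Illustrative only: field names are hypothetical, not Topview's real API.
request = {
    "model": "seedance-2.0-standard",
    "reference_video": "segment_01.mp4",   # the already-finished clip
    "image_1": "seg01_last.png",           # extracted last frame -> start of the next segment
    "image_2": "seg02_end_design.png",     # designed end frame for the new segment
    "duration_s": 15,
    "prompt": (
        "Start from Image 1 and end with Image 2. "
        "Continue the same characters, wardrobe and lighting as the reference video."
    ),
}
```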
2." This is very critical. Second, the action scene prompt cannot just say "fierce fighting". It needs to be broken down into a timeline: 0-4s: entry, confrontation, and momentum 4-10s: dodge, rush, short-range strike 10-15s: heavy blow, hitting the pillar, landing position Actions should be written as body mechanisms instead of abstract adjectives: - footwork - shoulder pressure - body punch - recoil - cloth deformation - water spray - handheld jolt at impact These words are much more useful than "cinematic, cool, epic". Third, consumption is more controllable than imagined, but it is not cheap either. This time I have 90 credits in my Topview account. Each segment of 15 seconds SeedDance
Third, consumption is more controllable than I expected, but it is not cheap either. I had 90 credits in my Topview account, and each 15-second SeedDance 2.0 Standard segment actually consumes 18 credits. The three segments:

- Prelude 15s: 18 credits
- Main film 15s: 18 credits
- Follow-up 15s: 18 credits

That is 54 credits in total for a roughly 45-second film. In other words, 90 credits covers about five 15-second segments, roughly 75 seconds, not counting rework. Based on the rough figure of $30.1 / 90 credits on my purchase page:

- 1 segment of 15s ≈ $6
- 3 segments, 45s ≈ $18
- A 2-minute short film needs at least 8 segments, so the theoretical cost is ≈ $48 (too expensive!)
- Accounting for rework, the realistic budget is $60-70.
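The arithmetic behind those numbers, written out (same figures as above, nothing new):

```python
# Cost math for SeedDance 2.0 Standard on Topview, using the figures from my own run.
credits_total = 90
credits_per_15s = 18
price_per_credit = 30.1 / 90                               # ~ $0.334 per credit

cost_per_segment = credits_per_15s * price_per_credit      # ~ $6.02 per 15s segment
segments_per_pack = credits_total // credits_per_15s       # 5 segments, ~75s of footage
cost_45s = 3 * cost_per_segment                            # ~ $18
cost_2min = 8 * cost_per_segment                           # ~ $48, before any rework

print(f"per 15s segment: ${cost_per_segment:.2f}")
print(f"segments per 90 credits: {segments_per_pack}")
print(f"45s film: ${cost_45s:.2f}, 2-minute film (8 segments): ${cost_2min:.2f}")
```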
Fourth, Higgsfield's $29/month looks more suitable for high-frequency exploration, but it depends on which tier you buy. According to public information, the common figures for Higgsfield are:

- Pro, paid monthly: about $29 for roughly 600 credits/month
- Ultimate, paid annually: about $29/month for roughly 1200 credits/month
- Whether it supports all models, first/last-frame control, Seedance, Veo and other features depends on the specific package.

So at a rough glance: Topview is more like "burning money per clip, but SeedDance 2.0 has a direct path and good-quality results", while Higgsfield is more "suited to lots of test shots, playing with shot templates, character consistency and style exploration", as long as you confirm the membership tier actually unlocks the model and the first/last-frame control you want.

My conclusion: if you are making a short film around a specific theme and have sufficient funds, Topview + SeedDance 2.0 can be used directly.
If you are exploring a lot of styles, shots, and character designs, Higgsfield's monthly membership is probably better for early trial and error.

But what really saves money is not which platform you choose; it is the workflow:

1. Do the storyboard first.
2. Then do the first and last frames.
3. Then do a 15-second test.
4. Only after the test passes, expand into multiple segments.
5. Each segment uses the real output of the previous segment as the reference to continue generating (see the loop sketched below).

The most expensive part of AI video right now is not generation; it is unplanned, random testing. I only made three videos and spent more than half of my 80 monthly credits. It is still a bit expensive. The finished result is good because the Image 2 end-frame control and SeedDance 2.0 are built in. Is there a cheap SD2 relay anywhere?
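Putting the whole workflow together, the chaining loop looks roughly like this. `generate_segment` stands in for whatever actually calls the model (the Codex skill in my case), and it reuses the `extract_first_and_last` helper sketched earlier, so treat every name here as a placeholder rather than a real API.

```python
# Hypothetical chaining loop: each segment is generated from the previous segment's
# real output (clip + extracted last frame), not from a re-described prompt.
def generate_segment(prompt, image_1, image_2, reference_video=None) -> str:
    """Placeholder for the actual model call; returns the path of the new clip."""
    raise NotImplementedError

def make_film(storyboard, first_frame, end_frames):
    clips = []
    start_frame = first_frame                  # designed first frame for segment 1
    reference = None                           # no reference video for the first segment
    for prompt, end_frame in zip(storyboard, end_frames):
        clip = generate_segment(prompt, start_frame, end_frame, reference_video=reference)
        clips.append(clip)
        # Feed the real output forward: the extracted last frame becomes the next start frame.
        extract_first_and_last(clip, f"{clip}.first.png", f"{clip}.last.png")
        start_frame = f"{clip}.last.png"
        reference = clip
    return clips
```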