I've done similar shenanigans before. That main loop is probably simplified? It won't work well with anything that uses timing primitives for debouncing (massively slowing such code down, since it only progresses once per frame). Also, a setInterval with, say, 5ms may not "look" the same when it always fires 1000/fps milliseconds later instead (if you're capturing at 24fps or 30fps, that's a huge difference).
What you should do is put everything that was scheduled on a timeline (every setTimeout, setInterval, requestAnimationFrame), then "play" through it until you arrive at the next frame, rather than calling each setTimeout/setInterval callback only for each frame.
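A rough sketch of what such a virtual timeline could look like (names and structure are my own, not the article's code):

```javascript
// Virtual timer timeline: scheduled callbacks sit in a queue ordered by
// virtual due time; advancing to the next frame boundary "plays" every
// callback due before it, in order.
let virtualNow = 0;
let nextId = 1;
const queue = []; // each entry: { id, due, cb, interval? }

function schedule(cb, delay, interval) {
  const task = { id: nextId++, due: virtualNow + delay, cb, interval };
  queue.push(task);
  return task.id;
}

// Patched-in replacements for the real timer APIs.
const vSetTimeout = (cb, delay = 0) => schedule(cb, delay);
// Crude clamp so a 0ms interval can't loop forever (real browsers clamp too).
const vSetInterval = (cb, delay = 0) =>
  schedule(cb, Math.max(delay, 1), Math.max(delay, 1));
const vClearTimer = (id) => {
  const i = queue.findIndex((t) => t.id === id);
  if (i !== -1) queue.splice(i, 1);
};

// Advance virtual time to `target` (e.g. the next frame boundary),
// firing every due callback in timestamp order along the way.
function advanceTo(target) {
  for (;;) {
    queue.sort((a, b) => a.due - b.due); // re-sort: callbacks may schedule new timers
    const next = queue[0];
    if (!next || next.due > target) break;
    virtualNow = next.due; // jump the clock to the moment the callback expected
    if (next.interval !== undefined) {
      next.due += next.interval; // intervals reschedule themselves
    } else {
      queue.shift();
    }
    next.cb();
  }
  virtualNow = target;
}
```

The point of the sort-and-replay loop is that a 5ms interval fires six or seven times between two 30fps frames, at its expected virtual timestamps, instead of once per frame.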
Also their main loop will let async code "escape" their control. You want to make sure the microtask queue is drained before actually capturing anything. If you don't care about performance, you can use something like await new Promise(resolve => setTimeout(resolve, 0)) for this (using the real setTimeout) before you capture your frame. Use the MessageChannel trick if you want to avoid the delay this causes.
For correctness you should also make sure to drain the queue before calling each of the setTimeout/setInterval callbacks.
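For what it's worth, here is a sketch of both draining tricks, assuming you stashed a reference to the real setTimeout before monkey-patching it:

```javascript
// Both helpers yield one macrotask, which guarantees every pending
// microtask (resolved promises, queueMicrotask callbacks) runs first.

// Keep a reference to the real setTimeout before patching the global.
const realSetTimeout = setTimeout;

// Simple version: a real 0ms timeout works, but timer clamping can
// add a millisecond or more of dead time per frame.
function drainMicrotasksSlow() {
  return new Promise((resolve) => realSetTimeout(resolve, 0));
}

// MessageChannel version: posting a message schedules a macrotask
// without timer clamping, so the round trip is much cheaper.
function drainMicrotasksFast() {
  return new Promise((resolve) => {
    const { port1, port2 } = new MessageChannel();
    port1.onmessage = () => { port1.close(); resolve(); };
    port2.postMessage(null);
  });
}
```

Either way, `await` one of these right before capturing (and before each timer callback, per the point above), so no promise chain is still mid-flight when you grab the pixels.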
I'm leaning towards that code being simplified, since they'd probably have noticed the breakage this causes. Or maybe, given that this is their business, their whole solution is vibe-coded and they have no idea why it's sometimes acting strange. Anyone taking bets?
Crazy that this approach seems to be the preferred way to do it. How hard would it be to implement the recording in the browser engine? There you could do it perfectly, right?
I did this a few years ago. The approach these guys are taking is kinda hacky compared to other, better ways, and I've tried most of them.
It works, but only in a limited way; there are lots of problems and caveats that come up.
I dropped it in the end, partly because of all the problems and edge cases, partly because it's a solution looking for a problem: AI essentially wipes out any demand for generating video in browsers.
I ended up writing code that modified Chromium and grabbed the frames directly from deep in the heart of the rendering system.
It was a big technical challenge and a lot of fun but as I say, fairly pointless.
And there are other solutions that are arguably better: recording video with OBS, with the GPU's NVENC engine, or with a hardware video capture dongle. There are also other purely software approaches on Linux that work extremely well.
You can see some of the results I got from my work here:
https://www.youtube.com/watch?v=1Tac2EvogjE
https://www.youtube.com/watch?v=ZwqMdi-oMoo
https://www.youtube.com/watch?v=6GXts_yNl6s
https://www.youtube.com/watch?v=KzFngReJ4ZI
https://www.youtube.com/watch?v=LA6VWZcDANk
In the end, if you want to capture browser video, use OBS or ffmpeg with nvenc or something; all the fancy footwork isn't needed.
> The core issue is that browsers are real-time systems. They render frames when they can, skip frames under load, and tie animations to wall-clock time. If your screenshot takes 200ms but your animation expects 16ms frames, you get a stuttery, unwatchable mess.
But by faking the performance of your webpage, maybe you are lying to your potential users too?
> But by faking the performance of your webpage, maybe you are lying to your potential users too?
I think you're missing the point of it a little. The "user" is someone who wants to watch a rendered video of the browser's display, but if it takes longer than one frame to actually draw the visual, the browser will skip it. (Where you read the word "frame" in this comment, think of a frame of video or film, not a browser "frame" like people used to make broken menus with.)
Instead, this appears to just tell the browser it's got plenty of time, let it keep drawing, and then capture the output when it's done.
It's not too different from how you'd do, for example, stop-motion animation: you'd take a few minutes to pose each figure and set up the scene, trip the shutter, take a few more minutes to pose each figure for the next part of each movement, trip the shutter again, and so on. Say it took five minutes to set up and shoot each frame; then one second of film would take an hour of solid work (assuming 12 frames per second, or "shooting on twos").
It's just saying "take all the time you want, show me it when it's done" and then worrying about making it into smooth video after the work is done.
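In code terms, that hand-waving could look something like the sketch below. This is my own illustration, not the article's implementation; `advanceVirtualClock` and `capture` are hypothetical stand-ins for whatever mechanism fakes the clock and grabs the pixels.

```javascript
// "Take all the time you want" loop: advance the fake clock by exactly
// one frame, let pending work settle, then capture, no matter how long
// the capture itself takes in wall-clock time.
const FPS = 30;

async function record(totalFrames, advanceVirtualClock, capture) {
  const frames = [];
  for (let i = 0; i < totalFrames; i++) {
    advanceVirtualClock(1000 / FPS); // pretend exactly one frame elapsed
    await Promise.resolve();         // let pending microtasks settle (simplified)
    frames.push(await capture());    // slow capture never skews the timing
  }
  return frames;
}
```

The smooth-video step afterwards is then just encoding the captured stills at the chosen frame rate.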
> The "user" is someone who wants to watch a rendered video of the brower's display
While such a person might indeed exist, I think the more common situation is a vendor showing a demo of how a website might work. In that situation the consumer wants a realistic depiction of someone interacting with the site. Though of course for the user of the video service it might be very useful if the video hides all manner of performance issues.
This post smells of LLM throughout. Not just the structure (many headings, bullet lists), but the phrasing as well. A few obvious examples:
- no special framework. No library buy-in. Just a URL
- Advance clock. Fire callbacks. Capture. Repeat. Every frame is deterministic, every time.
- We render dozens of frames that nobody will ever see, just to keep Chrome's compositor from going stale.
- The fundamental insight that you could monkey-patch browser time APIs and combine that with Chrome's deterministic rendering mode to capture arbitrary web pages frame-by-frame is genuinely clever
- Where we diverged
The whole post is like this, but these examples stand out immediately. We haven't quite collectively put a name on this style of writing yet, but anyone who uses these tools daily knows how to spot it immediately.
I'm okay with using LLMs as editors and even drafters, but it's a sign of laziness and carelessness when your entire post feels written by an LLM and the voice isn't your own.
It feels inauthentic, and companies like Replit should consider the impact on their brand before letting people publish these kinds of phoned-in blog posts. Especially after the catastrophe that was the Cloudflare Matrix incident (which they later "edited" and never owned up to).
And the lede is buried at the very end: This is just a vibe-coded modification of https://github.com/Vinlic/WebVideoCreator, and instead of making their changes open source since they're "standing on the shoulders of giants", the modifications are now proprietary.
In the end, being an AI company is no excuse for bad writing.