Next.js Discord

Discord Forum

how to implement the chatGPT DOM update effect?

Unanswered
Louisiana Waterthrush posted this in #help-forum
Louisiana WaterthrushOP
If you ask a question on the ChatGPT web app and watch the typewriter-effect output, you'll notice that while the second line is streaming in, the first line can already be selected and copied, and it is never touched again. The DOM updates are truly incremental.

Imagine implementing something similar with Vue/React:

1. Fetch the GPT API and stream the result incrementally; on each returned chunk, append it: text += content (executed once per chunk)
2. Parse the accumulated text into HTML: result = markdownIt(text) (executed once per chunk)
3. Set v-html/innerHTML = result, rendering the final HTML form of the markdown.
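The three steps above can be sketched as a self-contained toy. The GPT stream, markdownIt, and the DOM element are all stubbed with stand-ins here so it runs anywhere; none of this is the real markdown-it API:

```javascript
// Toy stand-in for a markdown renderer: wraps each line in <p>.
// The real markdown-it does far more; this only shows the data flow.
function markdownIt(text) {
  return text
    .split("\n")
    .map(line => `<p>${line}</p>`)
    .join("");
}

// `chunks` stands in for the pieces streamed back from the GPT API,
// `sink` for the DOM element whose innerHTML gets replaced.
function renderNaively(chunks, sink) {
  let text = "";
  for (const content of chunks) {
    text += content;                 // 1. accumulate the streamed chunk
    const result = markdownIt(text); // 2. re-parse the WHOLE text so far
    sink.innerHTML = result;         // 3. replace the WHOLE rendered output
  }
}

const sink = { innerHTML: "" }; // plain-object stub for a real DOM element
renderNaively(["Hello ", "world\nSecond ", "line"], sink);
console.log(sink.innerHTML); // → <p>Hello world</p><p>Second line</p>
```

Note that step 3 replaces the entire rendered output on every chunk; that full replacement is what destroys the user's selection.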

Here is the problem: re-parsing the full markdown text on every chunk re-renders the entire HTML output. While the on-screen output appears incremental, the whole rendered DOM keeps getting replaced, so already-output content can't be selected, which results in a poor experience and performance issues. Any optimization suggestions?

20 Replies

Louisiana WaterthrushOP
text += content
result = markdownIt(text)
// then render the whole result

this is a very easy way to do it, but my problem is: when I just re-render the whole result every time, the whole markdown output is always repainted. It's not updated line by line (but it seems the ChatGPT web app achieves that).
@Louisiana Waterthrush this is a very easy way to do it, but my problem is: when I just re-render the whole result every time, the whole markdown output is always repainted.
chatgpt web does not implement this. as you can see above, while a link is still in progress, chatgpt renders it as plain text: "[issue tracker](https:/".

when the link is complete, the frontend turns it into a full link. if it really did incremental computation like that, it would have to delete the "[issue tracker](https:/" part and insert "[issue tracker](https://actual-link.com)" instead. that's technically possible but extremely complicated, since markdown syntax is not as simple as it looks; a single markdown line can influence many previous lines.
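To make the "a markdown line can influence many previous lines" point concrete, here is a toy renderer (not markdown-it; it only knows setext `===` headings and paragraphs), showing a later chunk retroactively changing how an earlier, already-rendered line must be output:

```javascript
// Toy renderer: a line followed by "===" is a setext <h1>,
// any other non-empty line is a <p>.
function render(text) {
  const lines = text.split("\n");
  const out = [];
  for (let i = 0; i < lines.length; i++) {
    if (lines[i + 1] !== undefined && /^=+$/.test(lines[i + 1])) {
      out.push(`<h1>${lines[i]}</h1>`);
      i++; // skip the "===" underline line
    } else if (lines[i] !== "") {
      out.push(`<p>${lines[i]}</p>`);
    }
  }
  return out.join("");
}

console.log(render("Title"));      // → <p>Title</p>
console.log(render("Title\n==="));  // → <h1>Title</h1>
```

The chunk containing "===" arrives after "Title" was already on screen, yet it changes "Title" from a paragraph into a heading, which is why a purely append-only DOM strategy can't cover all of markdown.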

chatgpt web also becomes slow when the conversation gets long, which further proves there is no magic going on in there. its markdown parser is just performant and its syntax highlighting library lightweight, that's pretty much it.
Louisiana WaterthrushOP
Pay attention to that pre element
@Louisiana Waterthrush Pay attention to that pre element
yes because after the rerender, all elements are the same except the end of the pre element. your point being?
here, newly added markdown tokens don't change the previous paragraphs, so the output is the same
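Under that assumption (completed blocks never change once a new block has started), a block-level cache is one way to avoid re-parsing everything on each chunk. A minimal sketch, with renderBlock standing in for a real per-block markdown renderer:

```javascript
// Cache of finished blocks: block source text -> rendered HTML.
const cache = new Map();

// Toy per-block renderer; a real one would call markdown-it on the block.
function renderBlock(block) {
  return `<p>${block.replace(/\n/g, " ")}</p>`;
}

function renderIncremental(text) {
  // Blocks are separated by blank lines.
  const blocks = text.split(/\n{2,}/).filter(b => b !== "");
  return blocks
    .map((block, i) => {
      const isLast = i === blocks.length - 1;
      if (!isLast && cache.has(block)) return cache.get(block); // reuse
      const html = renderBlock(block);
      if (!isLast) cache.set(block, html); // the last block may still grow
      return html;
    })
    .join("");
}
```

Caveat: splitting on blank lines is naive; a fenced code block can itself contain blank lines, so a real implementation would have to track open fences, which is exactly where the `<pre>` problem in this thread comes in.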
Louisiana WaterthrushOP
so I need to handle the <pre>? seems like chatgpt handles it specially; while the code is streaming, the pre tag is very stable
My entire pre tag is flashing and constantly updating, but ChatGPT's pre tag looks very stable, with code only being appended at the end, even while the pre tag hasn't closed yet.
@Louisiana Waterthrush so I need to handle the <pre>? seems like chatgpt handles it specially; while the code is streaming, the pre tag is very stable
oh so you mean just the pre. this one i agree with: you need a serious syntax highlighting solution that handles the pre specifically here for it to be performant
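One way to keep the pre stable (a sketch of the append-only idea, not a claim about how ChatGPT actually does it) is to compare the previously rendered code text with the new text and, as long as the old text is a prefix of the new, append only the tail. `pre` here is a plain-object stub for the real DOM element:

```javascript
// Append-only update for streaming code inside a <pre>.
function appendCodeDelta(pre, newCode) {
  const old = pre.textContent;
  if (newCode.startsWith(old)) {
    // In a real DOM you would pre.appendChild(document.createTextNode(delta))
    // so the existing text nodes (and any user selection) stay untouched.
    pre.textContent = old + newCode.slice(old.length);
  } else {
    // The content diverged (e.g. highlighting re-ran): full replace.
    pre.textContent = newCode;
  }
}

const pre = { textContent: "const x" };
appendCodeDelta(pre, "const x = 1;");
console.log(pre.textContent); // → const x = 1;
```

Syntax highlighting is what makes this hard in practice: highlighters typically re-emit the whole block's HTML, so a streaming-friendly setup needs a highlighter that can tokenize incrementally (or highlighting deferred until the fence closes).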
Louisiana WaterthrushOP
Let's think about how ChatGPT achieves such stable output with minimal DOM updates. (That's the point)
text += content
result = markdownIt(text)
// then render the whole result

I'm just updating like this, and the experience is very poor.
though i have never attempted to use anything except the heavy shiki for this, so i don't know if more lightweight libs like prism or highlight.js would "just work"
i just use react-markdown here, without any of that text += content shenanigans, and the performance is smooth until at least 10 long messages later
sure, i don't have syntax highlighting, but that's because i tried once with shiki which is heavy af and it was indeed laggy. i never tried with more lightweight libraries
@Louisiana Waterthrush Pay attention to that pre element
Louisiana WaterthrushOP
check out the second recording here (the tag is flashing while streaming, so you can't copy the already-output code; that's the biggest difference, because it's flickering and constantly updating)
oh so it's indeed about the pre tag, this one then i have no comment because i haven't done it myself
Louisiana WaterthrushOP
I think tags like pre must be specially handled by the ChatGPT website; when the code is streaming, it's very smooth.