MarkLogic 8 - Streaming a large result set to a file - JavaScript - Node.js Client API

Question

Let's say I have a query that is going to return a very large response. Possibly thousands of records and possibly gigabytes of data.

Normally in the UI, we just show a single page of this data. Now I need an option to take the entire result set and stream it out to a file. Then the user can go download this at their leisure.

So how do I select all results from a query using query builder and then stream it out to a file in chunks without running out of memory?

Answer

If you want the document descriptors, you can open an object stream as in the following example:

https://github.com/marklogic/node-client-api/blob/develop/examples/query-builder.js#L38

If you only want the content of the documents, you can use a chunked stream as shown in the following example (the same approach can be used for a query):

https://github.com/marklogic/node-client-api/blob/develop/examples/read-stream.js#L27

The general approach is:

  • Open the target file as a write stream:

https://nodejs.org/api/fs.html#fs_fs_createwritestream_path_options

  • Query the first page of documents, piping the documents' read stream to the file's write stream, taking care to set the end option to false:

https://nodejs.org/api/stream.html#stream_readable_pipe_destination_options

  • Loop on reading documents, incrementing the start page by the page length, until finished reading.

  • Call end() on the write stream to close the file:

https://nodejs.org/api/stream.html#stream_writable_end_chunk_encoding_callback

Hope that helps.
