According to the Node.js documentation (https://nodejs.org/api/buffer.html#class-blob), Blob is still experimental, so I don't find the use of Blob desirable. I have another idea to share: if users want the simplicity of blobs, there's nothing stopping us from creating a new BlobDownloader class that relies on the core download function.
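Purely as a sketch of the idea, with every name hypothetical:

```js
// Hypothetical sketch: Blob convenience stays opt-in instead of baking
// the still-experimental Blob into the core download path.
// `download` stands for the core download function discussed below.
class BlobDownloader {
  constructor (file, download) {
    this.file = file
    this.download = download
  }

  async toBlob () {
    // Assumes the core download resolves to a Buffer or Uint8Array.
    const data = await this.download(this.file)
    return new Blob([data])
  }
}
```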
---
1.0 was released, but it was a release intended to be backwards compatible with 0.17, with few modifications. For 2.0 I want to rewrite the library to fix design issues:
### API exports
Currently it works like this:
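Roughly like the snippet below (reconstructed from the 1.x README, so details may differ):

```js
import { Storage, File } from 'megajs'

// Logging in means constructing a class instance, which is not obvious:
const storage = await new Storage({
  email: 'user@example.com',
  password: 'password123'
}).ready

// The File export shadows the global File object:
const file = File.fromURL('https://mega.nz/file/...')
```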
There are some issues: the `File` export conflicts with the global `File` object, having to log in using `new Storage()` is not intuitive, and the use of large classes makes tree shaking less efficient than desirable. If you want to make a webpage that loads the name of the last uploaded file in a folder, something like "Last update: filename.ext", it will load a lot of unneeded methods, even with tree shaking.

Proposed exports:
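Something like the following; every name here is illustrative, not a committed 2.0 API:

```js
// Hypothetical function-based exports instead of large classes.
import { login, upload, download } from 'megajs'

const session = await login({
  email: 'user@example.com',
  password: 'password123'
})

// Standalone functions work with whatever context they are given, so
// the same upload could target a logged-in account or a MEGAdrop folder.
const file = await upload(session.root, 'hello.txt', new Blob(['hello']))
const data = await download(file)
```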
No more `File` conflicts, and the export names would be more intuitive. Also, separating `upload` and `download` from the rest of the library will make tree shaking way more efficient and will make the library more flexible, since the same `upload` function used when logged in could be used to upload files to MEGAdrop folders.

The only issue is that switching from `file.download(opts)` to `download(file, opts)` may look weird: why not just keep class methods, right? Well, because those are huge and complicated methods. To me it is the same as when, in Vue 3, some API methods were moved out of classes to be tree-shakable; as the Vue docs explain, this enables "tree-shaking, which is a fancy term for 'dead code elimination.'"

### Simplify the upload method
The current upload method is way too complicated: it accepts buffers, anything that `Buffer.from` accepts, and Node readable streams, and it also returns a writable stream. Sure, internally it's just the writable stream and the input is written to it, but, since MEGA's API requires a file size before the upload starts and also requires uploads to be chunked, is that the best way to handle it?

One way to handle it is using blobs, since their interface makes chunking way simpler: just slice the blob from the chunk start to the chunk end, then send it. Chunk errors could easily be retried by reading the same slice of the blob again. Since streams are mostly used to reduce memory consumption when dealing with huge files, using blobs would be a good fit. On the other hand, Blobs are still experimental in Node, and at the moment there doesn't seem to be a way to create a blob that references a file on disk in Node or Deno without loading it entirely into memory.
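A minimal sketch of what blob-based chunking could look like, assuming a hypothetical `uploadChunk(chunk, offset)` request helper and an illustrative chunk size:

```js
const CHUNK_SIZE = 1024 * 1024 // illustrative; MEGA's real chunk sizes vary

async function uploadBlob (blob) {
  for (let offset = 0; offset < blob.size; offset += CHUNK_SIZE) {
    const chunk = blob.slice(offset, offset + CHUNK_SIZE)
    // On a chunk error the same slice can simply be read and sent again,
    // which is what makes retries trivial compared to a consumed stream.
    await uploadChunk(chunk, offset)
  }
}
```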
Surely the state of Blob will change in the future, making this approach way more desirable.
### Improve downloading
In order to download, you first need to call MEGA's API to get a download URL, then download from that URL. At the moment, if you want to read files with random access (like for streaming videos or reading archive files), the library does not allow doing that first API call just once; instead, it repeats the call for each read. That's issue #30.
One way to improve it is making `download` return not a Buffer or a stream, but an object that stores the download URL. Taking jimmywarting's suggestion, it would be a good thing if this object worked like a blob:
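For instance, given some `file` object (every name in this sketch is hypothetical):

```js
// Hypothetical shape for the object download() could return: it caches
// the download URL so random access never repeats the API call.
const handle = await download(file)

// Each slice maps to a ranged HTTP request against the cached URL,
// which is exactly the access pattern issue #30 asks for.
const firstKilobyte = await handle.slice(0, 1024).arrayBuffer()

// Sequential consumers can still get a stream of the whole file.
const stream = handle.stream()
```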
### Other improvements
Finally, 2.0 doesn't need to be rushed. I prefer to wait for Blob and fetch support to become at least a bit more stable in Node first.
If someone has suggestions for 2.0 - i.e. those that require major changes in the library - I will edit this post to include them. Suggestions for the next minor releases will be discussed in other threads.