
replace qs by querystring #1

Closed
ghost wants to merge 3 commits into master from unknown repository

Conversation

@ghost

@ghost ghost commented Jan 14, 2014

The native querystring module is faster than the qs module.

See querystring vs qs.

Hope it helps.

@jonathanong
Member

We don't want any API changes. Sorry!

@ghost
Author

ghost commented Jan 14, 2014

Hey @jonathanong, I really don't see where it changes the API...
But OK, no problem ;)

@defunctzombie

I am in favor of this change. The perf numbers seem to indicate it is worth it. What is the API change here? The return value of the parse method?

@jonathanong
Member

Nested support. I'm in favor as well but I don't want anyone to complain that there's no more nested support. No one will notice the speed difference.

@defunctzombie

fair enough

@Fishrock123
Contributor

Has anyone made nested queries an issue on the node repo? I couldn't find one.

Perhaps we should try to pull support for nesting into node?

@jonathanong
Member

According to qs, it was removed from node. I don't think it belongs there either. There are so many security holes you have to worry about.

@dougwilson
Contributor

Yea, it breaks the niceness of always knowing the value for a key is a string. Plus I don't think it's actually part of any RFC.

@tj
Member

tj commented Jan 14, 2014

those numbers don't mean anything without the # of ops; looks like a micro-optimization at best

@ghost
Author

ghost commented Jan 14, 2014

If I send a request with 1 MB of data like "firstname=cydgy&lastname=cydgy", the qs module will parse all of it. Supporting "maxKeys" is a good security measure, in my opinion.

What do you think?

@jonathanong
Member

you could also just set a lower request size limit

@dougwilson
Contributor

@CydGy by the time the data is handed over to qs, the entire query string has already made it into memory, so you need to limit the size at the data event level, using the limit options (as @jonathanong suggested).
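A hedged sketch of that suggestion: enforce a byte limit while the body streams in, before any query-string parsing happens. The names `LIMIT` and `readBody` are illustrative, not connect's actual API.

```javascript
var LIMIT = 1024; // bytes (hypothetical limit)

function readBody(req, callback) {
  var received = 0;
  var chunks = [];
  var done = false;

  req.on('data', function (chunk) {
    if (done) return;
    received += chunk.length;
    if (received > LIMIT) {
      done = true;
      req.destroy(); // stop reading; nothing ever reaches the parser
      return callback(new Error('request entity too large'));
    }
    chunks.push(chunk);
  });

  req.on('end', function () {
    if (done) return;
    done = true;
    callback(null, Buffer.concat(chunks).toString());
  });
}
```

The key point is that the limit is applied per chunk at the `data` event, so an oversized body is rejected before the full string is ever assembled in memory for parsing.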

@ghost
Author

ghost commented Jan 14, 2014

Sure, but if you're uploading big files and using connect, you need to change the size limit depending on the URL. That's not very pretty, I think!
If maxKeys is used, you can set the limit to 10 MB and it will not parse all the data. Parsing takes a lot of time.

But it was just an idea for improvement; I agree that there's no real demonstration.

@dougwilson
Contributor

This module does not actually parse multipart bodies, which is what file uploads would use, so I'm not sure how the limit here would affect file uploads, as the limits would be different. Or are you stuffing a base64-encoded file into a urlencoded body? Can you show a use case where specifying a limit for the urlencoded body (what qs is used for) would affect the limit for a multipart body?

@ghost
Author

ghost commented Jan 14, 2014

Oops, that's true, sorry; for uploading files you don't use this module. So yes, just set a lower request size limit.

@expressjs expressjs locked and limited conversation to collaborators Jul 20, 2014


6 participants