I have some frontend use cases for Rust that I just ended up rewriting in TypeScript, because transferring and loading the Rust Wasm blob is more expensive than running the program.
I imagine Wasm-conscious optimizations would look a lot like targeting microcontrollers, but with weird escape hatches to high-level browser APIs.
Some TILs:
Hashing
> The default hashing algorithm is not specified, but at the time of writing the default is an algorithm called SipHash 1-3. This algorithm is high quality—it provides high protection against collisions—but is relatively slow, particularly for short keys such as integers.
> An attempt to switch from fxhash back to the default hasher resulted in slowdowns ranging from 4-84%!
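For anyone who hasn't done the swap before, it's just a change to the map's hasher type parameter. A minimal sketch using the fxhash crate mentioned in the quote (I believe rustc-hash exposes an equivalent FxHashMap):

    use std::collections::HashMap;
    use fxhash::FxHashMap; // type alias for HashMap<K, V, FxBuildHasher>

    fn main() {
        // Default hasher: SipHash 1-3, collision/DoS resistant but slower on small keys.
        let mut default_map: HashMap<u64, &str> = HashMap::new();
        default_map.insert(1, "one");

        // Same API, faster non-cryptographic hasher swapped in.
        let mut fx_map: FxHashMap<u64, &str> = FxHashMap::default();
        fx_map.insert(1, "one");
    }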
I/O
> Rust’s print! and println! macros lock stdout on every call. If you have repeated calls to these macros it may be better to lock stdout manually.
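Concretely, that looks something like this (just a sketch; the point is to take the lock once around the loop instead of once per println! call):

    use std::io::Write;

    fn main() -> std::io::Result<()> {
        let stdout = std::io::stdout();
        // One lock acquisition for the whole loop instead of one per call.
        let mut out = stdout.lock();
        for i in 0..10_000 {
            writeln!(out, "line {i}")?;
        }
        Ok(())
    }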
Build times
> If you use dev builds but don’t often use a debugger, consider disabling debuginfo. This can improve dev build times significantly, by as much as 20-40%.
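That's roughly a one-line change in Cargo.toml:

    # Cargo.toml
    [profile.dev]
    debug = false                 # drop debuginfo entirely
    # debug = "line-tables-only"  # or, on newer Cargo, keep just enough for backtraces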
Interesting std library alternatives
> If you have many short vectors, you can use the SmallVec type from the smallvec crate. SmallVec<[T; N]> is a drop-in replacement for Vec that can store N elements within the SmallVec itself, and then switches to a heap allocation if the number of elements exceeds that.
> If you have many short vectors and you precisely know their maximum length, ArrayVec from the arrayvec crate is a better choice than SmallVec. It does not require the fallback to heap allocation, which makes it a little faster.
> The SmallString type from the smallstr crate is similar to the SmallVec type.
I doubt I'll change my use of the standard types often, but this is good to know for the cases where it does apply.
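For reference, a rough sketch of the two vector types quoted above (using the const-generic arrayvec 0.7 API; older versions spelled the type ArrayVec<[T; N]>, so check the crate docs):

    use arrayvec::ArrayVec;
    use smallvec::{smallvec, SmallVec};

    fn main() {
        // Up to 4 elements stored inline; the 5th push spills to the heap.
        let mut sv: SmallVec<[u32; 4]> = smallvec![1, 2, 3, 4];
        sv.push(5); // heap allocation happens here, the API stays the same

        // Fixed capacity of 4, never touches the heap; try_push reports overflow.
        let mut av: ArrayVec<u32, 4> = ArrayVec::new();
        av.push(1);
        assert!(av.try_push(2).is_ok());
    }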
Advice on enums
> If an enum has an outsized variant, consider boxing one or more fields.
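The intuition being that an enum is as large as its largest variant, so one oversized variant inflates every value of the type. A quick sketch:

    #[allow(dead_code)]
    enum Unboxed {
        Small(u8),
        Big([u8; 1024]), // forces every Unboxed to be ~1 KiB
    }

    #[allow(dead_code)]
    enum Boxed {
        Small(u8),
        Big(Box<[u8; 1024]>), // payload lives behind a pointer instead
    }

    fn main() {
        // Exact sizes are up to the compiler, but the gap is roughly 1 KiB vs. two words.
        println!("unboxed: {} bytes", std::mem::size_of::<Unboxed>());
        println!("boxed:   {} bytes", std::mem::size_of::<Boxed>());
        assert!(std::mem::size_of::<Boxed>() < std::mem::size_of::<Unboxed>());
    }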
I'm surprised I didn't see any advice about skipping proc macros or Serde for faster compile times.
Most of these compile-time improvements seem to be more along the lines of drop-in changes that don't require larger refactors. Removing something like serde from a codebase that already makes use of it is generally going to be a lot more work.
If you're referring to serde being pulled in by a dependency when you don't need it, most well-behaved crates already make that something you opt into by specifying a feature, rather than something you have to go out of your way to disable. That said, I've had a theory for a while now that when Rust projects suffer from long compile times, the most significant cause is unneeded code from dependencies getting compiled, and that the poor ergonomics around Cargo features have basically encouraged the opposite of the good behavior I described above. I've still almost never seen this discussed outside of when I bring it up, so rather than restate my case every time, I wrote up my thoughts in a blog post a while back. I don't have much hope that anyone will take it seriously enough to either convince me I'm wrong or do anything about it: https://saghm.com/cargo-features-rust-compile-times/
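For what it's worth, the "well-behaved" setup looks roughly like this (crate names and versions made up for illustration):

    # In the library's Cargo.toml: serde is optional and only compiled when requested.
    [dependencies]
    serde = { version = "1", optional = true, features = ["derive"] }

    [features]
    serde = ["dep:serde"]

    # In a downstream crate's Cargo.toml: opt in explicitly, and prune default
    # features you don't need.
    #
    # [dependencies]
    # some-upstream-crate = { version = "0.3", default-features = false, features = ["serde"] }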