I'm pretty excited about Mojo's future (as I write about here). But, in short, there are some deal-breakers that mean I can't use Mojo (yet) for light-speed-io. The deal-breakers I've found are:
Mojo can't (yet) call C or C++ code, so Mojo wouldn't be able to call liburing or compression libraries. C/C++ interop is on Mojo's roadmap, though.
(I understand that Mojo can call Python libraries. But I don't want the performance hit of spinning up a CPython interpreter just to call into a Python compression library!)
On top of these "deal-breakers" are a bunch of technical questions about Mojo:
How does Mojo's parallelize function distribute tasks across CPU cores? Does it spin up a pool of worker threads, one thread per CPU core, and use work stealing to distribute the tasks across threads? Or does it spin up one OS thread per task (which would be bad if we have 1 million tasks!)? Or use a simple, blocking queue to distribute tasks to a thread pool (also bad)? (I've asked on Mojo's discussion forum.)
Even if Mojo could bind to C/C++, I don't think I've got the time to write bindings for io_uring, zstd, etc.
Logging?
Debugger?
Performance profiling?
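To make the parallelize question concrete: here's a minimal Rust sketch of the kind of scheduling I'm hoping Mojo uses — a fixed pool of workers (one per core) pulling task indices from a shared atomic counter, so that a million tasks never means a million OS threads. (`parallel_for` is a hypothetical helper I made up for illustration; it is not Mojo's API, and real work-stealing schedulers like rayon's are considerably more sophisticated.)

```rust
use std::sync::atomic::{AtomicUsize, Ordering};
use std::thread;

// Hypothetical sketch: distribute `num_tasks` tasks across `num_workers`
// threads. Each worker claims the next unclaimed task index from a shared
// atomic counter, so thread count stays fixed regardless of task count.
fn parallel_for(num_tasks: usize, num_workers: usize, task: impl Fn(usize) + Sync) {
    let next = AtomicUsize::new(0);
    thread::scope(|s| {
        for _ in 0..num_workers {
            s.spawn(|| loop {
                // Claim the next task index; stop when all tasks are claimed.
                let i = next.fetch_add(1, Ordering::Relaxed);
                if i >= num_tasks {
                    break;
                }
                task(i);
            });
        }
    });
}

fn main() {
    // Sum 0..999 across 4 workers: 999 * 1000 / 2 = 499500.
    let sum = AtomicUsize::new(0);
    parallel_for(1000, 4, |i| {
        sum.fetch_add(i, Ordering::Relaxed);
    });
    println!("{}", sum.load(Ordering::Relaxed));
}
```

This atomic-counter scheme is the simplest fixed-pool design; a work-stealing pool improves on it by giving each worker a local deque and letting idle workers steal from busy ones, which matters when task durations are uneven.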
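For contrast on the bindings point: this is roughly what a minimal C binding looks like in Rust today — declaring the C function's signature and calling it through `unsafe`. (Shown here with libc's `strlen` purely as a toy example; real bindings for io_uring or zstd involve hundreds of functions and structs, which is why I don't fancy writing them by hand.)

```rust
use std::ffi::CString;
use std::os::raw::c_char;

// Declare the C function we want to call; libc is linked by default
// on most platforms, so no extra build configuration is needed here.
extern "C" {
    fn strlen(s: *const c_char) -> usize;
}

fn main() {
    // CString gives us a NUL-terminated buffer that C expects.
    let s = CString::new("hello").unwrap();
    // Calling across the FFI boundary is unsafe: the compiler can't
    // verify the C side upholds Rust's invariants.
    let len = unsafe { strlen(s.as_ptr()) };
    println!("{}", len);
}
```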
And a bunch of non-technical questions:
Will Mojo endure? What happens if Modular goes bust? (Although Modular has raised something like $130 million!)
How will Mojo be governed, long term?
Will Mojo be popular with the community? Will it be used by lots of projects?
Will Mojo become a general-purpose language?
It feels like Modular's business model is largely about speeding up ML inference. Will Mojo's developers be given the time to develop features which don't directly help Modular's business?
Have any other early-stage VC-funded startups successfully created an entire new programming language?
However, longer-term, I'm interested in helping to speed up computation on large, out-of-core, labelled, multi-dimensional data. And, for this, I'm excited about Mojo's "kernel fusion" and support for a range of hardware accelerators. My hope is that Mojo's "kernel fusion" could also optimise long data science "queries" that users want to run against multi-dimensional datasets. I've asked here about whether Mojo will help speed up non-ML workloads.
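To illustrate what "kernel fusion" buys you, here's a toy Rust sketch of the transformation by hand: two elementwise operations either run as two passes with an intermediate allocation, or fused into a single pass. (A fusing compiler would do this rewrite automatically across whole query pipelines; this is just the idea in miniature, not anything Mojo-specific.)

```rust
// Unfused: two passes over the data, materialising an intermediate Vec.
fn unfused(x: &[f64]) -> Vec<f64> {
    let doubled: Vec<f64> = x.iter().map(|v| v * 2.0).collect();
    doubled.iter().map(|v| v + 1.0).collect()
}

// Fused: one pass, no intermediate allocation -- the kind of rewrite
// a fusing compiler could apply automatically to a chain of ops.
fn fused(x: &[f64]) -> Vec<f64> {
    x.iter().map(|v| v * 2.0 + 1.0).collect()
}

fn main() {
    let x = [1.0, 2.0, 3.0];
    // Both versions compute the same result; fused touches memory once.
    assert_eq!(unfused(&x), fused(&x));
    println!("{:?}", fused(&x));
}
```

For out-of-core multi-dimensional data the win is bigger than it looks here: fusion avoids writing intermediate arrays that may not even fit in RAM.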
(Although, of course, Rust performs a bunch of optimisations too, and there's a bunch of work on SIMD in Rust, including std::simd.)