Original post

As a long-time Win32 developer, my only answer to that question is “of course there is!”

The efficiency difference between native and “modern” web stuff is easily several orders of magnitude; you can write very useful applications that are only a few KB in size, a single binary, and that same binary will work across 25 years of OS versions.

Yes, computers have gotten faster and memory and disks much larger. That doesn’t mean we should be wasting it to do the same or even less functionality we had with the machines of 10 or 20 years ago.

For example, IM, video/audio calls, and working with email shouldn’t take hundreds of MB of RAM, a GHz-level many-core processor, and GBs of disk space. All of that was comfortably possible — simultaneously — with 256MB of RAM and a single-core 400MHz Pentium II. Even the web stuff at the time was nowhere near as disgusting as it is today — AJAX was around, websites did use JS, but simple things like webchats still didn’t require as much bloat. I lived through that era, so I knew it was possible, but the younger generation hasn’t, so perhaps it skews their idea of efficiency.

In terms of improvement, some things are understandable and rational, such as newer video codecs requiring more processing power because they are intrinsically more complex and that complexity is essential to their increase in quality. But other things, like sending a text message or email, most certainly do not. In many ways, software has regressed significantly.

I recently had to upgrade my RAM because I have Spotify and Slack open all the time. Today RAM is cheap, but it is crazy that those programs take up so many resources.

Another program I use a lot is Blender (3D software). Compared to Spotify and Slack it is a crazy complicated program with loads of complex functionality. But it starts in a blink and only uses resources when it needs to (for calculations and your 3D model).

So I absolutely agree with you.

I also think it has to do with the fact that older programmers know more about the cost of resources than younger programmers do. We used computers without hard disks and with only KBs of RAM. I always have this in my mind while programming.

The younger programmers may be right that resources don’t matter much because they are cheap and available. But I still had to upgrade my RAM.

I guess the question becomes: what is the native ecosystem missing that means devs are choosing to deliver memory/CPU hungry apps, rather than small efficient ones?

If you don’t mind me asking, how much RAM did you have before, and what did you upgrade to?

I recently got a new PC myself and decided to go for 16GB; my previous one (about a decade old) had 8GB and I didn’t feel I really hit the limit, but I wanted to be future-proof. Because, as you said, a lot of ‘modern’ applications take up a lot of memory.

I also went from 8GB to 16GB recently (virtual machines are hungry); but I had gotten rid of Slack even before that. I mean, yes, it has round edges and goes ping and has all those cutesy animations – but 2GB of RAM for a glorified IRC client, excuse me, what exactly is it doing with billions of bytes worth of memory? (“Don’t know, don’t care” seems to be its developers’ mantra)

I upgraded from 8 to 16GB. But I’m in the process of ordering a new desktop that will have 32GB.

Spotify and Slack are not problematic as individual programs but since I have a lot of other programs open they are the ones that take up more memory than they should. I mean: Spotify is just a music player. Why does it need 250MB RAM?

I like to imagine 20 years in the future we’ll see articles posted on HN, or whatever the cool kids are reading by then 😉

… articles with titles like:

“Slack in one Ruby statement” a la https://news.ycombinator.com/item?id=23208431

More seriously though, Spotify and Slack are optimised to intentionally be huge time wasters, so it makes sense the organisations that produce them don’t care about performance / efficiency.

You are looking back at the past with rosy goggles.

What I remember from the time was how you couldn’t run that many things simultaneously. Back when the Pentium II was first released, I even had to close applications, not because the computer ran out of RAM, but because the TCP/IP stack that came with Windows 95 didn’t allow very many simultaneous connections. My web browser and my chat were causing each other to error out.

AJAX was not around until late in the Pentium II lifecycle. Web pages were slow, needing full refreshes every time (fast static pages were an anomaly then, as now), and browsers’ network interaction was annoyingly limited. Google Maps was the application that showed us what AJAX could really do, years after the Pentium II was discontinued.

Also, video really sucked back in the day. A Pentium II could barely process DVD-resolution MPEG-2 in realtime. Internet connections generally were not the several Mbit/s necessary to get DVD quality with an MPEG-2 codec. Increasing resolution increases the processing power required geometrically. Being able to Zoom call and see up to 16 live video feeds simultaneously is an amazing advance in technology.

I am also annoyed at the resource consumption, but not surprised. Even something “native” like Qt doesn’t seem to be using the actual OS-provided widgets, only imitating them. I figure it’s just the burden we have to pay for other conveniences. Like how efficient supply lines mean consumer toilet paper shortages while the suppliers of office toilet paper sit on unsold inventory.

FWIW i do not remember having issues like that, i had mIRC practically always open, a web browser, email application, etc and i do not remember ever having networking issues.

Internet was slow but that was largely because for the most part of the 90s i was stuck with a very slow 2400 baud modem – i got to appreciate the option that browsers had to not download images by default :-P.

But in general i do not remember being unable to run multiple programs at the same time, even when i was using Windows 3.1 (though with Win3.1 things were a bit more unstable mainly due to the cooperative multitasking).

Me neither. I’m not going to lie and say that I had 40 applications open, but I DID have 5-10 apps using the web with 0 issues (a browser + IRC app + email client + MICQ + MSN Messenger + Kazaa/Napster + Winamp in stream mode).

Very very few of the web and desktop applications of today are as snappy and user-friendly as classic Winamp.

And yet, people stopped using it.

I used to use it all the time. Now I use Spotify instead. I’m not sure I want to go back to curating my own collection of mp3’s again.

Sure, but if you could have Spotify as-is, or a lightweight player like WinAMP, both with equal access to the Spotify service, which would you pick?

People aren’t using Spotify because the player is fantastic; they use it because Spotify has a huge library, is reasonably priced, and the player is sort of okay.

Totally agree. But the DRM monster rears its head. Everyone is afraid you’ll steal their choons if you’re allowed to play them on whatever player you like sigh

Mobile phones happened. Tiny memories while always being connected.

In a better world, you really wouldn’t need to. Winamp was great – The weak point was always the playlist editor, but winamp’s interface for simply playing music and seeing what was up next was wonderful. Spotify could provide you with a playlist plugin that simply gave a list of URLs, or let you download one that lasted X hours.

I still use Winamp for offline music. Nothing else is faster.

It may be true that people are partially looking back through rose-tinted glasses, but there’s more than just an inkling of truth to their side. Casey Muratori (game developer on The Witness) has a really good rant [1] about bloat in Visual Studio specifically, where he demonstrates load times & the debugger UI updating today vs on a Pentium 4 running XP. Whether or not you attribute the performance difference to new features in Win10/VS, it’s worth considering the fact that workflows are still being impacted so significantly on modern hardware. We were able to extract 100s of times more out of hardware and gave it up for ???

[1] https://www.youtube.com/watch?v=GC-0tCy4P1U

To be fair, 10-20 years ago was the age of Windows XP and Windows 7, not Windows 95. There was barely anything good about Windows 95, and there are likely not many people missing it; it was also a completely different era from the later “modern” desktops, hardware- as well as software-wise. If anything I would call that era the alpha version, problems included.

I could see Street View-like vistas on a Pentium 3/AMD Athlon. On far less power, I did the same things you can do today, with an Athlon XP and Kopete. On video, since BeOS and MPlayer I could multitask perfectly while playing XviD movies good enough for their era.

> A Pentium II could barely process DVD-resolution MPEG-2 in realtime.

According to http://www.vogons.org/viewtopic.php?p=423016#p423016 a 350MHz PII would’ve been enough for DVD, and that’s 720×480@30fps; videoconferencing would more commonly use 320×240 or 352×288, which have roughly a quarter of the pixels, and H.261 or H.263 as the codec.

> Being able to Zoom call and see up to 16 live video feeds simultaneously is an amazing advance in technology.

I’m not familiar with Zoom as I don’t use it, but it’s very likely you’re not actually receiving and decoding 16 separate video streams; instead an MCU or “mux box” is used to combine the streams from all the other participants into one stream, and they do it with dedicated hardware.

That said, video is one of the cases where increased computing power has actually yielded proportional returns.

Don’t think it can be an MCU box. You can select an individual stream from the grid to make it larger almost instantly. The individual feeds can display both as grid and a horizontal row. I’m assuming they send individual feeds and the client can ask for feeds at different predefined resolutions.

Without having used Zoom much I can’t definitively say how it works, but I’ve used BlueJeans quite a bit and noticed compression artifacts in various parts of the UI (e.g. text underneath each video source). That means BlueJeans is definitely muxing video sources and it really does not have a noticeable delay when changing the view. Since each video is already so compressed I think they can get away with sending you really low bitrate streams during the transition and you’ll barely notice.

Mixed plus whichever feed you request to enlarge sounds more reasonable.

I have an iMac G4 from 2003 (the sunflower things) on which I installed Debian PPC, and it is able to stream 720p content from my local network and play it back smoothly in VLC.

Most of those have nothing to do with OP’s point, which is that some software uses far more processing power than it should.

While on the topic, let’s remember the speech recognition software available for Windows (and some for Android 2.x) that was completely offline and could be voice activated with, gasp, any command!

Google with its massive data centers can only do “OK/Hey Google”. Riiight. I can’t believe there are actually apologists for this bs.

You’re talking about different concepts.

Voice recognition used by things like Google Assistant, Siri, Cortana, and Alexa usually relies on a “wake word”, where it’s always listening to you, but only starts processing when it is confident you’re talking to it.

Older speech recognition systems were either always listening and processing speech, or only started listening after you pressed a button.

The obvious downside of the older systems is that you can’t have them switched on all the time.

Do you mean Dragon Speech, or whatever it was called?

Anyway, old speech recognition software was quite horrible. Most did not even work without prior training. And Google now has offline speech recognition too. But true, the ability to trigger with any desired phrase is something still missing.

The ability to trigger with any desired phrase is easy, but not done for privacy reasons, to reduce the chance of it accidentally listening to irrelevant conversations.

The inability to change it from “Hey Google” is done for marketing/usability reasons.

What was the software name, if I may ask? I remember speech recognition pre-CNN to be quite terrible.

There were always idiots writing buggy code. The issues you mention are about “old software” on “old hardware”. GP is only talking about the “old style of software development”. Granted, Qt, X, and the Win API are unnecessarily complicated.

Agree 100%.

I wonder how much memory management affects this. My journey has been a bit different: traditional engineering degree, lots of large Ruby/JS/Python web applications, then a large C# WPF app, until finally at my last job, I bit the bullet and started doing C++14 (robotics).

Coming from more “designed” languages like C#, my experience of C++ was that it felt like an insane, emergent hodgepodge, but what impressed me was how far the language has come since the 90s. No more passing raw pointers around and forgetting to deallocate them, you can get surprisingly far these days with std::unique_ptr and std::shared_ptr, and they’re finally even making their way into a lot of libraries.

I sense there’s a bit of a movement away from JVM/CLR-style stop-the-world, mark-and-sweep generational GC, toward more sophisticated compile-time techniques like Rust’s borrow checker, Swift’s reference counting, or C++ smart pointers.

I mention memory management in particular both because it seems to be perceived as one of the major reasons why languages like C/C++ are “hard” in a way that C#/Java/JS aren’t, and I also think it has a big effect on performance, or at least, latency. I completely agree we’ve backslid, and far, but the reality is, today, it’s expensive and complicated to develop high-performance software in a lower-level, higher-performance language (as is common with native), so we’re stuck with the Electron / web shitshow, in large part because it’s just faster, and easier for non-specialists to develop. It’s all driven by economic factors.

There is movement away from stop-the-world GC, but not to reference counting. The movement is towards better GC.

The language Go has sub millisecond GC with multi-GB heaps since 2018. See https://blog.golang.org/ismmkeynote

Java is also making good progress on low latency GC.

Reference counting can be slower than GC if you are using thread safe refcounts which have to be updated atomically.

I don’t want to have to think about breaking cycles in my data structures (required when using ref counting) any more than I want to think about allocating registers.

Yet we still read articles and threads about how bad the Go GC is and the tradeoffs that it forces upon you.

I get the feeling that the industry is finally starting to realize that GC has been a massive mistake.

Memory management is a very important part of an application; if you outsource it to a GC, you stop thinking about it.

And if you don’t think about memory management you are guaranteed to end up with a slow and bloated app. And that is even before considering the performance impact of the GC!

The big hindrance has been that ditching the GC often meant using an old and unsafe language.

Now we have Rust, which is great! But we need more.

I don’t think it’s fair to call garbage collection a mistake. Sure, it has properties that make it ill-suited for certain applications, but it is convenient and well suited for many others.

The Go GC isn’t that great, it’s true. It sacrifices huge amounts of throughput to get low latency: basically a marketing optimised collector.

The new JVM GCs (ZGC and Shenandoah) are more sensibly designed. They sacrifice a bit of throughput, but not much, and you get pauseless GC. It still makes sense to select a throughput oriented collector if your job is a batch job as it’ll go faster but something like ZGC isn’t a bad default.

GC is sufficiently powerful these days that it doesn’t make sense to force developers to think about memory management for the vast bulk of apps. And definitely not Rust! That’s one reason web apps beat desktop apps to begin with – web apps were from the start mostly written in [pseudo] GCd languages like Perl, Python, Java, etc.

Go achieves those low pause times by giving the heap about 2x the memory it’s actually using. There’s no free lunch with GC.

The same applies with manual memory management: you instead get slower allocators unless you replace the standard library with something else, plus the joy of tracking down double frees and memory leaks.

I’m using Rust, so no double frees and no accidental forgetting to call free(). Of course you can still have memory leaks, but that’s true in GC languages too.

That is not manual memory management though, and it comes with its own set of issues, as everyone who has tried to write GUIs or games in Rust is painfully aware.

There is no free lunch no matter what one picks.

> Coming from more “designed” languages like C#, my experience of C++ was that it felt like an insane, emergent hodgepodge, but what impressed me was how far the language has come since the 90s. No more passing raw pointers around and forgetting to deallocate them, you can get surprisingly far these days with std::unique_ptr and std::shared_ptr, and they’re finally even making their way into a lot of libraries.

I worked for a robotics company for a bit, writing C++14. I don’t remember ever having to use raw pointers. That combined with the functionality in Eigen made doing work very easy — until you hit a template error. In that case, you got 8 screens full of garbage.

> Yes, computers have gotten faster and memory and disks much larger. That doesn’t mean we should be wasting it to do the same or even less functionality we had with the machines of 10 or 20 years ago.

With Moore’s law being dead, efficiency is going to get a lot more popular than it has been historically. I think we’re going to start seeing an uptick in the popularity of more efficient GUI programs like the ones you describe.

We see new languages like Nim and Crystal with their only value proposition over Python being that they’re more efficient.

Similarly, I predict we will see an uptick in popularity of actually native frameworks such as Qt over Electron for the same reason. We may even start seeing wrapper libraries that make these excellent but complicated frameworks more palatable to the Electron crowd, similar to how compiled languages that look like Python or Ruby are getting bigger.

I said that 20 years ago, but so far I’ve been proven completely wrong. Skype et al. just keep getting bigger and slower despite, from what I can tell, adding absolutely no additional functionality. In fact, if you consider that it can’t seem to do peer-to-peer anymore, it’s lost features.

Very few companies are rewriting their Electron apps in Win32 (although they should be). Instead the industry continues moving in that direction, or worse. CrashPlan rewrote their Java GUI a while back in Electron. Java UIs are mostly garbage, but compared with the Electron UI it was lightweight and functional. The Electron UI (besides shipping busted libraries) has literally stripped everything out, and uses a completely nonsensical paradigm/icon set for the tree expand/file selection. Things like Slack are a huge joke, as they struggle to keep it from dying under a load my 486 running mIRC could handle. So blame it on the graphics and animated GIFs people are posting in the chat windows, but the results speak for themselves.

Without a way for end-users to actually make judgements about application efficiency, there will never be any real pressure to make efficient, native apps.

Though the only measurement I think people would actually care about is battery impact, and even that is pretty much hidden away on phones, visible only to the few people who actually look.

But the other problem is: who cares if Discord or a browser’s HN tab aren’t optimally efficient? You’re just going to suck it up and use it. With this in mind, a lot of the native app discussion is technical superiority circlejerk.

> Without a way for end-users to actually make judgements about application efficiency, there will never be any real pressure to make efficient, native apps.

I’d say it’s more of a “without a way for end-users to compare” — the average user has no idea how much computing resources are necessary, so if they see their email client taking 15 seconds to load an email and using several GB of RAM, they won’t know any better; unless they have also used a different client that would do it instantly and use only a few MB of RAM.

Users complain all the time when apps are slow, and I think that’s the best point of comparison.

Even further: without a way for end users to take action based on that comparison.

If I decide that I don’t want to use Slack because it drains my battery, then I can’t take part in Slack conversations.

Because Slack is the go-to chat application for so many teams, excluding myself from those conversations is not feasible.

End result: I carry on using Slack.

There is an economic theory that’s escaping me right now, but the gist is that with certain goods, the market will hover at the very edge of efficiency; they have to become just scarce enough to break a certain threshold, then the market will realize that they are in fact a scarce resource, then correct to achieve a high efficiency equilibrium.

The Instacart website has dreadfully slow search. The instant-search results take forever to update with each character. The whole site is so slow it makes Safari on my Mac complain that the page uses significant resources.

This weekend I noticed that Amazon Fresh now delivers the same day; for the past few months they had no slots. I switched from Instacart to Amazon at once. The Amazon website lacks some bells and whistles compared to Instacart, but it is completely speedy. If the Instacart website were satisfactory I would never have switched.

Slow, bloated websites can absolutely cost companies money.

I think the other major, major thing people discount is the emergence of viable sandboxed installs/uninstalls, and the accompanying software distribution via app stores.

Windows 95 never had a proper, operating-system-supported package manager, and I think that’s a big part of why web applications took off in the late 90s/early 2000s. There simply wasn’t any guarantee that once you installed a native app, you could ever fully remove it. Not to mention all the baggage of DLL hell, and the propensity of software to write random junk all over the filesystem.

Mobile has forced a big reset of this, largely driven by the need to run on a battery. You can’t get away with as much inefficiency when the device isn’t plugged into the wall.

> There simply wasn’t any guarantee that once you installed a native app, you could ever fully remove it. Not to mention all the baggage of DLL hell, and the propensity of software to write random junk all over the filesystem.

Bloated, inefficient software is certainly present on the native side too, but it’s also possible to write single-binary “portable” ones that don’t require any installation — just download and run.

OS API sets have evolved toward more sandboxing. Things are more abstract. Fewer files on disk, more blob-store-like things. Fewer INI files in C:\Windows, more preference stores. No registry keys strewn about. .NET strong naming rather than shoving random DLLs into memory via LoadLibraryA().

(Hi, I’m a windows dev)

> [the absence of a package manager was] a big part of why web applications took off in the late 90s/early 2000s.

Of course apt-get is very convenient but I can’t see a Microsoft version of it letting companies deliver multiple daily updates.

Based on my experience of the time the reasons were, in random order

– HTML GUIs were less functional but easier to code and good enough for most problems

– we could deploy many times per day for all our customers

– we could use Java on the backend and people didn’t have to install the JVM on their PCs

– it worked on Windows and Macs, palmtops (does anybody remember them?) and anything else

– it was very easy to make it access our internal database

– a single component inside the firewall generated the GUI and accessed the db, instead of a separate frontend and backend, which by the way is the modern approach (but it costs more, and we didn’t need the extra functionality back then; JS was little more than cosmetic)

> Similarly, I predict we will see an uptick in popularity of actually native frameworks such as Qt over Electron for the same reason.

I would predict that if only Qt didn’t cost a mind-boggling price for non-GPL apps. They should really switch to pay-as-you-earn, e.g. like the Unreal Engine, so people would only start paying once they start earning serious money selling the actual app. If they don’t, Qt’s popularity is hardly going to grow.

> Yes, computers have gotten faster and memory and disks much larger. That doesn’t mean we should be wasting it to do the same or even less functionality we had with the machines of 10 or 20 years ago.

If we save developer cycles, it’s not wasted, just saved somewhere else. In the first place we should not go by the numbers, because there will always be someone who can clamor for a faster solution.

> For example, IM, video/audio calls, and working with email shouldn’t take hundreds of MB of RAM, a GHz-level many-core processor, and GBs of disk space. All of that was comfortably possible — simultaneously — with 256MB of RAM and a single-core 400MHz Pentium II.

Yes and no. The level of ability and comfort at that time was significantly lower. Sure, the base functionality was the same, but the experience was quite different. Today there are a gazillion more little details that make life more comfortable, which you just don’t realize are there. Some of them work in the background; some are so natural that you can’t imagine them not having been there since the beginning of everything.

> If we save developer-cycles, it’s not wasted, just saved somewhere else.

In other words, pass the buck to the user (the noble word is “externality”).

> As a long-time Win32 developer, my only answer to that question is “of course there is!”

As a long-time Linux user, that’s what I say as well.

And as a privacy activist, that’s what I routinely use.

> The efficiency difference between native and “modern” web stuff is easily several orders of magnitude; you can write very useful applications that are only a few KB in size, a single binary, and that same binary will work across 25 years of OS versions.

Except for the 25 years of support, you could get the same footprint if a shared Electron runtime were introduced and you avoided using too many libraries from npm. In most Electron apps, most of the bloat comes from the bundled runtime rather than the app itself. See my breakdown from a year ago of an Electron-based color picker: https://news.ycombinator.com/item?id=19652749

Is it practical to target Wine as an application platform? That would require building without VS, or building on Windows and testing with Wine. What are the APIs one would need to avoid in order to ensure Wine compatibility?

This sentiment is why I moved to writing Elixir professionally three years ago, and why I write Nim for all my personal projects now. I want to minimize bloat and squeeze out performance from these amazing machines we are spoiled with these days.

A few years ago I read about a developer who worked on a piece-of-shit 11-year-old laptop and made his software run fast there. By doing that, his software was screaming fast on modern hardware.

It’s our responsibility to minimize our carbon footprint.

Some of the blame is to be put on modern development environments that pretty much require the latest and best hardware to run smoothly.

> It’s our responsibility to minimize our carbon footprint.

This, a hundred times.

> only a few KB in size, a single binary, and that same binary will work across 25 years of OS versions

A few KB for the binary + 20-40 GB for the OS with 25 years of backwards compatibility.

If the browser is a computationally expensive abstraction, so are the various .NET SDKs, the OS, custom compilers, and the higher-level language of your choice. Yes, there were days when a game like Prince of Persia could fit into the memory of an Apple IIe, and all of it, including the sound, graphics, mechanics, and assets, was less than 1.1 MB! However, the effort required to write such efficient code and hand-optimise compiler output is considerable, not to mention that very few developers would be able to do it.

Unless your domain requires high performance (and with WASM and WebGL even that need will shrink) or something niche a browser cannot currently provide, it no longer makes sense to develop desktop applications. The native application is too much hassle and security risk for the end user compared to a browser app, and the trade-off in performance is worth it for the vast majority of use cases.

While browser security sandboxes have their issues, I don’t want to go back to the days of native applications constantly screwing up my registry, launching background processes, and adding unrelated malware and a billion toolbars to your browser (Java installers, anyone?).

Until the late 2000s, every few months I would expect to reinstall the entire OS (especially Windows, occasionally OS X) because of the kind of shareware/malware nonsense native apps used to pull. While tech-savvy users avoid most of these pitfalls, maintaining the extended family’s systems was a constant pain. Today, setting up a Chromebook or Surface (with S mode enabled by default) and installing an ad blocker is all I need to do; those systems stay clean for years.

I do not think giving an application effectively root access and hoping it will not abuse it is a better model than a browser app. It is not just small players who pull this kind of abuse either: the Adobe CC suite runs something like 5 launch processes and messes up the registry even today. The browser’s performance hit is more than worth not having to deal with that.

Also, on performance from a different point of view: desktop apps actually made my system slower. You would notice this on a fresh install of the OS: the system would be super fast, then over a few weeks it would slow down. From antivirus software to every application you added, they all hogged more of my system resources than browser apps do today.

I use Windows (although not heavily; I mainly use Linux these days), and the only outside apps I have installed are lightweight open source ones and some “” versions of software. You don’t need an antivirus apart from the built-in Windows Defender. And I don’t notice any slowdown. I have a non-admin account which I regularly use; the admin account is separate.

Arguably many users don’t know how to use a Windows desktop. But that’s not a failure of the desktop; that’s a failure of Windows. They could have provided an easy way to install applications into a sandbox. On Android you can install from APK files and they are installed into a sandbox. If Windows had such a feature easily available, I think most genuine desktop app makers would have migrated to it. This would have the advantages of the browser with no battery drain, no fan noise, and no sluggishness.

You already can use UWP, which has a sandbox, and Win32 apps can be converted to it. So apparently no one cares about more security; most vendors stick with “just works” Win32.

Native desktop apps are great.

The reason that people don’t write them is because users aren’t on “the desktop”. “The desktop” is split between OS X and Windows, and your Windows-app-compiled-for-Mac is going to annoy Mac users and your Mac-app-compiled-for-Windows is going to annoy Windows users. Then you realize that most users of computing devices actually just use their phone for everything, and your desktop app can’t run on those. Then you realize that phones are split between Android and iOS, and there is the same problem there — Android users won’t like your iOS UI, and iOS users won’t like your Android UI. Then there are tablets.

Meanwhile, your web app may not be as good as native apps, but at least you don’t have to write it 6 times.

> Meanwhile, your web app may not be as good as native apps, but at least you don’t have to write it 6 times.

I must be living in a parallel world because I use a ton of desktop apps that aren’t “written 6 times” – and write a few, including a music & other things sequencer (https://ossia.io).

Just amongst the ones running on my desktop right now, Strawberry (Qt), Firefox (their own toolkit), QtCreator (Qt), Telegram Desktop (Qt), Bitwig Studio (Java), Kate (Qt), and Ripcord (Qt) all work on all desktop platforms with a single codebase. I also often use Zim (GTK), which is also available on all platforms, occasionally Krita (Qt) and GIMP (GTK), and somewhat rarely Blender. Not an HTML DOM in sight (except FF :-)).

In my experience Java GUIs are consistently even more laggy and unresponsive than Electron apps. They may be lighter in terms of memory, but they never feel lighter. Even IntelliJ and family – supposedly the state of the art in Java apps – feel like mud on a brand-new 16″ MacBook Pro.

Interestingly, they seem to run exactly the same on horribly low-spec machines. I blame the JVM’s love for boxing and unboxing everything at the bytecode level. Of course, by now I’d hope it’s less wasteful – the last time I spent serious time in Java was 2015.

I’ve definitely noticed the same on IntelliJ, but weirdly enough Eclipse feels just fine. IIRC both are written in Java, so maybe it comes down to the design of IntelliJ more so than the limitations of the JVM?

Eclipse, interestingly, uses native controls which it hooks up to Java, while IntelliJ essentially draws everything itself.

I used Eclipse for a while before switching to IntelliJ around ~2015 and it actually seemed like a vast improvement, not just in terms of features but in terms of performance. It still wasn’t “snappy”, but I figured I was doing heavy work so that was just how it was.

Fast-forward 5 years and I’ve been doing JS in VSCode for a while. My current company offered to pay for Webstorm so I gave it a try. Lo and behold it was still sludgy, but now unbearable to me because I’ve gotten used to VSCode.

The one other major Java app I’ve used is DBeaver, which has the same problem to an even greater degree. Luckily I don’t have to use it super often.

Lighter in terms of memory? No way. IntelliJ is always at a few GB per instance. They are indeed laggy as hell. With the latest macOS, IntelliJ products specifically bring down the entire OS for ten to twenty minutes at a time, requiring a hard reboot, without which the cycle starts again. Except it’s not Java or IntelliJ, it’s the OS. I only wish they were Electron apps; that way I wouldn’t have to return a $4,400 brand-new 16″ MacBook Pro because of its constant crashing due to horrible native apps. All apps can be shitty. At least Electron ones are cross-platform, work, and generally do not bring the whole system to a standstill followed by a hard crash – while the native ones use about the same resources as Electron apps anyway.

Qt is excellent, but C++ is quite a tough pill to swallow for many, especially as Qt layers a macro system on top. I predict that native desktop apps will make a comeback when there’s a Qt-quality cross-platform framework in a more approachable language (Rust, Nim, or similar).

It’s rather clunky and often requires writing C++-style code in whatever language you’re using – the worst of both worlds.

I wonder what an API designed from the start to be easy to use externally – with only C (or some other low-level) bindings – might look like.

I’d also love for Mac and Windows to make it really easy to get a vendor-blessed version of Qt installed.

Imagine if, when trying to run a Qt app on Windows, a dialog box popped up saying: “Program X is missing Y. Install it from the Windows Store (for free): Yes / No”

Thanks for the info, gonna check. We already have a cert, but no clue what’s missing precisely in the config – how can I try?

Do you count Qt apps as native, but not count web apps as native? Why?

Qt may not ‘look’ native, but it has native performance, whereas Electron really doesn’t.

The difference between “Qt native” and “native native” (e.g. Win32 or Cocoa) is still noticeable if you pay attention, although it’s not quite as obvious as between Electron and the former.

(Likewise, applications using the JVM may also look very convincingly like native ones, but you will feel it as soon as you start interacting with them.)

Is it really even worth highlighting though? I use Telegram Desktop (Qt) daily and it is always, 100% of the time completely responsive. It launches basically instantly the second I click the icon and the UI never hangs or lags behind input to a noticeable degree. If we transitioned to a world where everyone was writing Qt instead of Electron apps we would already have a huge win.

This gets said a lot, and granted VSCode is certainly one of the best performing Electron apps, but it definitely is not indistinguishable from native apps. Sublime, Notepad++, or TextAdept all fly compared to VSCode in terms of performance and RAM efficiency.

On Mac, VS Code does a better job than many apps at emulating the Cocoa text input system but, like every Electron app, it misses some of the obscure corners of that system that I use frequently.

If we’re going to use JavaScript to write native apps, I’d really like to see things like React Native take off: with a good set of components implemented, it would be a first class environment.

No. I like VS Code but it’s a hog.

I still use MacVim or even Sublime Text a lot for speed reasons, especially on large files.

If your native apps are indistinguishable from VSCode, they’re doing something wrong.

I use VS Code daily (because it seems to be the only full-featured editor that Just Works(TM) with WSL), but it can get pretty sluggish, especially with the Vim plugin.

Try using AppleScript or Accessibility – it’s like VS Code doesn’t even exist.

If I recall correctly, Microsoft forked their own version of Electron to make VS Code feel more snappy, because normal Electron runs like Slack.

Try opening a moderately large (even 2MB) .json file in VSCode, and then do the same in sublime.

VSCode very quickly freezes because it cannot handle a file that size. Sublime not only opens it but syntax-highlights it immediately.

This is something in your configuration. Out of the box, VSCode will immediately show you the file but disable tokenization and certain other features. I regularly open JSON files up to 10 MB in size without any problem. You probably have plugins which impede this process.
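For anyone who wants to reproduce this kind of comparison themselves, here is a throwaway sketch that generates a JSON file in the size range being discussed; the filename, record shape, and count are all arbitrary choices, not anything from the thread:

```python
import json

# Build ~120k small records; serialized, this comes to roughly 10 MB,
# comparable to the JSON files discussed above. Open the result in
# different editors to compare how they cope.
records = [
    {"id": i, "name": f"item-{i}", "tags": ["alpha", "beta", "gamma"], "value": i * 3.14}
    for i in range(120_000)
]

with open("big.json", "w") as f:
    json.dump(records, f)
```

Adjust the `range` upward to test the multi-hundred-MB cases mentioned elsewhere in the thread.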

Isn’t that more of an Electron issue?

I mean, is anyone clamouring for VS Code, for example, to be rewritten in native toolkits?

I would argue that the web platform is one of the most optimised and performant platforms for apps.

When you say web platform, do you mean a browser? Using a browser is more optimised and performant than installing an application on your desktop?

Curious what desktop do you run your browser under?

I’ll give you an example: a simple video-splitting application. On the web platform it requires uploading, downloading, and slow processing. A local app would be hours quicker, as the data is local.

No reason a video splitting app couldn’t be written with client-side JS.

A few reasons:

– Qt is actually the native toolkit of multiple operating systems (Jolla, for instance, and KDE Plasma) – you just need a Linux kernel running and it handles the rest. It also makes the effort of picking up the user’s widget theme so it blends in with the rest of the platform, while web apps completely disregard that.

– Windows has at least 4 different UI toolkits now which all render somewhat differently (Win32, WinForms, WPF, the upcoming WinUI, whatever Office is using) – only Win32 is native in the original sense of the term (that is, rendering of some stuff was originally done in-kernel for more performance). So it does not really matter on that platform, I believe. Mac sure is more consistent, but even then… most of the apps I use on a Mac aren’t Cocoa apps.

– The useful distinction for me (more than native vs. non-native) is: if you handle a mouse event, how many layers of deciphering and translation does it have to go through, and are those layers native code (e.g. compiled to asm)? Fewer native layers reliably means user interaction has much less latency than if you have to go through interpreted code, GC, …

Of course you can make Qt look deliberately non-native if you want, but by default it tries its best – see https://code.woboq.org/qt5/qtbase/src/plugins/platforms/coco… and code such as https://code.woboq.org/qt5/qtbase/src/plugins/platforms/wind…

Knowing what I know about Qt and what I’ve done with it in my day job, it’s basically the best-kept secret on HN. What they’re doing with Qt 6+ licensing… I’m not sure how I feel, but as a pure multi-platform framework it really is the bee’s knees.

I’ve taken C++ Qt desktop apps that never had any intention of running on a phone, built them, ran them, and everything “just worked”. I was impressed.

I just wish it weren’t stuck, anisotropically, ~10 years in the past. Maybe Qt6 will be better, but more likely it will be more and more QML.

This is not really accurate. Qt relies on a lower level windowing system (X Window, Wayland, Cocoa, win32 etc. etc.).

Also worth noting that many creation-centric applications for the desktop (graphics, audio, video etc. etc.) don’t look “native” even when they actually are. In one case (Logic Pro, from Apple), the “platform leading app from the platform creator” doesn’t even look native!

On macOS Qt doesn’t really use Cocoa; it uses Quartz/CoreGraphics (the drawing layer rather than the application layer). Note that Apple’s pro apps are native controls with a UI theme: they usually behave like their unthemed counterparts.

> This is not really accurate. Qt relies on a lower level windowing system (X Window, Wayland, Cocoa, win32 etc. etc.).

Qt also supports rendering directly on the GPU (or with software rendering on the framebuffer) without any windowing system such as X11 or Wayland – that’s likely how it is most commonly used in the wild, as that’s one of the main ways to use it on embedded devices.

I’d like to see it do that on macOS …

You’re seriously suggesting that the common use of Qt on Linux systems is direct rendering without the windowing system?

Well, yes. I can’t say too much because of NDAs, but if you go buy a recent car there is a good chance that all the screens are rendered with Qt on Linux or an RTOS. There are likely more of those than desktop Linux users, as much as that saddens me.

Not parent, but yes, sort of.

Arguably its use in embedded contexts is much larger than on the desktop. It’s quite popular for in-car computers, defense systems, etc.

For desktop linux, yes, it uses the windowing system.

> Qt relies on a lower level windowing system

That’s true of QtWidgets, but not QML / Qt Quick (the newer tool), correct? (I found this hard to determine online).

What does “native” even mean?

Put a <button/> in an HTML page and you get a platform-provided UI widget.

A GUI is a collection of elements with a specific look and behaviour. A desktop environment is a collection of GUIs, tools, and services. Native means you have something which follows this look and behaviour 100% and can utilize all the tools and services.

Implementing the look is simple, adding the behaviour is harder, and utilizing the services is the endgame. A web UI usually does none of this, or only some parts; it all depends on the constellation. But usually there is an obvious point where you realize whether something is native or merely an attempt at it.

It’s not platform-provided in my experience, but browser provided. The result of <button/> when viewed in a browser on macOS has no relation to the Cocoa API in any meaningful sense.

I’m pretty sure that when you render just a <button> in at least Safari, the browser will render a native Cocoa button control. If you set a CSS property like background colour or change the border, then it will “fall back” to a custom rendered control that isn’t from the OS UI.

I did a bit of research into this and found plenty of “anecdotal” evidence, but nothing confirming it for sure. Looking at and interacting with the controls, they seem pretty native – if they’re a recreation, then that’s pretty impressive 🙂

I don’t think it makes sense to use it even on small laptop screens, to be honest, so I don’t really see the point. You’d have to redo the UI and the whole paradigm entirely anyway for it to be meaningful on small devices. But there is certainly no obstacle to porting – from my own experience with ossia & Qt, it is fairly easy to make iOS and Android builds; the difficulty is in finding a proper iOS and Android UX.

In particular, C++ code works on every machine that can drive a screen without too much trouble – if the app is built in C++ you can at least make the code run on the device… you just have to make something pretty out of it afterwards.

The point is that the parent poster mentioned tablets and phones which you don’t address in your point. Of course your examples aren’t written 6 times, but they support fewer platforms too (only desktop).

Off-topic, but regarding Bitwig: of course it makes perfect sense to use it on smaller devices. Not phones, but tablets. It’s even officially supported with a specific display profile in the user interface settings (an obvious target amongst others: the Windows Surface). This is particularly useful for musicians on stage.

Yes – today you can use JavaFX to build cross-platform desktop apps.

I think he did not mean “written 6 times”, but more like compiled 6 times, with 6 different sets of parameters, and having to be tested on 6 different devices.

CI/CD + uh… doing your job? I build one app (same codebase) on 4 different platforms often, it isn’t terribly hard.

I don’t know, I think they did mean that.

You write an app for the Mac… how do you ship on Windows as well?

Concerning the desktop, I honestly don’t see Windows users caring much about non-native UIs. Windows apps to this day are a hodgepodge of custom UIs. From driver utilities to everyday programs, there’s little an average Windows user would identify as a “Windows UI”. And even if they could, deviations are commonplace and accepted.

Linux of course doesn’t have any standard toolkit, just two dominant ones. There’s no real expectation of “looking native” here, either.

Which leaves macOS. And even there, the users who really care about native UIs are a (loud and very present online) minority.

So really, on the desktop, the only ones holding back true cross-platform UIs are a subset of Mac users.

During my days of Windows-exclusive computing, I wondered what people meant by native UIs, and why do they care about them. My wondering stopped when I discovered Mac OS and, to a lesser extent, Ubuntu (especially in the Unity days). Windows, with its lack of visual consistency, looked like a hot mess compared to the aforementioned platforms.

And now that I think about it, would this have made it easier, even by an infinitesimal amount, for malware to fool users, as small deviations in UI would fail to stand out?

I don’t know exactly what time period you’re referring to, but back when Java was attempting to take over the desktop UI world with Swing, it was painfully obvious when an app wasn’t native on Windows. Eclipse was the first Java app I used that actually felt native, thanks to its use of native widgets (through a library called SWT) instead of Swing.

> And now that I think about it, would this made it easier, even by an infinitesimal amount, for malware to fool users, as small deviations in UI would fail to stand out?

I don’t think that’s how fraud works in actuality; malicious actors will pay more attention to UI consistency than non-malicious actors (who are just trying to write a useful program and not trying to sucker anyone), inverting that signal.

I don’t know; I’ve read that e.g. spammers deliberately don’t bother with grammatical accuracy, because they want to exclude anyone who pays attention to details. Also, most fake Windows UIs from malicious websites I used to see weren’t exact matches of the native UI.

I think this has changed. People used to be very particular about how their apps looked on different native platforms, like you say. But I don’t think it’s like that anymore. People are more agnostic now, when it comes to how user interfaces look, because they’ve seen it all. Especially on the web, where there’s really no rules, and where each new site and web app looks different. I believe this also carries over to native apps, and I think there’s much more leeway now, for a user interface to look different from the native style, as long as it adheres to any of the general well established abstract principles for how user interface elements ought to behave.

Speaking for myself only, I haven’t changed my preference, I’ve just given up hoping that any company gives a shit about my preference.

And therefore, beware any OS attempts to break cross platform browser compatibility.

Also I think you can deploy to all those things with Qt.

And pay Qt like $5,000 a year to keep it closed source? No thank you. I’d rather write it 6 times. Or just use Electron.

Qt is LGPL licensed is it not? LGPL license means you can distribute your app closed source, so long as the user can swap out the Qt implementation. This usually just means dynamic linking against Qt so the user can swap the DLL. The rest of your app can be kept closed source.

On iOS and Android the situation might be a bit more complicated, but this discussion[0] seems to say that dynamically linking would also work there.

[0]: https://wiki.qt.io/Licensing-talk-about-mobile-platforms

Qt is mostly LGPL. It’s really not that hard to comply with that on desktop, and doesn’t require you opening your source code.

It’s hard to satisfy the requirements in various app stores. They also sneak more restrictive licenses into their dependency graphs.

Qt doesn’t require that, but even if it did, writing it 6 times is vastly more expensive. People would rather spend $500k writing it 6 times than $5k on a license, because they are somehow offended at the notion of paying for dev software or tooling.

It’s a major reason UI coding sucks. There is no incentive for anyone to make it not suck, and the work required to build a modern UI library and tooling is far beyond what hobbyist or spare time coders could ever attempt.

AFAIK you only have to pay if you modify Qt itself and don’t want to release those changes.

The other thing is that I trust the web browser sandbox. If I have to install something, I’m a lot more paranoid about who wrote it and whether it’s riddled with viruses.

Or you could use languages that allow you to share code, so you have 6 thin native UI layers on top of a shared cross-platform core with all the business logic and most interactions with the external world.

You can do it today with C# and https://www.mvvmcross.com/

And many of those apps end up with terrible performance. I’m sure it’s possible to write a performant electron app, but I don’t see it happen often and it’s disappointing.

One that can read and write files and directories among other things. (Not an electron fan, but web pages are still pages, not real apps.)

I started using Linux in the late 90s and have since lost all expectation of someone writing an app for it.

Actually, I’d say app support is better than ever (with all the caveats that go along with being a 1% OS, of course).

If wine and steam count as app support, then you’re not wrong. It’s pretty amazing what can run on linux nowadays compared to yesteryear.

Targeting Windows alone gets you 90% of the desktop market. 95% if you make it run reasonably in Wine. This argument is often used, but it’s an excuse.

Anything that you need to run on a desktop can’t be used effectively on a touch screen anyway, so phones and tablets don’t really count for serious software. (Writing this comment is stretching the bounds of what I can reasonably do on an iPhone.)

95% of a market that has shrunk nearly 50% over the last decade.

In many ways, the consumer and non specialty business are post desktop. Turns out documents, email, and other communication apps cover 90% of use cases. Anything that requires major performance gets rendered in a cloud and delivered by these other apps.

They’re not refuting that. They agreed that it’s “95% of the market.” Their point is that the overall desktop has shrunk, regardless of Windows’s share of that.

> Windows-app-compiled-for-Mac is going to annoy Mac users

And they’ll let you know it, too. Unfortunately this has been an issue since the first Macs left the assembly line in 1984. If you point out that based on their share of the software market they’re lucky they get anything at all, the conversation usually goes south from there.

I will come at this from a different, philosophical perspective:

Web apps come from a tradition of engaging the user. This means (first order) to keep people using the app, often with user-hostile strategies: distraction, introducing friction, etc.

Native desktop apps come from a tradition of empowering the user. This means enabling the user to accomplish something faster, or with much higher quality. If your app distracts you or slows you down, it sucks. “Bicycle for the mind:” the bicycle is a pure tool of the rider.

The big idea of desktop apps – heck, of user operating systems at all – is that users can bring their knowledge from one app to another. But web apps don’t participate in this ecosystem: they erode it. I try a basic task (say, Undo), and it doesn’t work, because web apps are bad at Undo, and so I am less likely to try Undo again in any app.

A missing piece is a force establishing and evolving UI conventions. It is absurd that my desktop feels mostly like it did in 1984. Apple is trying new stuff, but focusing on iPad (e.g. cursors); we’ll have to see if they’re right about it.

What a perfect HN reply. Webapp bad. Native good. No justification. Just a bunch of generalizations.

Gmail empowers me. Wikipedia empowers me. Github empowers me.

Of course native application are important. You don’t need to rely on those moralistic justifications.

You may not be aware of this, but the person you replied to has worked for years on a native UI toolkit. And they provide justification, too: skills don’t transfer between websites as readily as they do between apps. And while I wouldn’t say web applications are somehow morally inferior, the fact is that many of today’s issues with increasing friction to drive engagement originated on the web and are easy to perpetuate on the web.

hahahaha. It’s nice to see there’s still reasonable people here. Not all of us have the mental fortitude to “empower” our lives.

I’m going to be slammed for using these two words, but for any real work you need to have as few layers of indirection between the user and the machine as possible, and this includes the UX, in the sense that it is tailored to the fastest and most comfortable data entry and process monitoring.

I don’t see any “web first” or Electron solution replacing Reaper or Blender in the foreseeable future. One exception I’m intrigued by is VS Code, which seems to be widely popular. Maybe I need to try it to form my own opinion.

As an Electron hater, I’m constantly surprised at just how much VS Code doesn’t suck.

My personal evolution has gone from Sublime Text 3 to Atom to VS Code and back to Sublime Text 3. I’ve never been a heavy plugin user, mainly sticking to code highlighting. The thing I really like is speed. Sublime Text rarely chokes on me. I love being able to type `cat some_one_gigabyte_file | subl` and have it open with little difficulty. VS Code chokes on files of non-trivial size, and that was the thing I liked least about it.

For anyone wondering why I’d open up a 1 GB file in a text editor, I guess the answer is largely because it’s convenient. Big log file? No problem. Huge CSV? No problem. Complete list of AWS pricing for every product in every region stored as JSON? No problem.

I’m an Atom user, and it chokes for the very same reason.

However, I also use vim for tweaking server-side stuff, and use less by default whenever I want to read something (logs being an obvious example)…

This is both for speed and for UX. I believe vim-style navigation (which less basically gives you) is great for reading and searching. What I cannot stand, though, is doing more than small edits in vim; for development (and I mean the code-is-flying-around-like-crazy stage of development, not read-for-an-hour-and-make-tweaks), I am fastest with the kind of flexibility Atom provides.

I know it can be tempting to have one tool for everything when it seems like the tools are supposed to do the same thing, but in my mind lean text editors tend not to compete with the big, fat, slow Electron-style editors – so just use them both, for their respective strengths.

I hear you on the value of a text editor on the server side. I have picked up the basics of vi for this use case, but I ain’t fast with it 🙂

Over time I’ve tried quite a few popular text editors, Notepad, Emacs, Vim, UltraEdit, Sublime Text, and of course VSCode.

VSCode is surprisingly good for a Microsoft product, and they had to do some crazy smart engineering work to make it not suck while being built on top of Electron.

That said, it is still quite slow and memory-hungry. I went back to Sublime 3 a few weeks ago and I am not going back to VSCode.

>> VS Code chokes on files of non-trivial size

VS Code isn’t really designed as a general purpose text editor. It’s meant as a development environment.

If MS chooses to optimise the experience for 99% of the use cases (i.e. editing source code, which should never even approach 1 GB), then that’s the correct call IMO.

>> For anyone wondering why I’d open up a 1 GB file in a text editor, I guess the answer is largely because it’s convenient.

I can completely appreciate the use of a text editor to open a massive log file, etc, I just don’t think that’s something VS Code is designed for. You can always use Sublime or Atom to open those files; while getting the nicer (IMO) dev experience with VS Code.

Vim handles large CSV files pretty well too.

Not saying you should use it, just something that I found out years ago.

I’ve tried pure text editors, but they haven’t really grabbed me, and my fingers are quite clumsy.

I don’t open big files at all. There’s no point. Files exist as data made to be transformed from one form to another. It is only worth looking at a file in its final form unless you are making some kind of edit.

And even then, I make edits on large files through a series of commands, never opening the file.

By thinking of files in this way, it becomes easy to create programmable tool chains for manipulation.
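The workflow described above – transforming a large file with a series of commands instead of ever opening it – can be sketched as a streaming edit. The filenames, function name, and the replacement here are placeholders, not anything from the thread:

```python
# Stream a large file line by line, applying an edit, without ever
# loading the whole file into memory -- memory use stays constant
# regardless of file size.
def stream_replace(src, dst, old, new):
    with open(src) as fin, open(dst, "w") as fout:
        for line in fin:                      # one line in memory at a time
            fout.write(line.replace(old, new))

# Roughly equivalent in spirit to: sed 's/old/new/g' src > dst
```

The same pattern composes into pipelines: filter, transform, and aggregate steps can each consume and produce line streams without any step holding the full file.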

What’s your typical strategy for parsing something like a large text file for some relevant data?

Suppose you could cat and grep, but what if you don’t know what you’re looking for?

Of course, if they did there would be no point in making my comment, it would be redundant.

I think the issue with your original comment is with “there’s no point,” which is in effect invalidating the opinion of others.

The reason for this, I think, is that the developers of VS Code really understand the problem field in this case, as they are making the tool for themselves.

Also, there’s probably a ton of resources involved.

For comparison, I just opened an org-mode file in Aquamacs (Emacs with a macOS-native GUI) and it weighs in at 105 MB, which is actually lower than I would have guessed.

I am running Emacs on macOS, and it only takes 43 MB barebones, and my regular setup takes about 81 MB. Both opening the same org-mode file. Point being that it really depends on what you choose to run on Emacs, and it does not have to be about 100 MB.

Just to add to that, I don’t think anyone should be concerned about their text editor taking 200 MB anymore. I doubt it is worth worrying about.

I love vs code, and its plugin system, but if I’m on my laptop without a charger, I use something else. When I’m running vs code, my battery life is cut nearly in half.

What do you use instead? In my experience, normal IDEs (Android Studio, Xcode, Visual Studio) all perform worse than VS Code in terms of memory use and battery. :/

Qt Creator or if my battery is real low, a text editor like Gedit (yes, I’m probably saving more there just by not having features like code completion and syntax checking).

Anything is capable of not sucking if you go out of your way to spend several man-decades optimizing it.

It doesn’t suck, but try JetBrains IDEs. C++ and other “not JS” language support is vastly superior, including refactoring that actually works.

JetBrains IDEs (and others, for that matter) provide so much more than glorified text editors, including extensive debugging support (in my own code, library code, and platform code), code auto-completion, code navigation, code formatting, refactoring, linting, static analysis (works well for Python too), great syntax highlighting, spell-check, and a good plugin ecosystem. I’d never go back to editing code without that support.

The JetBrains ecosystem is really cheap too: I pay US$160/year for the personal-license all-product suite I can install and use anywhere… and I use PyCharm (Python), IntelliJ IDEA (Java et al.), and DataGrip (DB) extensively, dipping into CLion (C++/Rust) as well… but they have IDEs for many other languages and ecosystems too. It’s definitely a good deal.

Jetbrains suite, along with Docker, Atlassian SourceTree, and Homebrew (and connection to AWS/Kubernetes) are my main tools these days.

I keep periodically re-trying VSCode, but holy cow. It’s a massive step down from a Jetbrains IDE, in every single language I’ve dev’d in.

Jetbrains stuff works, VSCode mostly handles the basics if it’s possible to configure it correctly. Which is quite the achievement, and it’s a very reasonable option and far better than much that came before it. But it’s not where I want to spend my time if I can avoid it.

You’re working with statically typed compiled languages, though. Once you try using a dynamic language, you realize a simpler editor is enough, IMO. I use Emacs for anything dynamically typed (including compiled languages like Elixir) and IntelliJ for Scala/Java.

It’s miles better on Python and most javascript that I’ve touched (VSCode’s ecosystem does tend to have more breadth, and if you’re working on something that VSCode has plugins for but Intellij does not, yea – VSCode can be noticeably better for most purposes). Most commonly around stuff that requires better understanding of the structure of the language / project, like refactoring and accurately finding usages.

But yes, for many dynamic languages a fat IDE is less beneficial, especially for small-ish projects (anything where you can really “know” the whole system).

Webstorm or PyCharm isn’t as good as IntelliJ or ReSharper, but holy hell is it better than just a text editor, even emacs.

I really don’t understand why so many programmers proudly proclaim that they do things the hard way and wear that as a badge of honor.

Check out clangd or ccls for VSCode. They have refactoring that actually works (probably still rather basic when compared to Jetbrains IDEs).

JetBrains stuff is written in Java, btw, for all languages (when we talk about “native”/“not native”).

Same here. I’m an all-native guy, including back-end C++ servers, but VS Code is very decent. But then again, I think the level of the developers who did the main work is way above average. And MS itself wrote that they worked super hard to optimize it for memory and performance – something the regular developer usually ignores or does not really know how to do, due to a lack of understanding of how lower-level tech works.

Both VS Code and Atom use significant amounts of WebAssembly and low level libraries to achieve that performance. In addition to that they’ve written their own view layer for an IDE in modern JS which makes it more performant and stable.

I despise electron and html-wrapper apps. But I gotta give credit where it’s due. VS Code is pretty good.

With the advent of the new WinUI, React Native for Windows, and Blazor, I’m betting the future of Windows is more web-based technologies mingled with low-level native libraries.

> Both VS Code and Atom use significant amounts of WebAssembly and low level libraries to achieve that performance.

Atom has some internal data structures written in C++. VSCode uses a native executable to do the file search, but no further low level magic is used to make it go fast.

I don’t think any of them are using WebAssembly yet.

I guess this is a semantics thing, but WinUI/React Native is not “web based”. It is JavaScript building “native UIs”.

Figma is pretty much replacing all web design applications precisely because it’s leveraging web tech for collaboration on a single document at the same time.

I think Figma’s success is less about being web first and more to do with filling in gaps in what Sketch offered, especially in collaboration. Today you need to buy at least 2 apps, Sketch and Abstract, to match the feature set of Figma.

Design is one of the areas where one could arguably create a native app, largely because the user base is much more homogenous in OS than most other user bases.

I think we can safely exclude ‘collaborative web design’ applications from the set of hardcore tools not gaining much from being implemented as web apps for understandable reasons.

Huh? Why? Designers had previously been using native apps for ages — Photoshop, Illustrator, Sketch… Figma has been successful not just because it’s collaborative but also because it’s performant, powerful, and reliable. Not sure why you think that can’t be achieved with other kinds of software

And yet it is a hardcore tool, and does gain a lot from being implemented as a web app. Another example might serve your point better.

Ok, I see my point wasn’t so understandable after all. I didn’t mean Figma isn’t a hardcore tool; I meant that we exclude it because it specifically concerns web technology (being a tool for web design) and leverages the web to implement collaborative usage. So it’s probably logical for it to be a web app.

You’re still way off. It’s not a “web” design tool; it’s a visual design and prototyping tool. Folks are designing a lot more than websites and web apps with Figma.

> So it’s probably logical for it to be a web app.

Ah, gotcha. Although I’d have assumed that Figma is used more to design native apps than web apps, this helps me understand where you’re coming from.

No, it’s not an ad…I posted in defense of the power that native desktop apps bring since the use case of Figma was being stretched into territories other than web/mobile UI design…immersive content/interaction creation is that category!

VSCode, because of electron, doesn’t allow you to have multiple windows that share state while working on a project. This makes it terrible with multiple screens.

It’s not because of Electron. They could have multiple windows, but it would be a massive overhaul of the architecture. So they say to just use another instance.

I would argue that it is, because Electron doesn’t allow you to share a JS context across windows. So while it’s not impossible, it is much more involved than it would be in most other frameworks. In fact, this is my only real gripe with Electron; the usual HN objections about performance, bloat, and the lack of native UI elements are overstated and don’t bother me.
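For context, the usual workaround has roughly this shape: since each renderer gets its own isolated JS context, you keep a single source of truth in the main process and broadcast changes to every window over IPC. The store below is plain JS so the sketch stays self-contained and runnable anywhere; the Electron-specific wiring (`BrowserWindow`, `webContents.send`) only runs inside Electron and is shown in comments. All names are made up for illustration.

```javascript
// One store lives in the main process; every window subscribes to changes.
// In Electron the "subscribers" would be BrowserWindow instances and the
// notification would go over IPC:
//   for (const win of BrowserWindow.getAllWindows())
//     win.webContents.send('state-changed', key, value);
class SharedStore {
  constructor() {
    this.state = {};
    this.subscribers = []; // plain callbacks here; windows in a real app
  }
  subscribe(fn) {
    this.subscribers.push(fn);
  }
  set(key, value) {
    this.state[key] = value;
    for (const fn of this.subscribers) fn(key, value); // broadcast to all "windows"
  }
}

// Two "windows" observing the same project state:
const store = new SharedStore();
const windowA = [];
const windowB = [];
store.subscribe((key, value) => windowA.push(value));
store.subscribe((key, value) => windowB.push(value));
store.set('activeFile', 'main.ts'); // both windows see the change
```

The involved part in a real app is not this store, but migrating all existing per-window state behind it, which is presumably the architectural overhaul mentioned above.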

VS Code is slow at basic things like having characters show up on screen after hitting the key. It’s good at everything else though so that lag doesn’t matter as much.

On a real system or just in some synthetic benchmark? Because for me it looks quite fast at barfing out characters.

At least in base mode. It can become slower when the IDE features kick in and autocomplete needs some time to meditate on the state of its world. But this also scales with the size of your active source file, the codebase, and the language used.

Also, this is a problem with all IDEs, not exclusive to VS Code.

It probably depends on your hardware. I have an “older” (several years at least) Windows work laptop with iGPU that is quite sluggish with VS Code when hooked up to an external 4K display. However, it’s snappy compared to Microsoft Teams in the same situation.

Meanwhile, my similar era MacBook with dGPU hooked up to the same screen is very snappy and I honestly would probably not be able to tell the difference in a blind test between VS Code and typing in a native text box (like the one here in Safari).

I’d consider myself pretty anal about latency — I was never able to deal with Atom’s, for example (disclaimer: I haven’t tried it in years). I even dumped Wayland for X11 when I had a Linux desktop because of latency (triple buffering or something?) I couldn’t get rid of.

But VS Code is not bad.

That lag matters to me, but in my experience it’s no worse than, say, Vim with the number of plugins that I normally run. Fully bare-bones, I imagine VS Code is performant as well.

Usually professionals using text editors for their work are not concerned with the absolute keystroke-to-screen latency. It’s totally fine if it’s fast enough, and it is.

“Professional level” covers a lot of different uses… Juicero and Airbus both use CAD, I seriously doubt the latter are going to replace CATIA with OnShape

I’m in agreement with @namdnay…I don’t think it will replace enterprise packages like Siemens NX or CATIA anytime soon. I do all of my CAD design in Solidworks, and I do like the out-of-the-box thinking/features of OnShape…but their pricing model ends up being more expensive in the long run ($2100/year for the Pro version). I paid $4k for Solidworks in 2016 and it has paid for itself more than 10x since then, all without a forced upgrade! When a newer version substantiates its value for my workflow, I will upgrade. Not to mention, most of my work can easily be done in Solidworks 2008–2010, because that is the innate nature of CAD packages: regardless of the version, they will get the job done.

So is Autodesk Fusion. But you won’t see Autodesk stop selling their desktop software. People don’t buy extreme rigs for production use and then trade even 5% of the performance for convenience.

What’s your use-case?

I work for an indirect competitor backed by the same commercial geometry kernel (Parasolid) and it did not do well with our models (which, granted, are pretty different from typical mechanical CAD models).

It’s quite simple to use the web view process for nothing but the actual UI, and to move any intensive logic to a separate process (or even native code). It’s also very possible to make that UI code quite performant (this takes more work, but VSCode has shown that it’s possible).

If you don’t see a web app replacing Blender, give OnShape a try. I was so surprised by it. It is slower than a comparable desktop app, but it is usable for real-world projects.

> It is slower than a comparable desktop app, but it is usable for real-world projects

Which means that under a high enough load it will be unusable, while Blender will deal with it just fine.

I don’t find it that crazy, if properly compiled to WebAssembly. The thing is that Blender’s UI is all synchronous Python, so, yeah, that and the addon system would need to be rewritten. Python in the browser is a no-go performance-wise, of course.

The main point, though, is that running Python in the browser is an unnecessary abstraction, because you get a crappier version of something that runs pretty well natively. If you’re starting from scratch, I think the browser might be close to native performance in some tasks. Porting existing applications is a pain when you start looking into the details.

The problem is not so much the run-time performance of the code, it’s the overhead of loading the Python run-time environment over the network the first time you open the page.

VSCode uses the same C-based regex engine for syntax highlighting as TextMate, Sublime, Atom, etc.
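To make the regex-driven approach concrete, here is a toy TextMate-style tokenizer: each rule pairs a scope name with a regex, and the engine tries the rules in order at the current position using sticky regexes. Real engines compile full grammar files (typically against the Oniguruma regex library) with nested scopes; the three rules below are invented purely for illustration.

```javascript
// Toy TextMate-style highlighter: ordered (scope, regex) rules, matched
// exactly at the current position via sticky (`y`) regexes.
const rules = [
  { scope: 'keyword', regex: /\b(?:const|let|return)\b/y },
  { scope: 'number', regex: /\b\d+\b/y },
  { scope: 'identifier', regex: /[A-Za-z_]\w*/y },
];

function tokenizeLine(line) {
  const tokens = [];
  let pos = 0;
  while (pos < line.length) {
    let matched = false;
    for (const { scope, regex } of rules) {
      regex.lastIndex = pos; // sticky: the match must start exactly at pos
      const m = regex.exec(line);
      if (m) {
        tokens.push({ scope, text: m[0] });
        pos += m[0].length;
        matched = true;
        break;
      }
    }
    if (!matched) pos++; // character claimed by no rule (whitespace, operators)
  }
  return tokens;
}

console.log(tokenizeLine('const x = 42'));
// [ { scope: 'keyword', text: 'const' },
//   { scope: 'identifier', text: 'x' },
//   { scope: 'number', text: '42' } ]
```

The editor then maps each scope name to a theme color, which is why the same grammar files work across all of these editors.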