JavaScript, low-level or AI?

The tension between generative AI and low-level languages

The software industry is going to be a fun one to watch over the coming 5-10 years.

I see an interesting tension happening right now…

The generative AI side

On one side, we have generative AI capturing almost every bit of the software development lifecycle and effectively becoming a new high-level abstraction, possibly the highest-level one our industry has ever seen.

There seems to be no stopping this phenomenon. Just looking at the latest announcements from GitHub Universe, it's clear that we will have to adopt AI in one way or another, and that's not necessarily a bad thing if it truly makes us all more productive and focused on generating business value.

If you don't know what I am talking about, you should check out this video:

And if that’s not enough you should watch the full GitHub Universe 2023 keynote.

GitHub Universe 2023, Day 1. Thomas Dohmke on stage with a slide in the background saying “One more thing”, mimicking Steve Jobs’ launch of the iPad

The part that impressed me the most is that Copilot can now also help you lay out the structure of a project, breaking down requirements into potential tasks. Once you are happy with the result, it can start working on the individual tasks and submit PRs. This is called Copilot Workspace, if you want to have a look.

It seems too good to be true, and it's probably going to be far from perfect for a while, but there's great potential for efficiency here. I am sure GitHub (and its competitors) will keep investing in this kind of product, and maybe in a few years we'll mostly be reviewing and merging AI-generated PRs for the most common use cases.

If you consider that ChatGPT was launched slightly less than a year ago, what will we be seeing 5 or 10 years from now?

The low-level side

On the other side, we have a wave of new low-level languages such as Go, Rust, Zig (and Carbon, and Nim, and Odin, and VLang, and Pony, and Hare, and Crystal, and Julia, and Mojo, and.. I could keep going here… 🤷‍♀️).

The lovely Hare language mascot

OK, I really wanted to put the lovely Hare language mascot here. Hare is a systems programming language designed to be simple, stable, and robust. Hare uses a static type system, manual memory management, and a minimal runtime. It is well-suited to writing operating systems, system tools, compilers, networking software, and other low-level, high-performance tasks.

All these languages take slightly different trade-offs, but, at the end of the day, they are built on the premise that we need to go lower level and have more fine-grained control over how we use memory, CPU, GPU and all the other resources available on the hardware. This is perceived as an important step to achieve better performance, lower production costs, and reach the dream of “greener” computing.

If you are curious about why we should care about green computing, let's have a quick look at this report: Data Centres Metered Electricity Consumption 2022 (Republic of Ireland).

The report finds that in 2022, in Ireland alone, data centers' energy consumption increased by 31%. That increase amounts to an additional 4,016 gigawatt-hours (GWh). To put it in perspective, assuming a standard 10 W LED bulb, that is the equivalent of roughly 401,600,000,000 (402 billion!) extra bulb-hours of lighting. Divide that by the population of the Republic of Ireland and it's as if every individual burned about 80,000 additional LED bulb-hours in their home over the year, which works out to roughly 9 extra bulbs lit all day and night. And this is just the increase from 2021 to 2022… How friggin' crazy is that?! 🤯
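
If you want to double-check that back-of-the-envelope math, here is a quick sketch. The 10 W bulb and the ~5 million population are my own rough assumptions, not figures from the report:

```rust
// Back-of-the-envelope math behind the LED bulb comparison.
// Assumptions (mine, not the report's): a 10 W LED bulb and a population
// of roughly 5 million people in the Republic of Ireland.
fn main() {
    let extra_gwh = 4_016.0_f64; // additional data centre consumption in 2022 (GWh)
    let extra_wh = extra_gwh * 1_000_000_000.0; // 1 GWh = 1 billion Wh
    let bulb_watts = 10.0;
    let population = 5_000_000.0;

    let bulb_hours = extra_wh / bulb_watts; // ~401.6 billion bulb-hours
    let per_person = bulb_hours / population; // ~80,000 bulb-hours per person
    let always_on = per_person / (365.0 * 24.0); // ~9 bulbs lit all day and night

    println!("{bulb_hours:.0} extra bulb-hours in total");
    println!("{per_person:.0} extra bulb-hours per person in a year");
    println!("~{always_on:.0} extra bulbs per person, lit all day and night");
}
```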

Light! More light! Photo by D A V I D S O N L U N A on Unsplash

OK, now one could argue that we have had low-level languages pretty much since the software industry was invented. So why aren't they the default, and why do we bother wasting energy on higher-level programming languages?

That’s actually quite simple: because coding in low-level programming languages such as C and C++ is hard! Like really really hard! And it’s also time-consuming and therefore expensive for companies! And I am not even going to mention the risk of security issues that come with these languages.

So why should this new wave of low-level programming languages change things?

Well, my answer is that they are trying to make low-level programming more accessible and safe. They are trying to create paradigms friendly enough for general computing problems (not just low-level ones), which could bring the benefits of performance and efficiency even to areas where, historically, we have used higher-level languages and accepted the trade-off of faster development times in exchange for sub-optimal performance.

Take Rust, for example. It was born to solve some of the hard problems Mozilla faced while building Firefox, but it is now used in many other areas, including embedded systems, game development, and even web development. Not just on the backend, but also on the frontend, thanks to WebAssembly!
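
To give you a taste of what "Rust on the frontend" can look like, here is a minimal sketch using the wasm-bindgen crate. The greet function and its message are made up for illustration, and the project setup (e.g. building with wasm-pack) is assumed:

```rust
// A minimal sketch of exposing a Rust function to the browser through WebAssembly.
// Assumes a crate that depends on `wasm-bindgen` and is built with a tool like wasm-pack.
use wasm_bindgen::prelude::*;

#[wasm_bindgen]
pub fn greet(name: &str) -> String {
    // Once compiled to WebAssembly, this function can be imported and called
    // from JavaScript just like any other function.
    format!("Hello, {name}! This string was assembled by Rust in your browser.")
}
```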

I am not going to claim that writing stuff in Rust is easier than doing the same in JavaScript or Python, but it’s definitely easier than doing the same in C or C++.
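
As a small illustration of why it feels safer, consider this deliberately broken snippet (illustrative only, not from any real project). The equivalent C code would compile and happily read freed memory at runtime; the Rust compiler simply refuses to build it:

```rust
fn main() {
    let numbers = vec![1, 2, 3];
    let first = &numbers[0]; // borrow an element of the vector
    drop(numbers);           // try to free the vector while it is still borrowed
    println!("{first}");     // compile error: `numbers` cannot be moved while it is borrowed
}
```

Catching this class of bug at compile time, rather than debugging a segfault in production, is a big part of what makes this new wave of languages more approachable.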

So there might be many cases where we will be able to use these new languages to achieve better performance and efficiency without having to pay a massive development price for using a low-level language.

And I would go as far as to say that these use cases already exist in the industry today, and that there's a staggering lack of talent in these areas.

Why the tension?

So, is there really a tension here between generative AI-driven development and using low-level languages or are these just two very disjoint things?

I would personally say yes, there’s a tension.

Again, generative AI is pushing us to care less about the details. We give up control over those details in exchange for time and attention to focus on business value, letting the AI do the rest. This is a trend that has been going on for a while now, and it's not going to stop anytime soon.

Investing in a low-level language goes in the opposite direction. It's a bet that we can achieve better performance and efficiency by going lower level and being explicit about the minutiae of how we want to make the best use of the hardware.

But, wait… Am I saying that AI is not going to be able to write efficient and hyper-optimised low-level code? 🤔

Maybe! Or, at least, my belief is that, as with any abstraction, there's always a price to pay. The price of using AI is that we are going to be less explicit about the details, and therefore less efficient.

But I also expect this equation to change with time. As AI improves, it might be able to generate more efficient code. Possibly even better than code we would write manually, even with tons of expertise on our side.

What can we do as software developers?

Where does that leave us?

As individual software engineers, we can’t expect to be able to change these trends. We can only try to understand them and adapt.

Investing in learning a new language is a multi-year effort, and although it might be fun (if you are a language nerd like me), it is time you might be taking away from other activities that could be more rewarding in the long term, or simply more valuable to you. For instance, you could be learning more about generative AI, right? 🤓

My personal bet is to invest in both! I am currently learning Rust and I am also trying to keep up with the latest developments in the AI space.

For instance, Eoin and I just released a new episode of AWS Bites where we explore Amazon Bedrock, AWS's generative AI service… Check it out if you are curious to find out what we built with it!

I am not sure how much I will be able to keep up with both, but I am going to try my best.

I tend to be a generalist and it’s only natural for me to try to explore a wide space of possibilities rather than going super deep on one specific topic.

But I am also aware that this is not the best strategy for everyone. So, if you are a specialist, you might want to focus on one of these two areas and try to become an expert in that. It might come with a risk, but it might also come with a great reward.

I am also of the belief that the more we learn, the more we are capable of learning. So regardless of whether you decide to go wide or put all your eggs in one basket, the important thing is to always keep learning and keep an open mind.

If the future takes an unprecedented turn and we all end up writing code in a new language that is generated by AI, I am sure that the skills we have acquired in the past will still be valuable and will help us to adapt to the new paradigm.

Only the future will tell… And maybe, even after all this fuss, we’ll still be writing tons of JavaScript in 10 years from now! 😜

What do you think?

So what’s your opinion and what’s your strategy for the future? I’d love for you to strongly disagree with me… or not?! Either way, let me know what you think here in the comments or on X, formerly Twitter.

See you around and happy coding! 🤓

Sharing is caring!

If you got value from this article, please consider sharing it with your friends and colleagues.

Found a typo or something that can be improved?

In the spirit of Open Source, you can contribute to this article by submitting a PR on GitHub.
