I'm really torn -- you and your engineers should be excited to work on your codebase. You should enter it and be like "yes, I've made good choices and this is a codebase I appreciate, and it has promise." If you have a set of storylines that make this migration appropriate, and it's still early enough in the company's life that you can even do this in 3 days, then by all means, do it! And good luck. It'll never be cheaper to do it, and you are going to be "wearing" it for your company's lifetime.
But a part of me is reading this and thinking "friend... if PostHog was able to do what they're doing on the stack you're abandoning, do you think that stack is actually going to limit your scalability in any way that matters?" Like, you have the counterexample right there! Other companies are making the "technically worse" choice but making it work.
I love coding and I recognize that human beings are made of narratives, but this feels like 3 days you could have spent on customer needs or feature dev or marketing, and instead you rolled around in the code mud for a bit. It's fine to do that every now and then, and if this was a more radical jump (e.g. a BEAM language like Elixir or Gleam, or hell, even Golang, which has that preemptive scheduler + fast compiles/binary deploys + designed around a type system...) then I'd buy it more. And I'm not in your shoes so it's easy to armchair quarterback. But it smells a bit like getting in your head on technical narratives that are more fun to apply your creativity to, instead of the ones your company really needs.
The author addresses that in the article. Python can scale but then developers would have to work with unintuitive async code. You can think of it as a form of tech debt - every single decision they make will take longer because they have to learn something new and double check if they're doing it the right way.
> The author addresses that in the article. Python can scale but then developers would have to work with unintuitive async code
Python didn't cause their problems, Django did. They wanted async, but chose a framework that doesn't really support it. And they weren't even running it on an async app server.
Python didn't work for them because every subsequent choice they made was wrong.
I think you're saying the same thing that I am. Python didn't work for them because they didn't use it correctly, and so they accelerated the amount of tech debt they created. PostHog is using Django and they've scaled, so clearly they've figured something out about using Python/Django with async, but it probably isn't intuitive, because neither you nor the author knows of a good way to support it.
I was just thinking... "BugHog? The platform famously broken more often than not?"
We have a whole posthog interface layer to mask over their constant outages and slowness. (Why don't we ditch them entirely? I, too, often ask this, but the marketing people love it)
>if PostHog was able to do what they're doing on the stack you're abandoning, do you think that stack is actually going to limit your scalability in any way that matters?
Also, considering the project is an AI framework, do you think the language ChatGPT is built on is a worse choice than the language we use because it's in the browser?
I have to spend 3 days working on someone else's "narratives that are more fun to apply their creativity to" all the time, even when my intuition and experience tells me it isn't a good idea. Sometimes my intuition is wrong. I've yet to meet a product manager that isn't doing this even when they claim to have all the data in the world to support their narrative.
Personally I don't think there's anything wrong with scratching that itch, especially if it's going to make you/your team more comfortable long term. 3 days is probably not make-or-break.
Async and Django don't mix well, and I honestly see the whole Django async effort as wasted resources: all those "a"-prefixed functions, etc.
To be honest, I never liked the way async is done in python at all.
However, I love Django and Python in general. When I need "async" in an HTTP request/response cycle, I use Celery and run the work in the background.
If client side needs to be updated about the state of the background task, the best is to send the data to a websocket channel known to the client side. Whether it's a chat response from an LLM or importing a huge CSV file.
Simple rule for me is, "don't waste HTTP time, process quick and return quick".
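A minimal sketch of that pattern, for anyone who hasn't seen it; the Celery wiring is standard, but names like import_csv and do_the_actual_import are illustrative placeholders, not anything from the article:

    # tasks.py -- assumes Celery with a Redis broker
    from celery import Celery

    app = Celery("worker", broker="redis://localhost:6379/0")

    @app.task
    def import_csv(path: str) -> int:
        # the long-running work happens here, outside the HTTP request/response cycle
        rows = do_the_actual_import(path)  # placeholder for the real import logic
        return rows

    # views.py -- the view only enqueues and returns immediately
    from django.http import JsonResponse

    def upload(request):
        result = import_csv.delay(request.POST["path"])  # returns an AsyncResult
        return JsonResponse({"task_id": result.id})      # "process quick and return quick"

Progress updates then go out to the client over a websocket channel (or SSE) from the worker as the task runs.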
> If client side needs to be updated about the state of the background task, the best is to send the data to a websocket channel known to the client side.
Django should just not be used, period. FastAPI + Uvicorn is all you need these days. It does the async for you.
With LLMs, you shit out working production-ready web apps in 2 days now that are quite performant, as long as you don't care about code maintainability long term.
Folks, if you have problems doing async work, and most of your intense logic/algorithms is a network hop away (LLMs, etc.), do yourself a favor and write a spike in Elixir. Just give it a shot.
The whole environment is built for async from the ground up. Thousands and thousands of hours put into creating a runtime and language specifically to make async programming feasible. The runtime handles async IO for you with preemptive scheduling. Ability to look at any runtime state on a production instance. Lovely community. More libraries than you might expect. Excellent language in Elixir.
I did an interview for the job I'm currently at, and in it we were discussing an architecture for live-updating chats. I said I wouldn't reinvent the wheel and would just use the approach Phoenix LiveView uses: a basic framework loaded client-side that applies diffs coming from a websocket to the UI, so the chat updates from those diffs. Turns out this is exactly the architecture they use in production.
People are reimplementing things that are first class citizens in elixir. Live content update, job runners, queues... Everything is built into the language. Sure you can do it all in typescript, but by then you'll be importing lots of libraries, reimplementing stuff with less reliability and offloading things like queues to third party solutions like pulsar or kafka.
People really should try elixir. I think the initial investment to train your workforce pays itself really quick when you don't have to debug your own schedulers and integrations with third party solutions. Plus it makes it really easy to scale after you have a working solution in elixir.
I agree in principle but I think that your average Python developer that thinks that Node.js is an improvement over Python is going to have seizures if they need to switch to Elixir. It's a completely different way of working.
I don't know... I'm your average python dev. I don't think nodejs is necessarily an improvement, but when I got to pick up a bit of Elixir, after struggling a bit with the many collection types and the pattern matching, it clicked and was really eye-opening. So I don't think this is out of the league of the regular dev. I think if we were talking about Haskell that would probably apply, but Elixir is fine. Even metaprogramming is very intuitive in Elixir once you get the hang of it. It's just a very well designed language.
Indeed it is, and congratulations on making it to the other side of the ascent.
It's interesting, for some people Elixir really clicks, others can't make heads or tails of it. I don't mind Erlang either, but I understand that that is really an acquired taste.
Still, there is a long way for me to actually be productive with elixir. Sure I can now solve some advent of code challenges with it, but I still haven't done a proper project with Liveview and OTP. I've seen enough though to have me convinced this is the way.
I have always been interested in Elixir but have been putting off learning it because I don't see many job opportunities for it (at least here in Asia).
But your comment has convinced me to try it since I am having a bit of NextJS burnout.
Sure, Kafka is used for much more advanced applications. In autonomous microservices, if I'm not mistaken, these topics could even be used as the source of truth so that each specialized database could be reconstructed by replay. I'm saying that for simple topics that are just used to coordinate job queues, Elixir can handle it just fine.
Using a lot of Typescript and Python in my current role and I find myself missing that part of Elixir. Ecosystems are night and day though. For what we're doing we'd have to write far too many libraries ourselves in Elixir and don't have the time right now.
This is an absolutely horrible idea. I'm not questioning the technology choice. But as someone interested in their career, it makes no sense to focus on a language or technology that is not popular. It's bad both from the recruiting side, trying to get developers who are smart enough to care about their n+1 job, and from the developer side.
There are probably fewer code samples, and let's be honest, this is 2025: how well do LLMs generate code for obscure languages where the training data is more sparse?
Maybe for you! That's your call. I'm also interested in my career.
I've had 3 Elixir jobs and 2 Rust jobs in the last 10 years. All were on real products, not vaporware. I learned a ton, worked with great people, and made real friends doing it.
Luck? Skill? Who knows. It's not impossible to work with the technology of your choice on problems you find interesting if you're a little intentional.
Nothing ever gets better if everybody just does what's already popular.
I work at a company doing full-stack Elixir with most of our devs all heavily using AI as they please to augment their workflow, and our CTO was genuinely concerned that our main competitor, a Python shop, had a leg-up on us for this exact reason.
He spent time running benchmarks for 0-1 apps and all kinds of other metrics and found basically no appreciable difference in the speed or accuracy of AI at generating Elixir vs. Python. Maybe there's some difference, but honestly it isn't big enough to matter.
> But as someone interested in their career, it makes no sense to focus on a language or technology that is not popular
A: why in god's name
B: Every language, every framework and every tech stack is 1 month to 5 years away from being legacy crap. Unless you're learning something like COBOL, it's better to be able to use a variety of languages and show that you can adapt.
About LLMs: I did last year's Advent of Code with Elixir and when I forgot to turn off Copilot it had no trouble writing whole implementations of functions, even if I had a very idiosyncratic style.
Most code is boilerplate and that's where LLMs shine, I don't think this specific issue is very important.
I’ve made a very lucrative career moving from .NET to BEAM. I don’t even work with it currently but the fact I’ve shipped it for some pretty niche systems shows versatility and consistently goes in my favour when getting hired.
You might not like LLM code generation or corporations encouraging it. Just like I might not like gravity. But I am not going to jump out of a 25 story building. I accept reality for what it is.
I have a strange feeling that most people haven't found a method to get over their addictions to food and shelter. If they want to exchange labor for money to support those addictions, they have to care about what recruiters want, whether external recruiters or internal recruiters.
I don't find the developer experience to be good; it's not just the lack of types altogether, but also the delays imposed by having compilation steps to run tests.
A lot of the affordances in the ecosystem have been supplanted by more modern solutions for many use cases, like Kubernetes.
Elixir also opens a number of footguns like abuse of macros; these are some of the reasons to second guess switching.
I think one of the strongest reasons for switching would be if you are willing to trade off all of this in exchange for the ability to do zero-downtime deploys, not just graceful shutdowns and rollovers. Like if you're building a realtime system with long-lived interactions, like an air traffic control system or a live conferencing system.
It can sometimes feel like an esoteric or regrettable choice for a REST API or RPC/event-driven system. Even if you want a functional language there may be better choices, like Kotlin.
As someone who's a polyglot programmer, I've always agreed with this in theory; however, the biggest challenge I've found in giving Elixir a shot is that, well, the job market doesn't seem to have ANY Elixir jobs out there... especially for someone who's only made 'toy' apps in Phoenix. And for prototyping apps, I'm just faster in Ruby/Rails to make it worth it, PLUS if you want to debug ML/LLM scripts you have to know Python anyway.
Any recommendations for someone looking to break into the Elixir space in a serious (job-related/production app) way?
A lovely language with an incredible web framework (Phoenix, LiveView). However, not easy to pick up for people with only imperative programming experience.
I had to switch my project to .NET in the end because it was too hard to find/form a strong Elixir team. Still love Elixir. Indestructible, simple, and everything is easy once you wrap your head around the functional programming. It. Just. Works.
As someone who has spent my whole career in somewhat niche things (ROS, OpenWRT, microcontrollers, Nix), I think the answer for how to hire for these is not to look for someone who already has that specific experience but rather look for someone curious, the kind of person who reads wikipedia for fun, an engineer who has good overall taste and is excited to connect the dots between other things they've learned about and experimented with.
Obviously that's not going to give you the benefit of a person who has specifically worked in the ecosystem and knows where the missing stairs are, which does definitely have its own kind of value. But overall, I think a big benefit of working in something like Elixir, Clojure, Rust, etc is that it attracts the kind of senior level people who will jump at the opportunity to work with something different.
And what happens when I'm looking for that next job? I haven't interviewed for a pure developer job since 2018. But the last time I did, I could throw my resume up in the air and find a job as someone experienced with C# who knew all of the footguns and best practices and the ecosystem. I'm sure the same is true for Java, Typescript, Python, etc.
One nice side effect of having done this is having a small rolodex of other people who are like that.
So, like, if I had a good use case for Elixir and wanted a pal to hack on that thing with, I know a handful of people who I'd call, none of whom have ever used Elixir before but I know would be excited to learn.
Yes, same here. And that has come in very handy more than once. But my merry band of friends isn't getting any younger; I think the youngest in our group is now mid-30s or so, the bulk between 50 and 60.
Elixir is dead simple to use and the LLMs do a good job with the Phoenix boilerplate now. Hex.pm and Mix for building and dependency management are miles better than anything node has to offer as well. The developer experience is just really good.
Using any of them for backend is insane, but what do I know, I have to suck it up and use Next.js for some SaaS extension SDKs, while I would rather be using JVM, CLR, or even Go with its spartan design.
Same experience working on FastAPI projects. I don’t know how big production apps are maintained (and supported operationally) with the mess that is python+async+types.
Conversely all the node+typescript projects, big and small, have been pretty great the last 10+ years or so. (And the C# .NET ones).
I use python for real data projects, for APIs there are about half a dozen other tech stacks I’d reach for first. I’ll die on this hill these days.
100% same experience. If it were up to me, I'd have started with TypeScript, but the client insisted on using a Python stack (landed on FastMCP, FastAPI, PydanticAI).
While `PydanticAI` does the best it can with a limited type system, it just can't match the productivity of TypeScript.
And I still can't believe what a mess async Python is. The worst thing we've encountered was a bug from mixing anyio with asyncio which resulted in our ECS container getting its CPU pinned to 100% [1]. And we're constantly running into issues with libraries not handling task cancellation properly.
I get that Python has captured the ML ecosystem, but these agent systems are just API calls and parsing JSON...
[1] https://github.com/agronholm/anyio/issues/884
async Python has problems, but "anyio exists" is not one of them that can be blamed on Python; simply don't use weird third-party libraries trying to second-guess the asyncio architecture.
edit: ironically I'm the author of a weird third-party library trying to second-guess the asyncio architecture, but mine is good: https://awaitlet.sqlalchemy.org/en/latest/ (but I'll likely be retiring it in the coming year due to lack of interest)
I don't recall the exact situation, but am I supposed to just know which async library each dependency is using? It reminds me of the early days of promises in JavaScript.
The funny thing is all the python people will tell you how great FastAPI is and how much of an improvement it is over what came before.
FastAPI does have a few benefits over Express. Auto-enforcing JSON schemas on endpoints is huge, versus the stupidity of having to define TS types and a second schema that then gets turned into a JSON schema that is then attached to an endpoint. That IMHO is the weakest link in the TS backend ecosystem, compiler plugins to convert TS types to runtime types are really needed.
The auto-generated docs in FastAPI are also cool, along with the pages that let you test your endpoints. It is funny: Node shops set up a Postman subscription for the team and share a bunch of queries, while Python gets all that for free.
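For readers who haven't used FastAPI, a minimal sketch of what that buys you; the Item model and /items route are made up for the example:

    from fastapi import FastAPI
    from pydantic import BaseModel

    app = FastAPI()

    class Item(BaseModel):
        name: str
        price: float
        tags: list[str] = []

    @app.post("/items")
    async def create_item(item: Item) -> Item:
        # a request body that doesn't match Item is rejected with a 422 before this runs,
        # and the same model is what shows up in the generated docs at /docs
        return item

One Pydantic model is the validator, the type hint, and the OpenAPI schema all at once, which is the single-source-of-truth thing the TS ecosystem has to bolt on.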
But man, TS is such a nice language, and Node literally exists to do one thing and one thing only really well: async programming.
> That IMHO is the weakest link in the TS backend ecosystem, compiler plugins to convert TS types to runtime types are really needed.
Just define all your types as TypeBox schemas and infer the static type from the schema. This way you write it once, it's synced, and there's no need for a compiler plugin.
https://github.com/sinclairzx81/typebox?tab=readme-ov-file#u...
Maybe you have non-TS clients, but I moved to tRPC backends and now my consumers are perfectly typed at dev time; combined with pnpm monorepos, I'm having a lovely time.
> Same experience working on FastAPI projects. I don’t know how big production apps are maintained (and supported operationally) with the mess that is python+async+types.
Very painfully.
I avoid the async libs where possible. I'm not interested in coloring my entire code-base just for convenience.
>Same experience working on FastAPI projects. I don’t know how big production apps are maintained (and supported operationally) with the mess that is python+async+types.
In my experience async is something that node.js engineers try to develop/use when they come from node.js, and it's not something that python developers use at all. (with the exception of python engineers that add ASGI support to make the language enticing to node developers.)
Multiple processes, multiple threads per process, and/or greenlets (monkey patch network calls, like async but no keywords involved). Scale out horizontally when there's a problem. It could get expensive.
Hell I write ETL pipelines in Typescript since it's just... way easier to deal with. Not doing any crazy ML processing. But the Node ecosystem is giant and I'm a huge fan of the ergonomics of Typescript. And since Typescript is so popular it's very easy for other developers to make changes in my code.
I don't see it mentioned enough in the comments here, but not considering Celery as an alternative to Django + async really is the missing puzzle piece here. Aside from application-level options that weren't explored, I'm wondering whether handling some of the file IO stuff with, for instance, nginx, might be a better fit for their use case.
Once you're in the situation of supporting a production system with some of the limitations mentioned, you also owe it to yourself to truly evaluate all available options. A rewrite is rarely the right solution. From an engineering standpoint, assuming you knew the requirements pretty early on, painting yourself into a bad enough corner to scrap the whole thing and pick a new language gives me significant pause for thought.
In all honesty I consider a lot of this blog post to be a real cause for concern -- the tone, the conflating arguments (if your tests were bad before, just revisit them), the premature concern around scaling. It really feels like they may have jumped to an expensive conclusion without adequate research.
In an interview, I would not advance a candidate like this. If I had a report who exhibited this kind of reasoning, I'd be drilling them on fundamentals and double-checking their work through the entire engineering process.
Hi! Could you please tell me what use cases would nginx be better for, outside of serving static files?
Moreover, having worked with Django a bit (I certainly don't have as much experience as you do), it seems to me that anything that benefits from asynchrony and is trivial in Node is indeed a pain in Django. Good observability is much harder to achieve (tools generally support Node and its asynchrony out of the box, async Python not so much). Celery is decent for long-running, background, or fire-and-forget tasks, but using it for some quick parallel work that would be a simple Promise.all() is much less performant (serialize your args, put them in Redis, wait for a worker to pick them up, etc). Doing anything that blocks a thread for a little bit, whether in Django or Celery, is a problem, because you've got a very finite number of threads (unless you use gevent, which patches the stdlib, which is a huge smell in itself), and it's easy to run out of them... Sure, you can work around anything, but with Node you don't have to think about any of this; it just works.
When you're still small, isn't taking a week to move to Node a better choice than first evaluating a solution to each problem, implementing solutions, each of which can be more or less smelly (which is something each of your engs will have to learn and maintain... We use celery for this, nginx for that, also gevent here because yada yada, etc etc), which in total might take more days and put a much bigger strain on you in the long term? Whereas with Node, you spend a week, and it all just works in a standard way that everyone understands. It seems to me that exploring other options first would indeed be a better choice, but for a bigger project, not when the rewrite is that small.
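For concreteness, the Promise.all() shape mentioned above corresponds to asyncio.gather on the Python side; a rough sketch (using httpx as an example async client, not anything from the article), with the caveat that it only helps once you're already inside an async view or async framework, which is exactly the Django pain being described:

    import asyncio
    import httpx

    async def fetch_one(client: httpx.AsyncClient, url: str) -> str:
        resp = await client.get(url)
        return resp.text

    async def fetch_all(urls: list[str]) -> list[str]:
        async with httpx.AsyncClient() as client:
            # run the requests concurrently and wait for all of them
            return await asyncio.gather(*(fetch_one(client, u) for u in urls))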
I tried to use Celery for something extremely trivial (granted, 5+ years ago). It was so badly documented and failed to do basic things I would expect from a task queue (like progress reporting) I have no idea why it was and still is popular.
Just because you couldn't figure it out doesn't mean the capability wasn't there. More than ten years ago at this point I was running a massively scaled Celery + RabbitMQ + Redis deployment with excellent off-the-shelf reporting using Flower.
I'm going to back GP: eight years ago my team developed a system that was basically a scheduled async task queue. We used Celery, and it was so buggy and so troublesome that at some point we switched to our own hastily hacked solution and it worked better.
It's entirely likely that we did something wrong and misused celery. But if many people have problems with using a system correctly then it's also something worth considering.
As a long-time Django user, I would not use Django for this. Django async is probably never the right choice for a green-field project. I would still pick FastAPI/SQLAlchemy over Express and PostHog. There is no way 15 different Node ORMs are going to survive in the long run, plus Drizzle and Prisma seem to be the leaders for now.
> At this point, some people are probably screaming at their screens going: "just use FastAPI!" -- and we did indeed consider it.
Working with both sync Django and async FastAPI daily, it's so easy to screw up async FastAPI and bring things to a halt. If async is as huge a key feature as they seem to think it is for their product, then I would agree that moving away from Python early, while it's still relatively easy, is the right call.
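For anyone wondering what "screw up async FastAPI" looks like in practice, a deliberately silly sketch: a blocking call inside an async def route stalls the single event loop for every in-flight request, whereas a plain def route gets pushed to the framework's threadpool and the loop stays free.

    import time
    from fastapi import FastAPI

    app = FastAPI()

    @app.get("/bad")
    async def bad():
        time.sleep(5)   # blocks the event loop: every other request waits behind this
        return {"ok": True}

    @app.get("/less-bad")
    def less_bad():
        time.sleep(5)   # sync def: FastAPI runs this in a worker threadpool
        return {"ok": True}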
> and we had actually already written our background worker service in Node,
Ok, well, that's a little bizarre... why use Django to begin with if you are not going to use the huge ecosystem that comes with it? New Django has first-class support for background workers, not that Celery is difficult to get set up. It sounds like the engineering team just started building things in what they knew without any real technical planning, and the async hiccup is more or less an excuse to get things in order after the fact.
I think the baggage goes both ways - Django has the advantage of being a "complete & proven recipe" vs. Node where you try to lego-together an app out of dependencies that have deprecation warnings even in their latest versions.
> Django has the advantage of being a "complete & proven recipe"
I work on a large Django codebase at work, and this is true right up until you stray from the "Django happy path". As soon as you hit something Django doesn't support, you're back to lego-ing a solution together except you now have to do it in a framework with a lot of magic and assumptions to work around.
It's the normal problem with large and all-encompassing frameworks. They abstract around a large surface area, usually in a complex way, to allow things like a uniform API to caches even though the caches themselves support different features. That's great until it doesn't do something you need, and then you end up unwinding that complicated abstraction and it's worse than if you'd just used the native client for the cache.
I don't agree with this cache take. Adding operations to the cache is easy. Taking the django-redis project as an example there are only two levels until you reach redis-py: The cache abstraction and the client abstraction.
I'm having a hard time imagining a case where you'd be worse off with Django (compared to whatever alternative you may have chosen) in the case where the happy path for the thing you're trying to do doesn't exist natively in Django. Either way you're still farming out that capability to custom code or a 3rd party library.
I guess if you write a lot of custom code into specific hooks that Django offers or use inheritance heavily it can start to hurt. But at the end of the day, it's just python code and you don't have to use abstractions that hurt you.
You can just run part of Django. So the negatives of it being "massive" are really just the size of the library that will just be sitting there on disk, which is really not a big deal in most situations.
As far as going with what you know vs choosing the best tool for the job, that can be a bit of a balancing act. I generally believe that you should go with what the team knows if it is good enough, but you need to be willing to change your mind when it is no longer good enough.
I worked at a mid-size startup that was still running on Python 2.7 and Django for their REST APIs, as late as 2022. It was pretty meh and felt like traveling back in time 10 years.
2.7 was end-of-life in 2020! And Python 3 actually predates 2.7 by a couple of years.
A company using 2.7 in 2022 is an indicator that the company as a whole doesn't really prioritize IT, or at least the project the OP worked on. By 2017 or so, it should have been clear that whatever dependencies they were waiting on originally were not going to receive updates to support python3 and alternative arrangements should be made.
You captured the fundamental issues. There were mountains of technical debt. I recall encountering a dependency that had not been updated in over 10 years.
We have VB deployments that haven't been changed at all in about that long. Finally got approval to do a rewrite last year, which is python 3.6 due to other dependencies we can't upgrade yet.
It got this bad because the whole thing "just worked" in the background without issues. "Don't fix what isn't broken" was the business viewpoint.
libuv has provided an async interface for I/O using a worker thread pool for a decade, no dependency on io_uring required. I guess the threadpool they mention that aiofiles uses is written in Python, so it gives concurrency but retains the GIL, so no parallelism. Node's libuv async stuff moves all the work off the main thread into C/C++ land until results are ready; only when dealing with the completed data-read event does it re-enter the Node.js "GIL" JavaScript thread.
To be clear: libuv has had the ability to offload (some?) I/O operators to io_uring since v1.45.0, from 2023, and that's the 8x speed improvement. 2024 is when node.js seemed to enable (or rather, stop disabling) io_uring by default in its own usage of libuv.
Yeah if you look at the libuv release history there’s been a lot of adding and subtracting since then. It’s clearly not all settled but there are chunks.
I probably would have pushed for Hono as the underlying framework... That said, I've been a fan of Koa/Oak over Express for a very long time. For API usage, the swagger+zod integration is pretty decent, though it changes the typical patterns a bit.
All-in, there's no single silver bullet to solving a given issue. Python has a lot of ecosystem around it in terms of integrations that you may or may not need that might be harder with JS. It really just depends.
Glad your migration/switch went relatively smoothly all the same.
It depends on your use case. Exactly. If you’re building big data intensive pipelines with lots of array manipulation or matrix multiplications you know what will shine. Building user facing APIs, use something with types and solid async.
Matching your latter definition, I'd be inclined to go with Rust or C#... that said, you can go a long way with TS in Node/Deno/Bun/Cloudflare/Vercel, etc.
This kinda resonates with me. I've been using Python for over a decade and the only async method I trusted was gevent, and since I moved to Go after 2016, I never want to use Python async in a production-level project, even though I come back to see how it's going almost every year.
But since Python's LLM ecosystem is so strong, I really appreciate the courage it takes to migrate to Node when writing a RAG system. I've tried similar things recently, working on a document-analyzing project using React Router as the full-stack framework, while putting some ETL-related work on the Python side and using Inngest to bridge the Node and Python services. In this way, I got the benefit of Node for LLM chat, while still being able to use Python's SOTA ETL libraries.
I do a lot of glueware and semi-embedded stuff with Python... but my go-to these days for anything networky is Elixir (LiveView if there's a UX). If I need an event loop, or async that is more than a patched-on keyword, it just rocks. It is amazing to me how much Elixir does not have, and yet how capably it solves so many problems that other languages have had to add special support to solve.
So basically you just rewrote boilerplate code with the complexity of "hello world", plus deploy scripts. Without any dependencies, data migrations, real user data, or downtime SLA. And after that you had time to write quite a long article.
This didn't make sense to me either. If it only took three days for a complete rewrite to another language, what's the problem? Did I read that they were getting interrupted for user requests? Felt weird.
I would have picked Hono and Drizzle. In part because of the great TS support but also Hono is much faster than Express and supports Zod for validation out of the box. This stack would also allow to use any other runtime (Deno, Bun, or Cloudflare Workers).
Given they used TS and performance was a concern I would also question the decision to use Node. Deno or Bun have great TS support and better performance.
I checked it out and it looks good on paper but it only runs on Bun.
Don't get me wrong, I use Bun and I'm happy with it, but it's still young. With Hono/Drizzle/Zod I can always switch back to Node or Deno if necessary.
I wouldn't call it seamless, having also done this recently (the handler func signature is different). But it is relatively straightforward, without major changes to the code needed.
Made pretty much the same comment: Hono + Zod + Swagger is pretty nice all around. Not to mention the portability to different runtime environments. I also enjoy Deno a lot; it's become my main shell scripting tool.
I think it makes sense to start with node.js... it's the standard and widely supported. Eventually it should not be too difficult to switch to bun or deno if the need arises.
I'm more a fan of just a SQL template string handler... in C#/.NET I rely on Dapper... for Node, I like being able to do things like...
    const results = await query`
      SELECT...
      FROM...
      WHERE x = ${varname}
    `;
Note: This is not SQL injection; the query function is a string template handler that creates a parameterized query and returns the results asynchronously. There are adapters for most DBs, or it's easy enough to write one in a couple dozen lines of code or less.
ORMs not only help with the result of the query but also when writing queries. When I wrote SQL I was constantly checking table names, columns, and enums. With a good ORM like EF Core not only do you get autocomplete, type checking, etc., but dealing with relationships is much less tedious than with SQL. You can read or insert deeply nested entities very easily.
Obviously ORMs and query builders won't solve 100% of your queries but they will solve probably +90% with much better DX.
For years I used to be in the SQL-only camp but my productivity has increased substantially since I tried EF for C# and Drizzle for TS.
VS Code plugs into my DB just fine for writing SQL queries...
With an ORM, you can also over-query deeply nested related entities very easily... worse, you can then shove a 100MB+ JSON payload to the web client to use a fraction of.
No, but it does put you closer to the actual database and makes you think about what you're actually writing. You also aren't adding unnecessary latency and overhead to every query.
Also the overhead of good ORMs is pretty minimal and won't make a difference in the vast majority of cases. If you find a bottleneck you can always use SQL.
Bit of a plug but I just started working on a drizzle-esque ORM[1] for Python a few days ago and it seems somewhat appropriate for this thread. Curious whether anyone thinks this is a worthwhile effort and/or a good starting point syntax-wise.
>I'll preface this by saying that neither of us has a lot of experience writing Python async code
> I'm actually really interested in spending proper time in becoming more knowledgeable with Python async, but in our context you a) lose precious time that you need to use to ship as an early-stage startup and b) can shoot yourself in the foot very easily in the process.
The best advice for a start-up is to use the tools that you know best. And sometimes that's not the best tool for the job. Let's say you need to build a CLI. It's very likely that Go is the best tool for the job, but if you're a great Python programmer, then just do it in Python.
Here it's rather a case of the author not being very good with Python: they actually used Django instead of FastAPI, which would have been the right tool for the job. And then they wrote a blog post about Python being bad, when actually it's about Django. So yeah, they should have started with Node from day one.
The only issue with writing a CLI in Node is ecosystem. The CLI libraries for Node are (or were last time I checked) inspired by React. Not a paradigm that is fun to write in, and if I'm making a CLI tool it is because I am bored and want to make something for my own entertainment.
I don’t know a ton about either but now I am curious if I should takeaway the idea that async with Python is problematic or if only async with Django is the issue.
I'm using it for a hobby project, and pretty pleased.
My personal maybe somewhat "stubborn old man" opinion is that no node.js orm is truly production quality, but if I were to consider one I think I would start with it. Be aware it has only one (very talented) maintainer as far as I recall.
Everyone's definition of "production quality" is different :-), but Joist is a "mikro-ish" (more so ActiveRecord-ish) ORM that has a few killer features:
We did the same for our app as well. I wrote a little library to make it as simple as FastAPI to generate swagger specs - you can try it out - https://github.com/sleeksky-dev/alt-swagger .
I like mikro orm - cool to see you use that. I do prefer django however.
I see express as the backend. Why not nestjs? And are you using openapi at all for generating your frontend client?
What I've discovered is: any backend + ORM should expose an OpenAPI-spec'd API... and your frontend can autogen your client for you. It allows you to move extremely quickly with the help of AI.
I always find this line of thought strange. It's as if the entire team hinges their technical decision on a single framework, when in reality it's relatively easy to overcome this level of difficulties. This reminds me of the Uber blunder - the same engineer/team switched Uber's database from MySQL to Postgres and then from Postgres to MySQL a few years later, both times claiming that the replaced DB "does not scale" or "sucks". In reality, though, both systems can work very well, and truth be told, Uber's scale was not large enough for either db to show the difference.
Do yourself a favor and use Elixir. Elixir has almost the same top libraries from Python you need to work with AI. As a matter of fact, the Elixir versions are far less fragile and more reliable in production use cases. I documented my journey of writing an AI app using Elixir and listed out the top libraries you can use, especially if you're coming from Python:
If we ignore the ML/AI/array libs, where Python shines, the core development has really done nothing much for it since 3.0.
Despite MS, Guido and co throwing their weight behind it, there's still none of the somewhat-promised 5x speedup across the board (more like 1.5x at best), the async story is still a mess (see TFA), the multiple-interpreters/GIL-less work is too little, too late, the ecosystem still hasn't settled on a single dependency and venv manager (just make uv standard and be done with it), types are a ham-fisted experience, and so on, and so forth...
lol sounds more like a bunch of front end developers who don’t know what they are doing wanted to use a language they use on the front end on the backend.
Python async may make certain types of IO-blocked tasks simpler, but it is not going to scale a web app. Now maybe this isn't a web app, I can't really tell. But this is not going to scale to a cluster of machines.
You need to use a distributed task queue like celery.
I often see people complain about how async is implemented in certain languages or frameworks - are there any examples where people actually like how async was designed or handled?
After having used it two weeks ago for the first time: it feels as though async support in Python is basically a completely parallel standard library that uses the same python syntax with extra keywords all over the place. It's like if building code compliance required your 50 year old house to be updated have a wider staircase with deeper steps but you wanted to do so without affecting the existing stairs, so now you just have two staircases which are a little bit different and it feels like it takes up space unnecessarily.
I had to look for async versions of most of what I did (e.g. executing external binaries) and use those instead of existing functions or functionality, meaning it was a lot of googling "python subprocess async" or "python http request async".
If there were going to be some kind of Python 4.x in the future, I'd want some sort of inherent, goroutine-esque way of throwing tasks into the ether and then waiting on them if you wanted to. Let people writing code mark functions as "async'able", have Python validate that async'able code isn't calling non-async'able code, and then if you're not in an async runloop then just block on everything instead (as normal).
If I could take code like:
    def get_image(image):
        return_code = subprocess.check_call(["docker", "pull", image])
        if return_code:
            raise RuntimeError("idk man it broke")

    result = get_image(imagename)
    print(result)
And replace it with:
    def get_image(image):
        return_code = subprocess.check_call(["docker", "pull", image])
        if return_code:
            raise RuntimeError("idk man it broke")

    result = async get_image(imagename)
    print(result)
And just have the runtime automatically await the result when I try to access it if it's not complete yet then it would save me thousands of lines of code over the rest of my career trying to parallelize things in cumbersome explicit ways. Perhaps provide separate "async" runners that could handle things - if for example you do explicitly want things running in separate processes, threads, interpreters, etc., so you can set a default async runner, use a context manager, or explicitly threadpool.task(async get_image(imagename)).
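For contrast, roughly what that looks like with today's asyncio subprocess API, keeping the commenter's hypothetical names (imagenames is a placeholder list):

    import asyncio

    imagenames = ["alpine", "busybox"]  # placeholder image names

    async def get_image(image):
        proc = await asyncio.create_subprocess_exec("docker", "pull", image)
        return_code = await proc.wait()
        if return_code:
            raise RuntimeError("idk man it broke")

    async def main():
        # start the pulls concurrently, then wait for all of them
        await asyncio.gather(*(get_image(name) for name in imagenames))

    asyncio.run(main())

Which works, but it is exactly the "second staircase": every caller up the stack has to become async too.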
You can have this today or 15+ years ago using the excellent gevent library for Python. Python 3 should have just endorsed gevent as the blessed solution instead of adding function coloring and new syntax, but you can blissfully ignore all of that if you use gevent.
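A minimal gevent sketch of that (urls is a placeholder list): after monkey-patching, ordinary blocking code yields to other greenlets while it waits on I/O, with no async/await keywords anywhere.

    from gevent import monkey
    monkey.patch_all()        # patch stdlib sockets etc. before other imports

    import gevent
    import urllib.request

    urls = ["https://example.com/a", "https://example.com/b"]  # placeholder URLs

    def fetch(url):
        # looks like plain blocking code, but yields to other greenlets on I/O
        return urllib.request.urlopen(url).read()

    jobs = [gevent.spawn(fetch, url) for url in urls]
    gevent.joinall(jobs)
    results = [job.value for job in jobs]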
Doing zero upfront research or planning and then bragging about it in public like this is pretty suspect, but I guess more to the point, glorifying "the pivot" like this is out of style anyway. You're now supposed to insist that whatever happened was the plan all along.
We should really be at a point where the application is a YAML file and something like Hugo for backends, and you can force it to use --java or --js or --rust or --python, etc...
When they start moving away from API calls to third parties to their own embeddings or AI they’re in for a bad time.
What’s going to end up happening is they’ll then create another backend for AI stuff that uses python and then have to deal with multiple backend languages.
They should have just bit the bullet and learned proper async in FastAPI like they mentioned.
There's caolan's async if you need series and parallel controls.
There's RxJS if you need observables.
On web frameworks, Hono seems nice too. If you need performance, there's uWebSockets.js, which beats all other web frameworks in HTTP and websocket benchmarks.
For type safety, aside from TypeScript, there's ArkType, Zod, Valibot, etc.
From the proverbial frying pan into the fire. If you're going to go through all of the effort and cost to switch platforms and to retrain your developers, why on earth would you pick Node.js?
Node.js is such an incredible mess. The ideas are usually ok but the implementation details, the insane dependencies (first time I tried to run a Node.js based project I thought there was something seriously wrong with my machine and that I'd been hacked), the lack of stability, the endless supply chain attacks, maintainers headaches and so on, there is very little to like about Node.js.
C# before Node.js, and I can't stand C#. Java before C#. Yes, it's a language rant, but in the case of Node I am really sorry.
So you'd recommend they rewrote their Python project in Java (assuming the rewrite itself was a good idea)? I don't have any experience on a production web server written in Java or C#, but they both seem like a more difficult transition than JavaScript for rewriting a Python codebase.
For the uninformed, C# and TypeScript are very similar[0]
In fact, JavaScript has borrowed a lot from C# including async/await, lambda expressions, and the syntax for disposables -- all influenced by and done first in C#.
Of course, TypeScript and C# are from the same designer at Microsoft so there are even more similarities. Any team that's considering moving to TypeScript should also really give C# a look.
I've written code in all of these and I think that Python to Java or Go is easier than Python to Node, especially if you don't want to spend the next 24 months auditing all of the code you just imported.
Who is the audience for a post like this? Presumably HN, since the author invoked PG.
But who is "we rewrote our stack on week 1 due to hypothetical scaling issues" supposed to impress? Not software professionals. Not savvy investors. Potential junior hires?
I'm actually building an app on the side and went the other way around on this. Migrating from Typescript back to Python. Granted, my gripes were more with NextJS rather than Node or Typescript.
Using Django was so intuitive although the nomenclature could do a bit better. But what took me days trying to battle it out on NextJS was literally done in an hour with Django (admin management on the backend). While Django still looks and feels a bit antiquated, at least it worked! Meanwhile I lost the entirety of the past weekend (or rather quite a bit of it), trying to fight my code and clean up things in NextJS because somehow the recommended approach for most things is mixing up your frontend and backend, and a complete disregard for separation of concerns.
My new stack from now on will likely be NextJS for the frontend alone, Django for the CRUD and authentication backend, Supabase Edge Functions for serverless, and FastAPI if needed for intensive AI APIs. Open to suggestions, ideas and opinions though.
I had a Python script I was writing that basically just needed to run the same shell command 40 times (to clone images from X to Y), and a lot of the time was spent making the request and waiting for the data to be generated, so I figured I'd parallelize it.
Normally I do this either through multiprocessing or concurrent.futures, but I figured this was a pretty simple use case for async - a few simple functions, nothing complex, just an inner loop that I wanted to async and then wait for.
Turns out Python has a built in solution for this called a TaskGroup. You create a TaskGroup object, use it as a context manager, and pass it a bunch of async tasks. The TaskGroup context manager exits when all the tasks are complete, so it becomes a great way to spawn a bunch of arbitrary work and then wait for it all to complete.
It was a huge time saver right up until I realized that - surprise! - it wasn't waiting for them to complete in any way shape or form. It was starting the tasks and then immediately exiting the context manager. Despite (as far as I could tell) copying the example code exactly and the context manager doing exactly what I wanted to have happen, I then had to take the list of tasks I'd created and manually await them one by one anyway, then validate their results existed. Otherwise Python was spawning 40 external processes, processing the "results" (which was about three incomplete image downloads), and calling it a day.
I hate writing code in golang and I have to google every single thing I ever do in it, but with golang, goroutines, and a single WaitGroup, I could have had the same thing written in twenty minutes instead of the three hours it took me to write and debug the Python version.
So yeah, technically I got it working eventually but realistically it made concurrency ten times worse and more complicated than any other possible approach in Python or golang could have been. I cannot imagine recommending async Python to anyone after this just on the basis of this one gotcha that I still haven't figured out.
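For reference, a sketch of how the documented TaskGroup flow is supposed to read (not a diagnosis of the bug above): the tasks are created through tg.create_task, the subprocess work inside them is itself awaited, and the async with block only exits once every task has finished or failed. The clone-tool command and the image pairs are placeholders.

    import asyncio

    async def clone_image(src: str, dst: str) -> int:
        # illustrative: run the external command without blocking the event loop
        proc = await asyncio.create_subprocess_exec("clone-tool", src, dst)
        return await proc.wait()

    async def main(pairs):
        async with asyncio.TaskGroup() as tg:   # Python 3.11+
            tasks = [tg.create_task(clone_image(s, d)) for s, d in pairs]
        # only reached after every task has completed
        return [t.result() for t in tasks]

    results = asyncio.run(main([("imgA", "copyA"), ("imgB", "copyB")]))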
I was about to migrate a legacy system written in Python/ Flask to FastAPI and React (frontend). But the sentiments here seem to suggest that FastAPI is not the best solution if I need async? So go with Next.js?
Good decision, judging by their general level of impatience with things they would have hated my ORM :).
Also I think the node approach is probably still more performant than FastAPI but that's just a hunch.
Hopefully they won't have security issues because someone hijacked the node package that sets the font color to blue or passes the butter or something.
We're on the same wavelength; I have decades of ORM experience. It was the first thing I would do in any project. Now it can just be vanilla JDBC with tons of duplicated boilerplate. At least in the early stages.
More seriously, I've worked on codebases I found OK and some I deeply disliked; I guess there's a continuum from "exciting" to "frustrating".
Because language bindings aren't really what makes ChatGPT tick.
SSE is nice.
I use a combination of Channels and Celery for a few projects and it works great.
But I still hope at some point they will manage to fix the DevX with Django/Python and async.
What about Elixir eliminates the need for Kafka? Simple queues I understand, but Kafka?
LOL. Speaking about absolutely horrible ideas ...
As an acceptor of reality, you can begin to accept that as well.
??
Elixir is strongly but dynamically typed.
On the progress of static typing:
https://arxiv.org/abs/2306.06391
Obviously that's not going to give you the benefit of a person who has specifically worked in the ecosystem and knows where the missing stairs are, which does definitely have its own kind of value. But overall, I think a big benefit of working in something like Elixir, Clojure, Rust, etc is that it attracts the kind of senior level people who will jump at the opportunity to work with something different.
One nice side effect of having done this is having a small rolodex of other people who are like that.
So, like, if I had a good use case for Elixir and wanted a pal to hack on that thing with, I know a handful of people who I'd call, none of whom have ever used Elixir before but I know would be excited to learn.
Conversely all the node+typescript projects, big and small, have been pretty great the last 10+ years or so. (And the C# .NET ones).
I use python for real data projects, for APIs there are about half a dozen other tech stacks I’d reach for first. I’ll die on this hill these days.
While, `PydanticAI` does the best it can with a limited type system, it just can't match the productivity of typescript.
And I still can't believe what a mess async Python is. The worst thing we've encountered was a bug from mixing anyio with asyncio, which resulted in our ECS container getting its CPU pinned to 100% [1]. And we constantly run into issues with libraries not handling task cancellation properly.
I get that python has captured the ML ecosystem, but these agent systems are just API calls and parsing json...
[1](https://github.com/agronholm/anyio/issues/884)
edit: ironically I'm the author of a weird third party library trying to second guess the asyncio architecture but mine is good https://awaitlet.sqlalchemy.org/en/latest/ (but I'll likely be retiring it in the coming year due to lack of interest)
FastAPI does have a few benefits over Express: auto-enforcing JSON schemas on endpoints is huge, versus the stupidity of having to define TS types plus a second schema that then gets turned into a JSON Schema attached to an endpoint. That, IMHO, is the weakest link in the TS backend ecosystem; compiler plugins to convert TS types to runtime types are really needed.
The auto-generated docs in FastAPI are also cool, along with the pages that let you test your endpoints. It's funny: Node shops set up a Postman subscription for the team and share a bunch of queries, while Python gets all that for free.
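For readers who haven't seen it, a minimal sketch of what that buys you (the route and model here are made up for illustration):

```python
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class CreateUser(BaseModel):
    email: str
    age: int

@app.post("/users")
async def create_user(user: CreateUser) -> CreateUser:
    # The request body is parsed and validated against CreateUser before
    # this function runs; invalid payloads get a 422 automatically, and
    # the same model drives the generated OpenAPI docs at /docs.
    return user
```

One model, and you get validation, serialization, and documentation from it.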
But man, TS is such a nice language, and Node literally exists to do one thing and one thing only really well: async programming.
Just define all your types as TypeBox schemas and infer the static type from each schema. This way you write it once, it stays in sync, and there's no need for a compiler plugin.
https://github.com/sinclairzx81/typebox?tab=readme-ov-file#u...
The TS compiler should either have an option to pop out JSON schema from TS types or have a well defined plugin system to allow that to happen.
TS being compile-time only really limits the language. It was necessary early on to drive adoption, but nowadays it just sucks.
Very painfully.
I avoid the async libs where possible. I'm not interested in coloring my entire code-base just for convenience.
In my experience, async is something that Node.js engineers try to build and use when they come from Node.js, and it's not something that Python developers use at all (with the exception of Python engineers who add ASGI support to make the language enticing to Node developers).
Once you're in the situation of supporting a production system with some of the limitations mentioned, you also owe it to yourself to truly evaluate all available options. A rewrite is rarely the right solution. From an engineering standpoint, assuming you knew the requirements pretty early on, painting yourself into a bad enough corner to scrap the whole thing and pick a new language gives me significant pause for thought.
In all honesty I consider a lot of this blog post to be a real cause for concern -- the tone, the conflating arguments (if your tests were bad before, just revisit them), the premature concern around scaling. It really feels like they may have jumped to an expensive conclusion without adequate research.
In an interview, I would not advance a candidate like this. If I had a report who exhibited this kind of reasoning, I'd be drilling them on fundamentals and double-checking their work through the entire engineering process.
Moreover, having worked with Django a bit (I certainly don't have as much experience as you do), it seems to me that anything that benefits from asynchrony and is trivial in Node is indeed a pain in Django. Good observability is much harder to achieve (tools generally support Node and its asynchrony out of the box, async Python not so much). Celery is decent for long-running, background, or fire-and-forget tasks, but using it for some quick parallel work that would be a simple Promise.all() is much less performant (serialize your args, put them in Redis, wait for a worker to pick them up, etc). Doing anything that blocks a thread for a little bit, whether in Django or Celery, is a problem, because you've got a very finite number of threads (unless you use gevent, which patches stdlib, which is a huge smell in itself), and it's easy to run out of them... Sure, you can work around anything, but with Node you don't have to think about any of this; it just works.
When you're still small, isn't taking a week to move to Node a better choice than first evaluating a solution to each problem and then implementing those solutions, each of which can be more or less smelly (and each of which your engineers will have to learn and maintain... we use Celery for this, nginx for that, also gevent here because yada yada, etc)? In total that might take more days and put a much bigger strain on you in the long term, whereas with Node you spend a week and it all just works in a standard way that everyone understands. It seems to me that exploring other options first would indeed be a better choice for a bigger project, but not when the rewrite is that small.
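(For what it's worth, async Python does have a Promise.all() analogue, roughly the sketch below - httpx and the URLs are assumptions - the catch being that a plain sync Django view can't just await it, which is exactly why the Celery detour comes up.)

```python
import asyncio

import httpx  # third-party async HTTP client, assumed for this sketch

async def fetch_json(client: httpx.AsyncClient, url: str) -> dict:
    resp = await client.get(url)
    resp.raise_for_status()
    return resp.json()

async def fetch_all(urls: list[str]) -> list[dict]:
    # In-process fan-out, roughly what Promise.all() gives you in Node.
    async with httpx.AsyncClient() as client:
        return await asyncio.gather(*(fetch_json(client, u) for u in urls))

# asyncio.run(fetch_all(["https://example.com/a", "https://example.com/b"]))
```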
Thank you for your answers!
It's entirely likely that we did something wrong and misused celery. But if many people have problems with using a system correctly then it's also something worth considering.
There’s not much software I really dislike but Celery is one.
A nightmare within a nightmare to configure and run.
Django is great but sometimes it seems it just tries to overdo things and make them harder
Trying to async Django is like trying to do skateboard tricks with a shopping cart. Just don't
Working with both sync Django and async FastAPI daily, it’s so easy to screw up async FastAPI and bring things to a halt. If async is such the huge key feature they seem to think it is for their product, then I would agree moving away from Python early while it’s still relatively easy is the right call.
> and we had actually already written our background worker service in Node,
Ok, well, that's a little bizarre... why use Django to begin with if you are not going to use the huge ecosystem that comes with it? New Django has first-class support for background workers, not that Celery is difficult to get set up. It sounds like the engineering team just started building things in what they knew without any real technical planning, and the async hiccup is more or less an excuse to get things in order after the fact.
This sounds like a standard case of going with what the developers know instead of evaluating the right tool for the job.
I work on a large Django codebase at work, and this is true right up until you stray from the "Django happy path". As soon as you hit something Django doesn't support, you're back to lego-ing a solution together except you now have to do it in a framework with a lot of magic and assumptions to work around.
It's the normal problem with large and all-encompassing frameworks. They abstract around a large surface area, usually in a complex way, to allow things like a uniform API to caches even though the caches themselves support different features. That's great until it doesn't do something you need, and then you end up unwinding that complicated abstraction and it's worse than if you'd just used the native client for the cache.
I guess if you write a lot of custom code into specific hooks that Django offers or use inheritance heavily it can start to hurt. But at the end of the day, it's just python code and you don't have to use abstractions that hurt you.
Could you be more specific? Don't get me wrong, I'm well aware that npm dependency graph management is a PITA, but I'm curious where you ran into a wall with Node.
As far as going with what you know vs choosing the best tool for the job, that can be a bit of a balancing act. I generally believe that you should go with what the team knows if it is good enough, but you need to be willing to change your mind when it is no longer good enough.
A company using 2.7 in 2022 is an indicator that the company as a whole doesn't really prioritize IT, or at least the project the OP worked on. By 2017 or so, it should have been clear that whatever dependencies they were waiting on originally were not going to receive updates to support python3 and alternative arrangements should be made.
It got this bad because the whole thing "just worked" in the background without issues. "Don't fix what isn't broken" was the business viewpoint.
"Python doesn't have native async file I/O." - like almost everybody, as "sane" file async IO on Linux is somehow new (io_uring)
Anyway ..
They claim about an 8x improvement in speed.
All-in, there's no single silver bullet to solving a given issue. Python has a lot of ecosystem around it in terms of integrations that you may or may not need that might be harder with JS. It really just depends.
Glad your migration/switch went relatively smoothly all the same.
But since Python's LLM ecosystem is so mature, I really appreciate the courage it takes to migrate to Node when writing a RAG system. I've tried similar things recently, working on a document-analysis project using React Router as the full-stack framework, while putting some ETL-related work on the Python side and using Inngest to bridge the Node and Python services. This way I get the benefit of Node for LLM chat while still being able to use Python's SOTA ETL libraries.
What honest reaction do you expect from readers?
It was a small, three-day task?
Given they used TS and performance was a concern I would also question the decision to use Node. Deno or Bun have great TS support and better performance.
Don't get me wrong, I use Bun and I'm happy with it, but it's still young. With Hono/Drizzle/Zod I can always switch back to Node or Deno if necessary.
"drizzle works on the edge"
I'm not sure what additional help you're getting. I'm just not a fan of ORMs as they tend to have hard edges in practice.
Obviously ORMs and query builders won't solve 100% of your queries, but they will probably solve 90%+ of them with much better DX.
For years I used to be in the SQL-only camp but my productivity has increased substantially since I tried EF for C# and Drizzle for TS.
With an ORM you can also over-query deeply nested related entities very easily... worse, you can then shove a 100MB+ JSON payload to the web client, which uses a fraction of it.
Also the overhead of good ORMs is pretty minimal and won't make a difference in the vast majority of cases. If you find a bottleneck you can always use SQL.
However, Drizzle makes it very, very straightforward to handle DB migrations/versioning, so I like it a lot for that.
https://github.com/carderne/embar
>I'll preface this by saying that neither of us has a lot of experience writing Python async code
> I'm actually really interested in spending proper time in becoming more knowledgeable with Python async, but in our context you a) lose precious time that you need to use to ship as an early-stage startup and b) can shoot yourself in the foot very easily in the process.
The best advice for a start-up is to use the tools that you know best. And sometimes that's not the best tool for the job. Let's say you need to build a CLI. It's very likely that Go is the best tool for the job, but if you're a great Python programmer, then just do it in Python.
Here's a clearer case where the author was not very good with Python. Clearly, since they actually used Django instead of FastAPI, which should have been the right tool for the job. And then wrote a blog post about Python being bad, but actually it's about Django. So yeah, they should have started with Node from day one.
Sometimes tools are worth learning!
A function to display help, and another one to parse the CLI parameters, isn't PhD-level coding.
Also nowadays, any LLM friend can quickly generate them.
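(For illustration, the whole thing in stdlib Python is roughly this much - the flags and messages are made up:)

```python
import argparse

def main() -> None:
    # argparse generates --help for free; no hand-rolled help function needed.
    parser = argparse.ArgumentParser(description="Resize images in a folder.")
    parser.add_argument("path", help="directory containing the images")
    parser.add_argument("--width", type=int, default=800,
                        help="target width in pixels (default: 800)")
    args = parser.parse_args()
    print(f"Resizing images in {args.path} to {args.width}px")

if __name__ == "__main__":
    main()
```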
That is exactly what I am complaining about.
I guess some people like it, but just, ick.
My personal maybe somewhat "stubborn old man" opinion is that no node.js orm is truly production quality, but if I were to consider one I think I would start with it. Be aware it has only one (very talented) maintainer as far as I recall.
https://joist-orm.io/
Always happy to hear feedback/issues if anyone here would like to try it out. Thanks!
Answer: Because Django doesn't support async by default.
I really wish the dev would extract the dependency injection portion of the project and flesh it out a bit. There are a lot of rough edges in there.
I see Express as the backend. Why not NestJS? And are you using OpenAPI at all for generating your frontend client?
What I've discovered is that any backend + ORM should expose an OpenAPI-spec'd backend... and your frontend can autogen your client for you. This lets you move extremely quickly with the help of AI.
I always find this line of thought strange. It's as if the entire team hinges their technical decision on a single framework, when in reality it's relatively easy to overcome this level of difficulties. This reminds me of the Uber blunder - the same engineer/team switched Uber's database from MySQL to Postgres and then from Postgres to MySQL a few years later, both times claiming that the replaced DB "does not scale" or "sucks". In reality, though, both systems can work very well, and truth be told, Uber's scale was not large enough for either db to show the difference.
https://medium.com/creativefoundry/i-tried-to-build-an-ai-pr...
Despite MS, Guido and co throwing their weight behind it, there's still none of the somewhat-promised 5x speedup across the board (more like 1.5x at best), the async story is still a mess (see TFA), the multiple-interpreters/GIL-less work is too little, too late, the ecosystem still hasn't settled on a single dependency and venv manager (just make uv standard and be done with it), types are a ham-fisted experience, and so on, and so forth...
I have a simple wrapper that allows you to write once and works for both sync and async: https://blog.est.im/2025/stdout-04
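(Not the linked implementation, just a rough sketch of how such a wrapper can work:)

```python
import asyncio
import functools

def sync_or_async(coro_fn):
    """Let an async function be called from sync or async code.

    Hypothetical sketch: if no event loop is running, drive the coroutine
    with asyncio.run(); otherwise hand the awaitable back to the caller.
    """
    @functools.wraps(coro_fn)
    def wrapper(*args, **kwargs):
        coro = coro_fn(*args, **kwargs)
        try:
            asyncio.get_running_loop()
        except RuntimeError:
            return asyncio.run(coro)  # sync caller: block until done
        return coro                   # async caller: await the result
    return wrapper

@sync_or_async
async def fetch(url: str) -> str:
    await asyncio.sleep(0.1)          # stand-in for real async I/O
    return f"fetched {url}"

print(fetch("https://example.com"))   # works from plain sync code
```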
lol sounds more like a bunch of front end developers who don’t know what they are doing wanted to use a language they use on the front end on the backend.
I always wanted an emacs with python as the underlying language. Is emacs brilliant choosing lisp or outdated?
I recently wrote about issues debugging this stack[1], but now I feel very comfortable operating async-first.
[1] https://blendingbits.io/p/i-used-claude-code-to-debug-a-nigh...
>Python async sucks
Python async may make certain types of IO-blocked tasks simpler, but it is not going to scale a web app. Now maybe this isn't a web app, I can't really tell. But this is not going to scale to a cluster of machines.
You need to use a distributed task queue like celery.
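(For anyone who hasn't used one, a minimal Celery sketch - the broker URL and task body are assumptions:)

```python
# tasks.py - run a worker with: celery -A tasks worker
from celery import Celery

app = Celery(
    "tasks",
    broker="redis://localhost:6379/0",   # assumed Redis broker
    backend="redis://localhost:6379/0",  # stores results so .get() works
)

@app.task
def resize_image(path: str, width: int) -> str:
    # Heavy work happens on a worker machine, not in the web process.
    return f"resized {path} to {width}px"

# From the web app: enqueue and, if needed, wait for the result.
# result = resize_image.delay("cat.jpg", 512)
# print(result.get(timeout=30))
```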
I had to look for async versions of most of what I did (e.g. executing external binaries) and use those instead of existing functions or functionality, meaning it was a lot of googling "python subprocess async" or "python http request async".
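(For the external-binaries case, the asyncio flavour ends up looking something like this:)

```python
import asyncio

async def run_binary(cmd: list[str]) -> str:
    # The async counterpart of subprocess.run(): the await points let
    # other tasks make progress while the external binary executes.
    proc = await asyncio.create_subprocess_exec(
        *cmd,
        stdout=asyncio.subprocess.PIPE,
        stderr=asyncio.subprocess.PIPE,
    )
    stdout, stderr = await proc.communicate()
    if proc.returncode != 0:
        raise RuntimeError(stderr.decode())
    return stdout.decode()

print(asyncio.run(run_binary(["echo", "hello"])))
```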
If there were going to be some kind of Python 4.x in the future, I'd want some sort of inherent, goroutine-esque way of throwing tasks into the ether and then waiting on them if you wanted to. Let people writing code mark functions as "async'able", have Python validate that async'able code isn't calling non-async'able code, and then if you're not in an async runloop then just block on everything instead (as normal).
If I could take ordinary blocking code and just have the runtime automatically await the result when I try to access it, if it's not complete yet, it would save me thousands of lines of code over the rest of my career trying to parallelize things in cumbersome explicit ways. Perhaps provide separate "async" runners that could handle things - if, for example, you explicitly want things running in separate processes, threads, interpreters, etc. - so you could set a default async runner, use a context manager, or explicitly write threadpool.task(async get_image(imagename)). Man, what a world that would be.
What’s going to end up happening is they’ll then create another backend for AI stuff that uses python and then have to deal with multiple backend languages.
They should have just bit the bullet and learned proper async in FastAPI like they mentioned.
I won’t even get started on their love of ORMs.
Or if feeling fancy, Erlang, Elixir.
There's effect-ts if you need app-level control.
There's caolan/async if you need series and parallel controls.
There's RxJS if you need observables.
On web frameworks, Hono seems nice too. If you need performance, there's uWebSockets.js, which beats all other web frameworks in HTTP and WebSocket benchmarks.
For type safety aside from TypeScript, there's ArkType, Zod, Valibot, etc.
Node.js is such an incredible mess. The ideas are usually ok but the implementation details, the insane dependencies (first time I tried to run a Node.js based project I thought there was something seriously wrong with my machine and that I'd been hacked), the lack of stability, the endless supply chain attacks, maintainers headaches and so on, there is very little to like about Node.js.
C# before Node.js, and I can't stand C#. Java before C#. Yes, it's a language rant, but in the case of Node I am really sorry.
In fact, JavaScript has borrowed a lot from C# including async/await, lambda expressions, and the syntax for disposables -- all influenced by and done first in C#.
Of course, TypeScript and C# are from the same designer at Microsoft so there are even more similarities. Any team that's considering moving to TypeScript should also really give C# a look.
[0] https://typescript-is-like-csharp.chrlschn.dev/pages/intro-a...
> As you get more familiar with computers you will understand more and more what's going on.
Pot, meet kettle.
And yes, Rust's package management was inspired by Node, and it is one of the major drawbacks of Rust.
But there’s giant red flags up if you’re trying to do async with Django, which is built as synchronous code.
But who is "we rewrote our stack on week 1 due to hypothetical scaling issues" supposed to impress? Not software professionals. Not savvy investors. Potential junior hires?
Using Django was so intuitive although the nomenclature could do a bit better. But what took me days trying to battle it out on NextJS was literally done in an hour with Django (admin management on the backend). While Django still looks and feels a bit antiquated, at least it worked! Meanwhile I lost the entirety of the past weekend (or rather quite a bit of it), trying to fight my code and clean up things in NextJS because somehow the recommended approach for most things is mixing up your frontend and backend, and a complete disregard for separation of concerns.
My new stack from now on will likely be NextJS for the frontend alone, Django for the CRUD and authentication backend, Supabase Edge Functions for serverless, and FastAPI if needed for intensive AI APIs. Open to suggestions, ideas and opinions though.
Normally I do this either through multiprocessing or concurrent.futures, but I figured this was a pretty simple use case for async - a few simple functions, nothing complex, just an inner loop that I wanted to async and then wait for.
Turns out Python has a built in solution for this called a TaskGroup. You create a TaskGroup object, use it as a context manager, and pass it a bunch of async tasks. The TaskGroup context manager exits when all the tasks are complete, so it becomes a great way to spawn a bunch of arbitrary work and then wait for it all to complete.
It was a huge time saver right up until I realized that - surprise! - it wasn't waiting for them to complete in any way shape or form. It was starting the tasks and then immediately exiting the context manager. Despite (as far as I could tell) copying the example code exactly and the context manager doing exactly what I wanted to have happen, I then had to take the list of tasks I'd created and manually await them one by one anyway, then validate their results existed. Otherwise Python was spawning 40 external processes, processing the "results" (which was about three incomplete image downloads), and calling it a day.
I hate writing code in golang and I have to google every single thing I ever do in it, but with golang, goroutines, and a single WaitGroup, I could have had the same thing written in twenty minutes instead of the three hours it took me to write and debug the Python version.
So yeah, technically I got it working eventually but realistically it made concurrency ten times worse and more complicated than any other possible approach in Python or golang could have been. I cannot imagine recommending async Python to anyone after this just on the basis of this one gotcha that I still haven't figured out.
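(For the record, the variant that does wait looks roughly like this - the group only tracks tasks created through tg.create_task(), which seems to be the usual source of the "it exited immediately" surprise; download() here is a stand-in:)

```python
import asyncio

async def download(name: str) -> str:
    await asyncio.sleep(0.1)              # stand-in for a real fetch
    return f"{name} done"

async def main() -> list[str]:
    tasks = []
    # TaskGroup (Python 3.11+) waits, on exit from the async with block,
    # for every task created via tg.create_task(); coroutines that are
    # merely called or scheduled elsewhere are not tracked.
    async with asyncio.TaskGroup() as tg:
        for name in ("a", "b", "c"):
            tasks.append(tg.create_task(download(name)))
    # By this point every task has finished.
    return [t.result() for t in tasks]

print(asyncio.run(main()))
```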
I say that as someone who prefers JS promises: you likely won't face issues with either.
Also I think the node approach is probably still more performant than FastAPI but that's just a hunch.
Hopefully they won't have security issues because someone hijacked the node package that sets the font color to blue or passes the butter or something.
I started ripping them out of a java system even before that.