I've been playing with a toy app that dabbles in the Cal/CardDAV space, and it blows my mind that, for all the power latest-generation languages have, the thing I keep coming back to is the PHP-based Sabre/DAV. That's not to say PHP isn't modern now; it's more a reflection of my surprise that there doesn't appear to be any other library out there that does as good (or nearly as good) a job at DAV as that one, and that one is pretty darn old.
On a different point, I don't think the author's point about having to "also" inspect the headers is a fair critique of DAV - HTTP headers are to indicate a certain portion of the request/response, and the body a different one. I wish it was simpler, but I think it's an acceptable round peg in a round hole use of the tools.
Author here. I'd be more inclined to agree about the headers if they were consistent. For instance, why are only Allow and DAV part of the headers (with all of their bizarre options) and not things like the supported report set or privileges? It would be better to have all of this in the body somehow, especially Depth.
I wrote a standalone CardDAV server ages ago and the biggest frustration for me was just how buggy the clients were. At some point I stopped self-hosting and moved on.
Mounting WebDAV -- if you are in a situation where you have to do it (e.g. own^W^W^Wnextcloud) -- is such an adventure. Everything - mac, win, linux - supports WebDAV. You mount and it works! Then you notice HOW it works: files are downloaded in full before a program can access them, some operations are super slow, some fail or time out, plaintext credentials end up in mysterious places...
I heard DeltaV is very advanced, and Subversion supported it. I'm afraid to ask.
I'm using the nextcloud app on my android, and for my Linux systems I mount WebDAV using rclone, with VFS cache mode set to FULL.
This way I can:
1. Have the file structure etc synced to local without downloading the files
2. Have it fetch files automatically when I try to read them. Also supports range requests, so if I want to play a video, it sort of streams it, no need to wait for download.
3. If a file has been accessed locally, it's going to be cached for a while, so even if I'm offline, I can still access the cached version without having to verify that it's the latest. If I'm online, then it will verify if it's the latest version.
Overall, this has worked great for me, but it did take me a while before I set it up correctly. Now I have a cache of files I use, and the rest of the stuff that I just keep there for backup or hogging purposes doesn't take disk space and stays in the cloud until I sync it.
Since you are mounting and not syncing the files, what happens when you edit a file offline? And what if the file is also edited on another offline device?
Fair question. Conflicts happen, which I'm fine with.
Realistically speaking, most files I have in my cloud are read-only.
The most common file that I read-write on multiple devices is my keepass file, which supports conflict resolution (by merging changes) in clients.
That also used to happen when I tried editing some markdown notes using Obsidian on PC and then a text editor (or maybe Obsidian again?) on Android, but I eventually sort of gave up on that use case. Editing my notes from my phone is sort of inconvenient anyway, so I mostly just create new short notes that I can later edit into some larger note, but I honestly can't remember the last time this happened.
But yes, if you're not careful, you could run into your laptop overwriting the file when it comes online. In my case, it doesn't really happen, and when it does, Nextcloud will have the "overwritten version" saved, so I can always check what was overwritten and manually merge the changes.
P.S. If anyone wants to set this up, here's my nixos config for the service, feel free to comment on it:
# don't forget to run `rclone config` beforehand
# to create the "nextcloud:" remote
# some day I may do this declaratively, but not today
systemd.services.rclone-nextcloud-mount = {
  # Ensure the service starts after the network is up
  wantedBy = [ "multi-user.target" ];
  after = [ "network-online.target" ];
  requires = [ "network-online.target" ];
  # Service configuration
  serviceConfig = let
    ncDir = "/home/username/nextcloud";
    mountOptions = "--vfs-cache-mode full --dir-cache-time 1w --vfs-cache-max-age 1w";
  in {
    Type = "simple";
    ExecStartPre = "/run/current-system/sw/bin/mkdir -p ${ncDir}"; # create the mount point if it doesn't exist
    ExecStart = "${pkgs.rclone}/bin/rclone mount ${mountOptions} nextcloud: ${ncDir}"; # mount the remote
    ExecStop = "/run/current-system/sw/bin/fusermount -u ${ncDir}"; # unmount on stop
    Restart = "on-failure";
    RestartSec = "10s";
    User = "username";
    Group = "users";
    Environment = [ "PATH=/run/wrappers/bin/:$PATH" ];
  };
};
You might wanna look into OpenCloud (the fork of ownCloud Infinite Scale, i.e. the Go rewrite) [1]. I still use Nextcloud for the uploading of files and the calendar (though I may switch the latter), but I now sync the dir with Immich. Performance-wise a relief. I also swapped Airsonic Advanced (Java) with Navidrome (Go). Same story.
[1] https://github.com/opencloud-eu/opencloud
Windows officially removed support for WebDAV. It still works, but nothing is guaranteed. It has a stupid limitation on file size of 10MB; it can be lifted to 2GB (max signed 32-bit number) in the Registry, but that is still not very much in the modern world (I wanted to share my media library via WebDAV and failed due to this limitation). It loses credentials on a regular basis, errors are too vague («Wrong credentials» means both a mistyped password AND an expired server certificate), etc.
Subversion works OK over WebDAV; it has done so for decades.
When mounting a directory through NFS, SMB or SSH, files are also downloaded in full before a program accesses them. What do you mean?
Listing a directory or accessing file properties, like size for example, does not need a full download.
I am confused, what do you mean? What OS forces you to download the whole file over NFS or SMB before serving read()? Even SFTP supports reading and writing at an offset.
If I open a doc over NFS with, let's say, LibreOffice, will I not download the whole file?
On second thought, I think you are looking at WebDAV as sysadmins, not as developers. WebDAV was designed for document authoring, and you cannot author a document, version it, merge other authors' changes, or track changes without fully controlling the resource. Conceptually it's much like how git needs a local copy.
I can't imagine having an editor editing a file while the file is being changed at any offset, at any time, by any unknown agent, without any type of orchestration.
If you open a file with LibreOffice, it will read the whole thing regardless of whether the file is on NFS or not.
The parent comment was stating that if you use the open(2) system call on a WebDAV mounted filesystem, which doesn't perform any read operation, the entire file will be downloaded locally before that system call completes. This is not true for NFS which has more granular access patterns using the READ operation (e.g., READ3) and file locking operations.
It may be the case that you're using an application that isn't LibreOffice on files that aren't as small as documents -- for example if you wanted to watch a video via a remote filesystem. If that filesystem is WebDAV (davfs2) then before the first piece of metadata can be displayed the entire file would be downloaded locally, versus if it was NFS each 4KiB (or whatever your block size is) chunk would be fetched independently.
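For what it's worth, nothing in the protocol itself forbids partial reads: a WebDAV resource is just an HTTP resource, and most servers honor Range requests (that's also what the rclone VFS streaming mentioned above relies on); it's the davfs2 whole-file caching model that forces the full download. A rough Go sketch against a hypothetical URL, assuming the server supports ranges:

package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	// Hypothetical WebDAV resource; assumes the server honors Range headers.
	req, err := http.NewRequest("GET", "https://dav.example.com/videos/movie.mkv", nil)
	if err != nil {
		panic(err)
	}
	// Ask for only the first 4 KiB, roughly what an NFS client would fetch per block.
	req.Header.Set("Range", "bytes=0-4095")

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	// 206 Partial Content means the server honored the range;
	// 200 OK means it ignored it and is sending the whole file.
	fmt.Println("status:", resp.Status)
	chunk, _ := io.ReadAll(resp.Body)
	fmt.Println("bytes received:", len(chunk))
}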
But many other clients won't. In particular, any video player will _not_ download the entire file before accessing it. And for images, many viewers start showing the image before the whole thing is downloaded. And to look at zip files, you don't need the whole thing - just the index at the end. And for music, you stream data...
Requiring that files are "downloaded in full before a program can access them" is a pretty bad degradation in a lot of cases. I've used SMB and NFS and SSHFS, and they all let you read any range of a file and start giving you the data immediately, before the full download.
That's the beauty of working with WebDAV, also captured vividly in the above article -- any particular server/client combination feels no obligation to try and act like some "standards" prescribe, or make use of facilities available.
I might be wrong, but when I last mounted webdav from windows, it did the same dumb thing too.
Actually - I believe - in Windows 11 the "WebClient" service is now deprecated (which is what, IIRC, actually implements the WebDAV client protocol so that it works with Windows File Explorer, drive mappings, etc.)...
Played around with WebDAV a lot... a long time ago... (Exchange Webstore/Webstorage System, STS/SharePoint early editions)...
I built a Go CalDAV server and client for my task management app (http://calmtasks.com) and had a similar experience, which surprised me. Go generally has at least one good, working, and well-documented implementation of every standard protocol.
Apple Calendar supports CalDAV, but in a way not specified in the spec. I basically had to send requests and inspect the responses to figure out how it works. I would be willing to open source my server and client (a lot of which was built using/on top of existing libraries) if there is interest.
I once implemented a WebDAV server in PHP. The standard isn't that bad, and clients more or less follow it. It's still horrible how they do it, though. When opening a single file I saw behavior like:
- does / exist?
- does /path/to exist?
- does /path/to/file exist?
- create a new file /path/to/file.lock
- does /path/to/file.lock exist?
- does / exist?
- does /path/to/file exist?
- lock /path/to/file
- get content of /path/to/file
- unlock /path/to/file
- does /path/to/file.lock exist?
- remove /path/to/file.lock
(if not exactly like that it was at least very close, that was either Finder on OS X or Explorer on Windows).
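For anyone wondering what those "does X exist?" probes look like on the wire: each one is typically a PROPFIND with a Depth: 0 header, where 207 Multi-Status means "yes" and 404 means "no". A rough sketch of such a check (Go used only for illustration, placeholder URL, no auth shown):

package main

import (
	"fmt"
	"net/http"
	"strings"
)

// exists performs the "does this resource exist?" probe described above:
// a PROPFIND with Depth: 0 asking for the resourcetype property.
func exists(client *http.Client, url string) (bool, error) {
	body := `<?xml version="1.0"?><propfind xmlns="DAV:"><prop><resourcetype/></prop></propfind>`
	req, err := http.NewRequest("PROPFIND", url, strings.NewReader(body))
	if err != nil {
		return false, err
	}
	req.Header.Set("Depth", "0")
	req.Header.Set("Content-Type", "application/xml; charset=utf-8")

	resp, err := client.Do(req)
	if err != nil {
		return false, err
	}
	defer resp.Body.Close()

	switch resp.StatusCode {
	case http.StatusMultiStatus: // 207: the resource exists, properties returned
		return true, nil
	case http.StatusNotFound: // 404: the resource does not exist
		return false, nil
	default:
		return false, fmt.Errorf("unexpected status %s", resp.Status)
	}
}

func main() {
	ok, err := exists(http.DefaultClient, "https://dav.example.com/path/to/file")
	fmt.Println(ok, err)
}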
Without some good caching mechanism it's hard to handle all of the load when you get multiple users.
Also, the overwrite option was never used. You'd expect a client to copy a file, get an error if the target exists, ask the user if it's OK, then send the same copy with the overwrite flag set to true. In reality, clients do all the steps manually and delete the target before copying.
It was satisfying seeing it work at the end, but you really need to test all the clients in addition to just implementing the standard.
Articles like this shitting on WebDAV really rub me the wrong way, as I've seen first-hand discussions that go like: "the internet says WebDAV is hell, what's the better alternative? S3 of course!" And now every cloud provider, instead of providing a WebDAV interface, provides an S3 one, and it's worse in every possible way: you can't rename a file/folder because S3 does not support that; you can't use a classic username/password authentication mode but are forced to use an ugly access_key_id and secret_access_key; you can't bash your way around with a simple curl command because generating the signature requires a proper programming language; and you have to trust Amazon to do the right thing instead of going through the RFC process - except they've already shown, a few months ago, their complete lack of care for any S3-compatible server by introducing a breaking change that literally broke the entire ecosystem of "S3 compliant" implementations overnight and without any prior warning.
I wish WebDAV had a better reputation; it carries the original promise of S3 of being actually simple, but S3 won the war with evangelism. I would much have preferred a world where new versions of the WebDAV protocol were made to address the quirks, exactly like what happened with protocols like HTTP, OAuth, ...
Postel's Law strikes again. What's the point of having RFCs with MUST and SHOULD if everyone does what they need? You end up with French cafe[0] implementations.
[0] https://www.samba.org/ftp/tridge/misc/french_cafe.txt
> Ah, looks like it was somewhat superseded by RFC 4918, but we’re not going to tell you which parts! How about those extension RFCs? There’s only 7 of them…
This is a major complaint I have with RFCs.
If you want to know the current standard for a protocol or format you often have to look at multiple RFCs. Some of them partially replace parts of a previous RFC, but it isn't entirely clear which parts. And the old RFCs don't link to the new ones.
There are no fewer than 11 RFCs for HTTP (including versions 2 and 3).
I really wish IETF published living standards that combined all relevant RFCs together in a single source of truth.
Is this still true? AFAIK I've seen "Updated by" (rfc2119) and "Obsoleted by" (rfc3501), but that might have changed afterwards: https://stackoverflow.com/a/39714048
When working on pimsync[1] and the underlying WebDAV/CalDAV/CardDAV implementation in libdav, I wrote "live tests" early on. These are integration tests which use real servers (radicale, xandikos, nextcloud, cyrus, etc). They do things like "create an event, update the event, fetch it, validate it was updated". Some tests handle exotic encoding edge cases, or try to modify something with a bogus "If-Match" header. All these tests were extremely useful to validate the actual behaviour, in large part because the RFCs are pretty complex and easy to misinterpret. For anyone working in the field, I strongly suggest having extensive and easy-to-execute integration tests with multiple servers (or clients).
All servers have quirks, so each test is marked as "fails on xandikos" or "fails on nextcloud". There's a single test which fails on all the test servers (related to encoding). Trying to figure out why this test failed drove me absolutely crazy, until I finally understood that all implementations were broken in the same subtle way. Even excluding that particular test, all servers fail at least one other test. So each server is broken in some subtle way. Typically edge cases, of course.
By far, however, the worst offender is Apple's implementation. It seems that their CalDAV server has a sort of "eventual consistency" model: you can create a calendar, and then query the list of calendars… and the response indicates that the calendar doesn't exist! It usually takes a few seconds for calendars to show up, but this makes automated testing an absolute nightmare.
[1]: https://pimsync.whynothugo.nl/
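pimsync and libdav aren't written in Go, but just to illustrate the kind of workaround that eventual consistency forces on a test harness, a polling helper might look roughly like this (the listCalendars callback is a stand-in for whatever client API is in use):

package main

import (
	"fmt"
	"time"
)

// waitForCalendar polls until a newly created calendar shows up in the listing,
// papering over eventually consistent servers like the one described above.
// listCalendars is a hypothetical callback returning the hrefs of all calendars.
func waitForCalendar(listCalendars func() ([]string, error), href string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		hrefs, err := listCalendars()
		if err == nil {
			for _, h := range hrefs {
				if h == href {
					return nil // the calendar finally became visible
				}
			}
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("calendar %s not visible after %s (last error: %v)", href, timeout, err)
		}
		time.Sleep(500 * time.Millisecond)
	}
}

func main() {
	// Fake listing that only becomes consistent after a few calls, just to exercise the helper.
	calls := 0
	fake := func() ([]string, error) {
		calls++
		if calls < 3 {
			return []string{}, nil
		}
		return []string{"/calendars/user/new-cal/"}, nil
	}
	fmt.Println(waitForCalendar(fake, "/calendars/user/new-cal/", 5*time.Second))
}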
I once implemented JavaScript's new async-for in plain Objective-C for a WebDAV app that I wrote for a client, about 15 years ago. I was so much smarter back then than I am now. Does this happen to everyone? You just go downhill? Anyway I'm sure there were complex edge cases of WebDAV that I missed, but it worked really well in all my tests, and my client never complained about it.
For myself I don't think I was smarter before, I just paid less attention to what I was doing. I didn't know about all the edge cases. I hadn't built it before so I massively underestimated how much work it would be to get done. This makes it much easier to start.
What I did before with ignorance, I now do with experience. For projects which support it, I write tests first. Find the edge cases and figure out what I'm going to skip. I will know the scope of my project before I start it.
With solid tests in place, my productivity and confidence soar. And the implementation doesn't result in as many bugfixes as it did in the past.
This kind of improvement is hard to notice. You're looking at the end result of your previous work and your memory of working on it will be incomplete. Instead you're looking at what it would take for you to implement it now.
On top of all of this, do you have more responsibilities or think through your actions more than you did before? This sucks time and mental bandwidth. You have less opportunity to use your intelligence.
I had the same feeling before about a story I wrote. The stars aligned for me to write something truly excellent. For years I thought that it would be my best work. I've never been so relieved to hate something. I will always be proud of it but I no longer think it's the best I can do.
I've actually done some WebDAV: I did a small client (talking to Apache) from JS that worked well enough for my purposes.
The nasty surprise was doing the server side (for a hobby project) - many layers. Luckily I found out that something called DavTest exists (it's included with Debian), so testing the most basic things wasn't too bad.
Then I tried mounting from Windows and ran into a bunch of small issues (IIRC you need to support locking); I got it to mount before noticing the notes about a 50MB file-size limit by default (raisable... but yeah).
It's a shame it's all such a fragmented hodge-podge because adding SMB (the only other "universal" protocol) to an application server is just way too much complexity.
I’m pretty sure there is a complete webdav implementation which ships with golang in the stdlib… Why do you need an external library for this? You just need to wrap it in a main.go and boom, webdav server.
It's in the extended standard library at https://pkg.go.dev/golang.org/x/net/webdav . Whether or not it would meet their needs, they'd have to tell us. I don't think they told us enough to evaluate that ourselves, and even if they did doing even a quick job is probably at least an hour's careful reading and comparing and that's past my budget for an HN post. And they're not obligated to give us an absolutely complete accounting of everything they considered. That just generates other complaints.
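For reference, the wrapper being alluded to really is small. A minimal sketch using golang.org/x/net/webdav (plain file serving only - no auth, no CalDAV/CardDAV, no custom storage backend - which is exactly why it may not meet a product's needs):

package main

import (
	"log"
	"net/http"

	"golang.org/x/net/webdav"
)

func main() {
	h := &webdav.Handler{
		FileSystem: webdav.Dir("."),   // serve the current directory
		LockSystem: webdav.NewMemLS(), // in-memory LOCK/UNLOCK support
		Logger: func(r *http.Request, err error) {
			if err != nil {
				log.Printf("%s %s: %v", r.Method, r.URL.Path, err)
			}
		},
	}
	log.Fatal(http.ListenAndServe(":8080", h))
}

Anything beyond serving plain files out of a directory - custom backends, calendars, contacts - is where the pain described in the article starts.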
> Now before you mention NIH syndrome, yes, we looked at the existing Go implementation, go-webdav. This library was lacking some key features we needed, like server-side collection synchronization, and the interfaces didn’t really align with our data model. This is also going to be a key feature of our product, so we should have some level of ownership for what gets implemented.
On not implementing half the RFCs, this is almost always true... some parts of "standards" are impractical or very difficult to implement, have no practical use or just aren't needed for your use case.
I created a test LMS in 2003 based on SCORM, at the time there really wasn't a good server for the standard... The main point was to be able to test the content which the company was hired to generate. I didn't implement several points of functionality that I just didn't need, didn't care about, and would have been difficult to implement.
That testing LMS turned into an actual product used by several aerospace companies (a few F100's, etc) and was in production for over 15 years. I remain friends with the company owner... It was about 12 years later that someone had an actual course that used one of the features that wasn't implemented... and even then, it was only implemented halfway, as far as was needed, because it would have been difficult to do the rest.
> reverse engineering existing clients and servers by inspecting their requests and responses.
What a strange process... why not read the source code of an open source working library (easy to test, run a client made by someone else on its server, and vice versa) in a language close to the target?
Why not then use those tests as a way to verify your own work afterwards?
FWIW I'm using WebDAV, both with clients and with my own self hosted servers, on a daily basis and... it works.
Author here, we wanted a clean-room implementation and our own e2e test suite. There is some conformance tooling (like Apple's calendar test suite) that we partially used (it's... very comprehensive), but otherwise we wanted to validate our library against existing implementations (manually, for the most part) and then write tests against our own implementation (for the interfaces, mostly to prevent regressions). We created a little CLI tool ("validav") that can spin up a mock server or expose the client interfaces to help with manual testing.
One niceish thing about WebDAV/CalDAV is it's pretty set in stone for now.
Author here, we don't use generative AI for software development. We've been building since 2018, and our number one goal has always been ensuring our software remains maintainable.
Did you use the 'litmus' test suite? I found it very useful when building Fastmail's (perl) WebDAV file server implementation.
There were also a bunch of fun things with quirks around unicode filename handling which made me sad (that was just a matter of testing against a ton of clients).
As for CalDAV and CardDAV - as others have said, JMAP Calendars/Contacts will make building clients a lot easier eventually... but yeah. My implementation of syncing as a client now is to look for sync-collection and fall back to collecting etags to know which URLs to fetch. Either way, sync-collection ALSO gives a set of URLs which I then multi-get in batches, meaning both the primary and fallback codepaths converge on the multi-get (or even individual GETs).
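In case it helps anyone building a client, here's roughly what that strategy looks like; the davClient interface below is hypothetical (a stand-in for whatever CalDAV library is in use) and the batch size is arbitrary:

package davsync

// Hypothetical minimal client interface; real CalDAV libraries expose richer APIs.
type davClient interface {
	// SyncCollection issues a sync-collection REPORT and returns changed hrefs plus a new token.
	SyncCollection(collection, syncToken string) (changed []string, newToken string, err error)
	// ListETags returns href -> etag for every resource in the collection.
	ListETags(collection string) (map[string]string, error)
	// MultiGet fetches the given hrefs in one multiget REPORT, returning href -> body.
	MultiGet(collection string, hrefs []string) (map[string]string, error)
}

// syncOnce sketches the strategy described above: prefer sync-collection, fall back to
// diffing etags against what we already have, then fetch the changed hrefs via multiget
// in batches either way.
func syncOnce(c davClient, collection, syncToken string, known map[string]string) (map[string]string, string, error) {
	changed, newToken, err := c.SyncCollection(collection, syncToken)
	if err != nil {
		// Fallback path: the server doesn't support sync-collection (or the REPORT failed).
		etags, err := c.ListETags(collection)
		if err != nil {
			return nil, syncToken, err
		}
		changed = changed[:0]
		for href, etag := range etags {
			if known[href] != etag {
				changed = append(changed, href)
			}
		}
		newToken = syncToken
	}

	fetched := make(map[string]string)
	const batchSize = 50 // arbitrary; real clients tune this
	for start := 0; start < len(changed); start += batchSize {
		end := start + batchSize
		if end > len(changed) {
			end = len(changed)
		}
		part, err := c.MultiGet(collection, changed[start:end])
		if err != nil {
			return nil, syncToken, err
		}
		for href, body := range part {
			fetched[href] = body
		}
	}
	return fetched, newToken, nil
}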
I've tried that (with Sonnet 4.5 at least, not Opus) and Claude isn't good at code analysis because it's too lazy. It just grepped for a few things and then made the rest of it up.
I think the issue is mostly that it desperately tries to avoid filling its context window, and Anthropic writes system prompts that are so long it's practically already full from the start.
A good harness to read code for you and write a report on it would certainly be interesting.
Those two things aren’t mutually exclusive. It may be worthwhile to at least have Claude (or whatever LLM you favor) to look at the other libraries and compare it to yours. It doesn’t have to write the code, but it could point out areas/features you’re missing.
We know what we're missing (a lot; we didn't implement the full spec). We don't know what weird edge cases the clients/servers will have, and I would bet you decent money an LLM won't either. That's why manual testing and validation is so important to us.
I wouldn’t be so sure about the LLM not helping. The LLM doesn’t need to know about the edge cases itself. Instead, you’d be relying on other client implementations knowing about the edge cases and the LLM finding the info in those code bases. Those other implementations have probably been through similar test cycles, so using an LLM to compare those implementations to yours isn’t a bad option.
Golang is one of the only languages with a more or less working library. I built with it and, using some hacks, got it hooked up to AWS API Gateway with Lambda. Reading the room, the lack of language support does make it pretty suspect in 2026, even if the client support is still pretty good. Recently I have abandoned it in favor of AWS Mountpoint (Rust S3 mounting) and, combined with Lambda object get and list functions, have achieved most of the same functionality. The downside is that you lose the ability to talk to the many varied clients, like an old HP printer, which (obviously) can't use FUSE.
YMMV and a lot of people hate it, but I've run Nextcloud for this for years. It has pretty comprehensive support for WebDAV and CalDAV. Has sharing and lots of different authentication options; I use OIDC with PocketID.
It used to be a constant headache to keep running, but ever since I switched to the TrueNAS/Docker plugin it has worked smoothly. I know a lot of other people also have had good luck with the much lighter Radicale if CalDAV is your primary concern.
> It used to be a constant headache to keep running
It's been very easy to run for me since version 15 or something. Basically I just use the stock docker image and mount a few files over there. The data folders are bind-mounted directories.
As usual with anything PHP, it's only a mess if you start managing PHP files and folders yourself. PHP has a special capability for making these kinds of things messy, I don't know why.
I have some patches saved up coming your way once I leave my current employer, who doesn't allow external open source contributions. Though that is not for a few years probably.
Love the libraries BTW. Thank you for all of your hard work.
I honestly didn't know WebDAV was still a thing. It seems like a nightmare from which we should have woken up long ago. I do sympathize, though. Every service implements (or implemented) it differently, or spottily, or partially. Is it another example of the adage that "even your bugs have users"?
Why is it a nightmare? I use WebDAV almost every day. All my online servers support it, and it makes it super easy for me to map them as a file system driver in Windows, allowing me to edit files directly on the server.
own^H^H^Hnextcloud
or
own^Wnextcloud
Do you use this for anything other than photos and videos?
https://www.thehacker.recipes/ad/movement/mitm-and-coerced-a...
Thank you!!!!
Also, would be nice to add some screenshots of the web UI.
Looks like a nice little app!
The author's mention of a lawsuit for not following an RFC is insane.
This is a different, non x/net library.
> You just need to wrap it in a main.go and boom, webdav server.
Lol
Why not download the most popular DAV libraries in various languages - Java, C++, PHP, etc. - regardless of how ancient they are?
And then have an AI like Claude analyze them and bring the improvements into your own Go library?
I was doing something like that for Kerberos and Iceberg Rest Catalog API, until I got distracted and moved on to other things.
You don't have to use it to directly write code. You can use it just for the analysis phase, not making any changes.
How close to retirement are you?
I picked it because it's in a language I know (Python) and free and copyleft. These days I don't contribute to anything unless it's copyleft.
No idea if it supports family calendar, I need to look into that as well at some point.
EDIT: Just checked, and it supports auth, yes.
Yes! This is my #1 issue with the library as well.
If there's actual employer IP in there then just leaving said employer wouldn't magically clear it.
If there isn't and you're just trying to avoid red tape, then publishing it anonymously would work around the issue.
https://github.com/lookfirst/sardine
[0] https://jmap.io/spec.html