> One of the unfortunate facts about open source operating systems like Linux is that specialized hardware support sometimes lags behind commercial operating systems like Windows and MacOS.
Well, that's because the manufacturer doesn't write one, not because of some intrinsic factor of FOSS. That's an odd point to make when it's the manufacturers' fault.
I wonder if it might be time for some legislation around this. Manufacturers are already required to include operating instructions with products they release, so why not protocol specs? They don't need to release anything secret, just the instructions for software to interact with their hardware.
This is an interesting point. While this would be nice for folks who just want drivers to be functional, I can also see why a manufacturer might be reluctant to offer such information. One reason they could bring up is that documenting and providing it may put an undue burden on them. Another possible issue is that it offers a more direct route for abuse of their software/hardware (maybe?). Or in some cases it may be a liability to offer such information (if there is potential for misuse).
Edit: or maybe they don’t want to release that kind of info for product/brand lock-in reasons?
> it may put undue burden on them to document and provide such information.
How can documenting something be an undue burden? In order to write the drivers that they do write, for, say, Windows, they must have written some documentation.
And if I want to abuse some piece of hardware, such as making a musical instrument out of a pile of floppy disk drives, what business is that of the manufacturer?
The documentation is not the burden as much as the requirement, or at least expectation, to adhere to the published specs once they are published. By not releasing any info they are not burdened with having to stay consistent on driver details from one product or version to the next.
I called my ebike company and apparently my year and make could have one of three different batteries. Seems infuriating, but hardware manufacturers don't have to be consistent I guess, just release some new/updated docs?
I wonder if it has to do with any underlying 3rd-party components. Otherwise, yeah, makes no sense.
It might be worth clarifying that Wacom is not taking that approach. But their statement does suggest that the underlying reason for the lack of hardware support in Linux is the open-source nature of the software.
Well one way that comes to mind for why it could be considered a burden is breaking down internal documentation into public and private, and making sure that the private one is never exposed. As much as we don’t want to imagine business constraints on some manufacturers, they could always come up with reasons.
> And if I want to abuse some piece of hardware, such as making a musical instrument out of a pile of floppy disk drives, what business is that of the manufacturer?
As an example (maybe not a fair one) we can ask Tesla why they don’t want to expose some/all APIs that Tesla owners might want access to. I expect they would bring up a thousand reasons why, maybe safety related, maybe not.
I generally agree with you though, you paid and own something, and you wish to use it as you see fit. Unfortunately it seems like what we think we own and what we actually own are in some cases two separate things.
Some documentation, yes, but when the guy who designed the hardware is down the hall from the guy writing the driver, the documentation typically isn't very good; a lot of effort gets saved by just walking over and asking.
Some vendors might also be bundling spyware with their drivers. In that case, releasing hardware info would take the (unjust) power over the users away from them.
> This is an interesting point. While this would be nice for some folks that just want drivers to be functional I could also see why a manufacturer might be reluctant to offer such information.
Well, yeah, the point of regulations is to make companies do things that are not in their direct self-interest, but benefit wider society.
Manufacturers would also be reluctant to comply with planned obsolescence bans, or consumer safety regulations. That's why the regulations must have teeth to them.
They already have the documents. But releasing them would make it easier for a competitor to make a cheaper drop in replacement. It would be a win for consumers though.
Please no. Get the government and their violence out. No one has an inalienable right to APIs and such. You have the right to buy what you want. If the market is not serving you, then look in the mirror and pick your priorities.
The right compromise here is that if the government is going to "protect" the company from copyright infringement, and reverse engineering, and enforce the companies' patents, then the government also has the moral right to regulate the company for the benefit of the people.
Comments like yours seem to advocate a very beneficial situation for the companies where they have all protections, all authority, without any responsibilities.
If you're serious about getting the government out, including out of copyright and patents, then let's talk. Until then, let's regulate these companies.
> If you're serious about getting the government out, including out of copyright and patents, then let's talk. Until then, let's regulate these companies.
I'm all for eliminating all government privileges and cronyism (which are really rights violations of the competition). In this world, there is no way we're going to get a grand bargain where all things simultaneously change in that way. For each proposed change, ask whether we increase or decrease natural rights (especially to life and property). "Intellectual" property is a creation wholly new in human history, dating back only a few hundred years. Property in land and goods goes back to evolutionary times: it's basic territoriality, and respecting that keeps the peace. Free trade is the way forward to widespread prosperity.
Wouldn't being required to release hardware specs support that healthy competition though? I'm sure the market will eventually settle on the better product either way, but it will happen faster if systems are documented and interoperable enough that people can easily switch to a better competitor. Companies withholding data that would let consumers make better decisions, or deliberately making their systems incompatible, is anti-competitive and wasteful.
Of course, people having control over the hardware and software that rule their lives is a net increase in natural rights.
There is absolutely no natural justification for "intellectual property", since this travesty runs absolutely contrary to actual property -- physical property cannot be freely duplicated, whereas intellectual property in most cases cannot be used without being duplicated in some form; mere knowledge of it is generally enough to do so.
All this to enforce laws that terribly hinder the free market, while being abusively enforced by the violence of government? Something smells fishy about this argument...
> For each proposed change, ask whether we increase or decrease natural rights (especially to life and property).
I asked myself and answered: Requiring companies to document their hardware for right-to-repair or right-to-utilize reasons has no effect on life, and some effect on property. It would give individuals more power over their own property - more power over their old school, physical possession, basic territoriality, property. The government protecting people's right to repair and utilize their physical possessions seems like a good thing.
At this point you seem, to me, to have reversed position. Are you now okay with requiring companies to document their APIs for the benefit of consumers (in some cases at least)?
Either way, this conversation is frustrating to me, and one I won't continue here (you're, of course, welcome to respond and debate with others). I'm frustrated because the "no regulation, no government" view hides a lot of nuance and shuts down conversations. It's so easy to throw out "no government" and so hard to talk past it, and I think this is one reason libertarian views like this often cause people to just roll their eyes and ignore them. I'm sympathetic to a lot of libertarian ideas, but I wish people would give more acknowledgement to why people want regulation (in this case) before a drive-by "no government please" comment.
Two wrongs don't make a right. Please don't use one travesty to justify another.
Instead, answer for yourself: why should I have any right to dictate the standards of the goods you may knowingly and willingly purchase, particularly when the consequences of your substandard purchase are borne by you? Should I literally be able to force you to upgrade to a product with iFixit-approved repairability? Under what moral code is that legitimate? Protecting people from themselves, i.e., treating you as my child?
> why should I have any right to dictate the standards of the goods you may knowingly and willingly purchase, particularly when the consequences of your substandard purchase are borne by you?
They are not borne by me, as I personally have very little influence on the market. The market is dictated by the masses, and the masses are manipulated (nudged) by the media, which is in turn controlled by the corporations.
Your argument blaming individuals for the bad choices the masses make is deeply flawed and skewed in favor of corporations.
Because we live in a society, and our choices affect each other, and we have democratic systems in place to collectively decide on things? You'd have a point if you alone were deciding these standards as a dictator-for-life, but that's not how it works.
Markets by themselves tend to explore only the spaces where large parts of the population take them. Because of limited information, most of us often prefer clear short-term benefits over uncertain and often unclear long-term benefits. Legislation is a good tool to, at least temporarily, force markets to explore parts of the state space they are extremely unlikely to explore by themselves.
The hardware manufacturer does not want to make it any easier to compete with a similar product either. If you have the protocol, you make it easier to create a drop-in replacement without bothering with any drivers.
This is why good customer support and warranties are as important as the product itself. It doesn't matter how big the company is either. It's why people pay more for certain brands, especially in the enterprise and professional creative markets.
They are conflating two different issues: Linux's lack of a stable binary interface for drivers, and Linux being open source. It's the lack of a stable ABI that's preventing third parties from maintaining decent drivers for Linux (plus the lack of economic incentives, of course).
You are conflating two issues as well: the manufacturers' insistence on closed-source drivers, and Linux's unstable (binary!) ABI. If the manufacturers were to upstream their drivers into the Linux kernel, they wouldn't need to update them for every ABI change.
It's often not easy or even possible to simply open-source the drivers. What if there are patents at play? How about third- (fourth-?) party dependencies? I'm sure the legal department would take major offence at slapping the GPL on the code and releasing it.
I can also imagine a company like Wacom fighting cheap Chinese copies - I can't imagine persuading the business folks to make the copycat's life way easier by open sourcing the logic.
As long as Linux says it's either mainline or the highway, well, companies will be reluctant to put in the resources I'm afraid.
Nevertheless, any trade secrets which are needed in the interface that is necessary for a device to be used by its owner, can easily be reverse engineered by someone with enough money, i.e. by all the important competitors.
So hoping that not providing documentation for the interface of your devices is a method to fight your powerful competitors is an illusion.
This method works only against your normal customers, by preventing them from finding reasons to buy more of your products, for applications that you do not support, e.g. use under other operating systems.
Trade secrets are useful and they may be effectively protected when they do not concern the normal operation of a product, e.g. when they are used during the fabrication of a product.
Avoiding patent infringement claims is probably a significant motivation to stay closed source. There are just too many patents on obvious solutions any engineer would come up with.
But you don't always have the right to use them, or share them.
If company A owns a patent, and company B buys a license, company B may not have the rights to release their code under the GPL and share it into mainline Linux.
Hm, interesting, I hadn't thought of that. I know that Linux is fine mainlining drivers and updating them whenever the ABI changes, but what's the rationale behind not offering a stable ABI?
There have been various explanations and rationales provided over the years, but here are Linus' own words from 1999 (see https://lwn.net/Articles/159313/):
> Basically, I want people to know that when they use binary-only modules, it's THEIR problem. I want people to know that in their bones, and I want it shouted out from the rooftops. I want people to wake up in a cold sweat every once in a while if they use binary-only modules.
The less activist stance is simply that maintaining a stable ABI requires either accumulating lots of backwards compatibility layers over time, or to stop improving certain parts of the kernel. Neither option has won out over the current situation.
I know nothing about this topic. Is there a reasonable workaround for this, i.e. something that provides stability for hardware manufacturers while allowing the current kernel practices to continue?
I always thought they insisted on breaking ABI specifically to incentivise open-sourcing and mainlining drivers. If given a stable ABI, manufacturers have shown (as seen on Windows) that they will just release a binary driver once and never touch it again. This is a big reason why old hardware support is so much better on Linux...for the hardware that was supported in the first place of course.
Well, my old laptop's (HP nc8430 from 2006) graphics card worked perfectly up to some 3.x kernel, then it lost all sync with 3.y kernels (y > x). It was an ATI X1600. There were no more recent drivers, so my take is that the kernel broke it and ATI didn't spend money to make their driver compatible with newer kernels. Actually AMD, because they bought ATI in 2006. I downgraded the kernel and kept using the laptop for a while. BTW, the open-source driver didn't work well.
Then I had to buy a new one in 2014. I told myself: no more ATI/AMD, let's buy one with an NVIDIA card. That card is a Quadro K1100M in an HP ZBook 15. The latest binary driver supporting that card is version 418; the newest driver is 510. Apparently the next Ubuntu 22.04 is shipping with kernel 5.15, and there is a 418 package for that kernel. I wonder what the latest compatible kernel will be. Is my laptop going to die of old age before I have to replace it because I can no longer use its graphics card? BTW, the open-source driver doesn't work well. I don't check it again every year; all I remember is that I couldn't work with it the last time I tried.
On the other side, my USB scanner from the 90s still works perfectly well. Same for my mouse (I don't plug it in every year) and any disk/pendrive I care to use.
I have a ThinkPad with a Quadro 1000M from 2011. The last Nvidia driver is 390. It still works on Arch with latest kernel somehow. I doubt it has much support left as pre-2010 GPUs have been abandoned. So I wouldn't be surprised if you have a few years left + time spent on an LTS with older kernel. I recently switched to Guix and have been using Nouveau there and it's fine but I also don't need my laptop to do much beyond display windows and play the occasional video.
Nowadays and into the future, your best bet will be AMD, since their AMDGPU driver is open source, and will therefore be kept up to date without any reliance on the company.
A couple of years ago, I maintained a Linux device driver for some time.
It was not official maintenance; it was just an unmaintained open-source device driver found somewhere, which was needed for my hardware. I had to update it after every Linux kernel release in order to be able to continue using it.
What was annoying was not the fact that almost every kernel release required modifications in the device driver, but the lack of documentation about the kernel changes that is usable by someone who does not follow the kernel mailing lists every day.
The 3 main reasons which broke the device driver after each new kernel release were:
1. Some reorganization of the kernel header files, which moved some definitions to other header files that were not included in the previous driver code.
2. Some structures had members added or deleted.
3. Some functions had parameters added or deleted.
Point 1 was easily solved by a search through the entire Linux source tree.
When structure members or function parameters were deleted, one could hope that it was enough to also delete them in the device driver, even if in some cases some earlier-executed initialization code had to be modified to make everything work as before.
The worst was when there were new structure members or function parameters, as there was no way to guess which values should be put in them.
Because, at least then (and I suppose nothing has changed), there was no centralized document listing the changes that need to be applied to drivers, the only way was to search the kernel mailing lists to discover where the patch that changed that structure or function was.
That search was not too difficult, but I have never seen a message describing the changes that also explained what values must be put in the new members/parameters.
So after finding who did the changes, the mailing lists had to be searched for messages with the same authors or similar subjects to find any relevant information.
In many cases the search took too long, so it could be simpler to fall back on the solution of last resort: reading the source code of various kernel subsystems or of other device drivers where the offending structures or functions were also used, to discover what values might be expected.
None of these activities was too difficult, but they were quite time-consuming.
What I would have expected is that anyone who makes a kernel change like that, would also write a short migration guide, saying e.g. that whoever used the previous function with 5 parameters now has to put the X value in the 6th parameter to obtain the previous behavior.
With the right documentation of the kernel changes, I could have updated the device driver in a couple of minutes every time, but, unfortunately, I have never seen such documentation.
(The "ChangeLog" of the kernel is usually completely irrelevant for device driver maintenance, because it does not list the symbols affected by changes, so when your device driver compilation stops on function or structure XYZ, you cannot search it in the ChangeLog to discover which is the change that affected it.)
Agreed with your points. As a user affected by the current ath9k wireless problems: code changes made to address security issues completely broke the wireless, for over two kernel releases, until people alarmingly realized that no one was coming to help. The people who changed the code weren't even aware it was broken and hadn't checked; the interfaces were abstract and misunderstood. Even Linus stepped in, said that such breakage and regression violated the rules, and decided the patches should be reverted. The point is: little to no documentation, and in my opinion too much fiat, with shallow testing and confirmation; it was only noticed when people howled. And this is the second time for me. I once spent days bisecting the kernel* over a year or two of changes, finally getting the chip manufacturer involved. "Oops, sorry" was the response. *I realize that is my cost as a free-OS user, but still, I'm not a developer and they shouldn't be pulling the rug out from under us.
Imagine if you could resolve such conflicts in a few minutes. The code editor should also be a collaboration tool.
Using an online editor, mark a struct for changes, call a merge meeting with all affected module owners, and everyone changes their code right there and then.
Even using email to talk about code is stupid and an unnecessary hurdle.
There's no need for a stable ABI with open-source drivers, because Linux and its drivers are meant to be open source. What matters is the API, and modifications to it can be maintained upstream. If your hardware drivers aren't open source, the promises of the GPL break down.
When this model works, you get high-quality support and performance over a long time. Example:
Sandy Bridge (HD 3000 GPU) from Intel:
- Linux: OpenGL 3.3
- Windows: OpenGL 3.1
- MacOS: OpenGL 3.2 (I'm not sure!)
At the time of the hardware's availability that didn't matter much, but with modern applications and engines it does. If you use Linux and Steam, you may benefit from that.
Android "Google/Linux" failed here because Google didn't push either ARM or Qualcomm. This, and separately maintained modifications by manufacturers, are the reasons Android devices cannot be trusted and struggle with updates. You cannot bump the Linux kernel if you don't get an updated driver. And you cannot bump the Google part because of the manufacturer patches that weren't upstreamed.
Android is an example of how open-source collaboration should not be done. Even Jolla failed and used closed-source drivers, which meant the need to remain ABI compatible, which meant they could not upgrade the kernel, which meant they could not fix issues within Btrfs. Purism avoided all that, with much hard work together with NXP! They just need to make the device smaller and cheaper. Sorry, Purism, but 800 bucks is expensive even for a nerd/developer device. Valve did a clever thing in using AMD for the Steam Deck.
Noteworthy:
Linux and glibc provide stable APIs and ABIs for userspace, and hopefully systemd does too. External developers and users rely on the promise of API and ABI stability.
I believe USB HID drivers have been in userspace for quite some time, in part precisely to provide a stable interface. I'm not sure why the Wacom drivers are an exception though: https://linuxwacom.github.io/
They don't write one because of the size of the userbase. They end up supporting 99.99%+ of their potential customers by having Windows and MacOS support for something like a drawing tablet; the amount of money to support Linux doesn't justify the potentially low-four digit amount of customers that would buy the product if it had Linux support.
As a Linux power-user and the tech referral of my family, extended family and friends, if something works nicely on Linux I am obviously going to recommend it to everyone. That has to mean something...
It still either has to be a passion project for an engineer or it has to make business sense. Even if it only takes a collective week's worth of work in a year to keep it supported, that's 1/52 of an engineer's salary, and that has to be made up for and exceeded by profit generated from the extra sales. Regarding this product, pens and drawing tablets for personal computers are already a relatively niche category, so the subsection of those users who also use Linux is probably very small.
It's not necessarily just the hardware spec and the drivers, though. I ran Linux on a Fujitsu convertible tablet/laptop circa 2010, which had a stylus that worked directly on the screen. The Wacom stylus worked great, as long as I was just using the laptop. But when I plugged in an external monitor, Linux tried to apply the stylus across the entire combined screen area of the laptop screen and external screen. It was a relatively rare use case, but it wasn't an issue in Windows. I'm not blaming Linux, just wanting to point out that sometimes systemic issues are overlooked.
Not necessarily. Ubuntu LTS kernel is often very far behind and you have to rely on Canonical to backport drivers for you.
e.g. Vulkan on my 5700 XT didn't work (despite AMD being very good at writing kernel drivers!) because the Ubuntu kernel was so far behind. Switching to an up-to-date vanilla kernel and all my problems went away...
"Well that's just because the manufacturer just doesn't write one, not because of some intrinsic factor of FOSS"
Linux wants to force the manufacturers to open-source their drivers by intentionally changing the ABI, which makes it harder to write a driver as a binary blob once.
I want open source drivers. But I am not sure this is the right way to get them.
Hardware makers feel forced to protect their trade secrets; this is the current business reality. Open hardware is the rare exception.
The reality also is that I gave up on Linux on many machines because of bad drivers. With more stable drivers available, even if proprietary, I would have been able to convince many more people of Linux, as well as getting rid of Windows on my laptops.
And more people using Linux on a daily basis means more people exposed to the idea of software freedom. And not just hackers.
Interesting; can you or someone please elaborate? I thought that device-specific drivers wouldn't be needed for the vast majority of audio interfaces, because they're connected via USB and all they need to do to ‘just work’ upon plugging in is implementing the relevant USB device class.
From my (programmer's) point of view, all I need to know to work with an audio interface is the number of input and output channels and the respective arrays of sample rates and bit depths. In theory, a generic USB driver can handle that. In practice, I observe that my Linux-using colleagues have 99 problems with audio, but none of those is related to plugging in a $1 dongle from AliExpress. (Therefore, I used to think that those audio interfaces ‘just work’, but now I'm curious about what I've been missing all along.)
Why is releasing Linux drivers for an audio interface even a thing?
That's true for the cheap dongles. Where it gets problematic is the high-end audio interfaces for studio recording work. The USB Audio Class 1 standard was released in 1998 and supports 2 channels at 24-bit/96 kHz, over USB 1.0. This is obviously anemic for all but the most basic work. The upgraded USB Audio Class 2 standard was released in 2009, but was unsupported in Microsoft Windows until an update to Windows 10 brought it in 2017. Therefore, prior to that, any manufacturer who wanted to support Windows was put in the position of writing their own drivers for anything with more than 2 channels.
Aha, that's a good point! It was silly of me to assume that if any cheap dongle works, that should mean ‘proper’ interfaces would work at least as well.
However, it seems like that explains why we needed Windows drivers, but not why device-specific Linux drivers are a thing? Unless the implication is that manufacturers who needed to make custom drivers anyway didn't bother to make their devices class compliant?
In many cases it's not necessary though. Most "pro" audio interfaces support MacOS because they are "class compliant" and therefore don't need extra drivers on MacOS. You could also say "no major audio interface company releases MacOS drivers".
Linux should support all these "class compliant" audio interfaces too, but I can't remember if you need extra software (like JACK?) for this.
Cool! I always assumed the Linux driver was maintained as a best effort side project by Wacom engineer(s), not as an official endeavor.
Overall, Wacom tablets work fairly well under Linux and have for quite some time. There are some caveats with what events the devices report vs what apps expect, but I think that’s smoothed out in the modern world of XI2, the wayland tablet api, and libinput.
I’m not a passionate user of graphics tablets, but in my experience, Wacom tablets work better on Linux than Windows (and similarly well to macOS), partly because on Windows they predate Windows having an input stack that can support pressure-sensitive graphics tablet pens. That arrived in Windows 8. Wacom had instead implemented a sort of de facto standard called Wintab, which they seemingly standardized with a company called LCS/Telegraphics in the Win 3.x days. In the modern incarnation of this, the tablet kernel driver appears to act as a relay for hardware events, but those events get pumped back to a usermode service, and then wintab32 clients get them pumped into their event loops. Modern Windows has WM_POINTER* events, but it seems this usermode service is needed even for those events to work, probably so that Wintab and WM_POINTER events can remain consistent, sync with the control panel, and not fire over each other. Still, it’s a shame.
In the modern world, if you happen to have a USB tablet that speaks the HID digitizer API, Linux should be able to give you fairly good support straight out of the box, usually only lacking support for rebinding keys easily, controlling LEDs, etc. More tablet manufacturers have been choosing hardware that can support the HID digitizer class as it’s probably the only way to get graphics tablet support for Android devices. I found this to be the case with an XP-Pen device that I bought with intent to try writing a driver, only to find it spoke perfectly good standard USB, produced events in evdev, and plumbed through to Krita absolutely fine. To be sure, not all XP-Pen tablets will work this way, but some will. I’d guess ones that advertise Android support are likely to.
Some 15 years ago I remember there was already a Wacom engineer consistently working on the Linux drivers when there were practically zero applications able to use pressure input. Their kernel driver was always being maintained. The then-upcoming competition didn't bother with Linux. Not sure how it is today.
Just for this I would buy from Wacom again, except that I already have three tablets collecting dust from my lack of creative work. Just plugged in the oldest one, yep it still works on modern Linux.
It's super rare to see a company actually blog about how good their Linux support is. But digital artists overlap with Linux users enough that yeah, Linux support for digitizers is a big deal!
Indeed: Most of the artists at the high-end VFX studios (Weta, ILM, Framestore, MPC, DNEG, etc) are using Linux workstations and have been for years.
There's still a bit of Windows (for ZBrush), and other, smaller studios use Windows or MacOS more, but Linux support is very important for those larger studios.
Maya (modelling, UVing, layout, anim, rigging), Houdini (everything these days, but simulation/layout strongly), Nuke (compositing), Katana (scene management and rendering/lighting), Mari (texture painting).
Most packages (other than things like ZBrush) have Linux versions. Some packages were originally Linux only: Mari (although it had a Mac version at Weta that was discontinued for a bit and then brought back to life in 2013) and Katana (which now has a Windows version).
Most of the commercial production VFX/CG renderers (Renderman, Arnold, V-Ray) have Linux versions, and some of the proprietary ones (Manuka, Hyperion - I think?) are Linux only (they only need to be used by the studio themselves).
Is there anywhere one could see an example of these tools being used together in an end-to-end workflow, to get a quick sense of what VFX tools are capable of these days? It's a total black box to me, and I have no interest in it beyond the sheer curiosity of how far the tools have advanced and how much one can do with them with enough experience and effort.
If you're after what the software is capable of, then looking at the films is obviously the litmus test to some degree :), but more usefully, VFX breakdown videos or demos of the software might help.
I can give you some links to breakdowns the studio I work for worked on, but unless you know the software to some degree, it's probably a bit too fast to really understand exactly what the software is doing, and it doesn't really show the software being interacted with or used, but:
Yeah, if you're looking for workflow tutorials with specific tools, it'll be a bit more difficult since a lot of really big shops will roll some of their own tools between big commercial offerings like Lightworks, Resolve, Blackmagic Fusion, and the handful of very prominent open-source tools like Natron and Blender. Every project is going to be both chaotic on some level, and kind of protective of specifics.
Pixar in the past has been pretty open about their workflows, but those workflows are also full of their own tooling and technologies, a lot of which they've open-sourced: https://github.com/PixarAnimationStudios (cf. this 2020 interview with Nick Porcino, which calls out some things about open source: https://www.aswf.io/bts/nick-porcino-pixar/ )
Oh hey, speak of the devil, ASWF just put out a report a couple weeks ago about open source in entertainment: http://report.aswf.io/ (PDF)
You can check http://vfxplatform.com/, which is a high-level attempt to provide at least guidelines for a reference platform, and this SIGGRAPH 2021 presentation by Nick Cannon (Disney) and Francois Chardavoine (Lucasfilm/ILM) on it: https://www.youtube.com/watch?v=i4tXrtJBqK0
You are not alone: the strong marketing from Adobe and Apple creates the impression that image creators are all about having Adobe software on the latest Macs. This is far from the reality. While most don't associate Linux with artists, you could argue that the most impressive and sophisticated computer imagery is mostly created on this platform. Even when a studio is mostly Windows-based, after a certain size it gradually moves to Linux, like Scanline.
> we have the enthusiastic support of all of the major application providers for the visual effects and animation industry, including Autodesk, The Foundry and Side Effects Software. [...] Linux is prevalent in VFX, particularly in the larger studios where they build sophisticated automation pipelines that integrate multiple different software vendors’ products together.
Blender has a large market share. Other tools like RenderMan are used, as is painting software such as Krita, but Photoshop and Substance Painter are still king (Windows). Unreal Engine is gaining a lot of traction in the VFX digital set space.
Blender's not really used at the high-end facilities... It doesn't really scale that well on large scenes, and it only has a Python 3 API - VFX studios have only really moved to Python 3 this year.
Do we count the work behind the latest Evangelion movie as high-end? Because IIRC they openly talked about using Blender in increasing amounts and sponsoring the Blender Foundation because of that.
I do wonder about that: it was also criticized for its jarring 3D effects (arguably, some of it due to artistic choices, but clearly not all of it), to the point where it was hot-fixed while it was still shown in theaters. So it might not be the advertising you might want for Blender.
But it might also be the consequence of its flawed production process.
In a past lifetime I was a big user of Fusion. A limited version is free these days as part of DaVinci Resolve. I think the paid version is only $300. Quite a good deal for such a powerful tool. Fusion can be used alongside its friends After Effects, Nuke, and Flame.
They are a tiny company as well, yet they have employed kernel developers since the early 00s when Linux was far far from what we see today when it comes to software for artists. Keep in mind also that Wacom is more than just about artists, as they produce display/pen integrations for offices where customers need to read and sign agreements and tiny black and white LCD displays with pens for signatures.
All praise on my part, but it would have been nice if the blog post had used photos of people using their tablets with Linux rather than (at least to my eye) Windows. '^^
No disagreement here, but to me 1,000 employees is small when talking about a hardware company that as far as I know does a very large part of the manufacturing in-house. Also, I think you forgot to convert JPY to USD, at the time of writing that means dividing by ~120 which makes the numbers less insane.
High-end digital imagery has always had a use case for Linux because being open lets you bodge together custom software and hardware to create bespoke pipelines - Wacom just followed along with that demand as it picked up in the early years of desktop Linux, as a form of B2B client service.
Proprietary apps in this space still gained most of the market by aggregating the most common features, giving them some UX polish and shoving them into the education pipeline; the open alternatives have always been in the background, since they are, in fact, too professional, in the sense of "you have to make it your job to understand this thing", and didn't aim to make the common things particularly easy. That's only changed with time, gradual iteration and industry support to break the monopolies.
I think it's going to become more common. With Steam Deck and accompanying improvements to Proton, and Chrome OS supporting it too, the year of Linux on the desktop is starting to become less of a joke.
I worked as a Linux engineer at Disney Animation for 4.5 years and can attest to the great driver support Wacom has on Linux. We frequently tried new devices and always had support (user space tools were a different story).
Mostly because the animation/GFX industry doesn't care about FOSS religion and doesn't have any issue using vendor binaries or custom distributions.
Thanks to SGI they were deep into UNIX already.
Just like AAA studios are used to POSIX/UNIX flavoured OSes on Nintendo/Sony game consoles, macOS/iOS, Android, game servers, and that hasn't made them care more about people using GNU/Linux for their games.
Hell, I don’t care about the FOSS religion. FOSS is nice for the inherent benefit of just having the source code and being able to patch stuff, or pay someone else to, so I definitely like FOSS as a concept. But I don’t buy too much into the more preachy bits, myself.
The true difference isn’t whether they buy into Richard Stallman style FOSS evangelism, it’s more whether they give a shit about FOSS at all. To me, when I think Nintendo or Sony, I suspect Webkit and BSD are just convenient. They happen to be some of the best options around and you don’t have to pay a license fee for them. Win/win.
The animation and VFX industry seems likewise. UNIX workstations are just a norm, nobody cares much if the code is available.
In many ways, this makes sense. However, it’s also a mindset shift that happens over time. I don’t think anyone should buy too hard into the moralistic preaching, mainly because even if you truly believe it, it evidently doesn’t work very well to accomplish anything. What sells people on FOSS is seeing the benefits in action. As developers, it’s very easy for us to see the benefits of FOSS in action because we can modify and compile the source code to Inkscape if we wanted to. In some cases, the benefits have become self-evident in seeing collaboration between major entities and the effects of patron-based funding, as has been seen with Blender and Krita. In that way, the average person has been able to see how it can work well first-hand.
On the other hand, industry workers who use it because it's standard-issue may have a very different and more pessimistic view of it.
I gotta say that I can remember seeing Wacom support in GNOME and KDE settings apps since forever, it seems (2004? 2005? I'm getting old, I think). I've never used one of their devices, but the brand has lodged itself pretty firmly in my mind as the go-to artistic input method, just because of their Linux support.
As an everyday Wacom user who switched to Ubuntu last year, I absolutely love that I never have to download and install those suspiciously massive and slow Wacom drivers. Everything just works!
Wacom have had solid *nix support since even before that. In the mid-late 90s the company I worked for was using Wacom tablets on NeXTStep and Solaris workstations without any problem.
I have a Huion Kamvas Pro 16 that I struggle to get to work with Arch. There are multiple projects out there on Github that have slapdash python drivers with various degrees of functionality. The previous most functional one is no longer in development. I was unable to successfully get OpenTabletDriver working even though it is a supported device. Multiple monitors? Forget it. I had to run the pen display on its own to even have a chance.
Wacom stuff almost always just works.
I really don't want to pay triple the price for (in most cases) a less functional device than the competition that is Huion, XP-Pen, etc. Can't wait until the day some form of kernel driver is available.
Have you looked into the digimend project (digimend.github.io)? They have drivers for a bunch of non-Wacom tablets (including Huion) and provide tools to make it easier to write new ones.
I have a Wacom tablet. Just plugged it in and started using it. I have some colleagues who don't use Linux; they have had bad experiences. I even saw an article or two on HN about Wacom drivers phoning home.
If you aren't locked in by proprietary software and want to use a Wacom tablet, I strongly recommend using Linux.
Actually, if you aren't locked in by proprietary software and want to use a computer, I strongly recommend using Linux.
I use my Wacom in similar fashion: plug and play. Actually, this is my experience with printers and scanners as well. A huge difference with how I remember Windows, but then again things may have changed there as well. I mainly use it to sign paperwork :)
On Windows, consumer HP is also 'plug and play' as in 'Windows downloads and installs an HP UWP app for you that tries to sell you Instant Ink and happens to also install your scanner driver'.
My Wacom drawing pad was a bit of a headache for finding drivers on Windows. It makes me happy to hear they're improving that, because the tools are great.
The ReMarkable E-paper tablets use Wacom technologies (battery-free pen, etc) on a tiny Linux platform. The result is the best digital writing experience I've ever had.
It costs even more, because they force you to sign up for their cloud service to get its signature advertised features, like handwriting recognition.
$400, and then another $80 for the "marker", which does not have replaceable tips, unlike every other company that makes a pen-tablet device (Apple, Wacom, Samsung, etc.).
So the opex is about $100/year, plus however often you need to replace the 'pen'. That's hitting $800 TCO for three years.
Or you can buy a previous generation iPad Air and Apple Pencil and get a device that is far more capable, has replacement tips, a stylus that supports pressure and angle, etc.
It baffles me that people start frothing at the mouth about Apple "forcing" people to use iCloud when damn near nothing on the device requires a paid iCloud account and there's pretty decent integration with open standards based systems...
...but the ReMarkable is basically PDF reader without their cloud subscription and HNers just can't shut up about how awesome it is?
Oh, and then there's the shady-as-shit return policy. They basically don't have one...unless you buy the cloud service. Which you have to buy in a minimum 3 month block, and is non-refundable.
reMarkable generation one owner here, who at some point sang its praises. You are right that the whole cloud integration stinks and was introduced in a shady fashion. Yes, those of us who bought it early get the cloud service for free, but I have simply frozen my device a few versions prior to when it was introduced and will never again connect it to any WiFi. My personal pet peeve is the touch swiping feature that frankly 9/10 times activates when I do not want it to – introducing it was a regression. Needless to say, I am not buying another reMarkable despite loving the hardware and having been promised that the second generation is accessible via SSH.
It is not fair to compare it to an iPad though. The surface and pen have a feeling of resistance (in part thanks to the replaceable tips) that makes it feel like you are writing on a surface rather than sliding on a piece of glass.
The first generation has a budding pure Linux port [1] and is somewhat hackable [2], but I am holding off on any purchases or changes until the PineNote [3] becomes a reality, and hopefully my old reMarkable will survive until that point. Note taking on a digital device has become essential for me to scale with more people and projects in my work, so I need a device like this. The PineNote also rocks Wacom technology, for that matter. ;)
There are multiple choices for various non-glossy screen protectors for the iPads, some specifically designed for more natural writing/drawing feel.
On-device handwriting (and speech) recognition, and if it's a new enough device...on-device Siri.
Optional cellular connectivity.
GPS.
Top-notch high resolution color screen.
Probably the best device security in the entire marketplace... versus a tablet that is forced to sync with a cloud service located in a nation controlled by the Chinese Communist Party, which considers extensive economic and industrial espionage, as well as 'thought policing' and human rights abuses, to be something of a sport.
Massive accessory and app ecosystem, including most major cloud services.
Works with self-hosted services like Nextcloud.
Superb integration with other Apple devices, including functioning as an external monitor.
Fast charging.
Cameras and microphones for video recording, conferencing/calling, and biometric unlock.
Firstly, as I hope was clear from what I wrote previously and you are responding to, I am not defending the stupidity of the reMarkable cloud service and keep the WiFi off in the same way that I do with my Kindle. To me, fewer features and a “simple” device is not a negative but a positive as I want a device that only does note taking. If you are happy with the Apple ecosystem – like plenty of my colleagues are – there is nothing inherently wrong with an iPad as an option. It is however arguably less hackable and less supportive of FLOSS, which to me were things I cared about when I got my reMarkable about three years ago.
It's like you're trying to sell a family van to someone who wants a small electric vehicle for single-person 5 km daily trips. Yes, the van fits more people, but that's completely irrelevant because the need is different. reMarkables and similar devices (Boox) have e-ink screens, which are great for eye strain, and have a near paper-like feeling when writing. You get them for those reasons, not for their camera or whatever.
> It baffles me that people start frothing at the mouth about Apple "forcing" people to use iCloud when damn near nothing on the device requires a paid iCloud account and there's pretty decent integration with open standards based systems...
Mind elaborating on that? I just surrendered and started shelling out €4/month for extra GBs in iCloud because I couldn't find another way to easily back up iPhone/iPad photos over the air to my local Linux NAS and then to my cloud of choice. With Android I do it easily with Syncthing.
When I went to the store and added a remarkable 2 to my cart, they tried to upsell me on a more expensive "marker" and a breathtakingly expensive premium case, but not tips...when they didn't try to cram at least one package of those down my throat as well, I just assumed that's because they didn't have replaceable tips.
Clearly someone in their marketing department needs to be disciplined for this missed opportunity.
$130 for a cover! That's $30 more than a fucking Logitech case with keyboard for an iPad. Fool, money, parted, etc.
It's an awesome PDF reader too. I just got one and it's nice, large, a pleasure to handle. I opted out of their cloud though, I don't need any of the functionalities.
And let's not forget to mention that their OCR is borderline useless. This does not render the whole device useless though, but if you buy it for that feature, then better don't.
Wouldn't recommend buying it anymore, got a ReMarkable2 myself, but the cloud subscription stuff was just not cool. But at least you can connect it via USB to your desktop and transfer your drawings directly, instead of getting them mailed.
Weird then that HN doesn't jizz its pants over Boox devices. Android OS, backlight (remarkable doesn't have a backlight), even color screens available.
I use a Xencelabs tablet - this is the premium brand of Hanvon Ugee. While I haven't researched the topic in full depth, Ugee operates a number of different tablet brands (the best-known in the West being XP-Pen) but apparently has them working in independent groups, so the products and experiences differ substantially. The Xencelabs team was built out of former Wacom team members, so it's a cut above just on pedigree, and their first products have followed through in my and most reviewers' estimation. I can say for a fact that the Linux support is actually pretty great.
Wacom's downside is simply the "monopoly curse" - they had the patents to EMR technology, so they had no real competition for a few decades. Now that the original patents are expired the competition has appeared, and Wacom has stepped up and made some good product releases lately.
I work at a VFX company where all production is done in Linux and many of my colleagues prefer a Wacom. It's especially important for texture artists who actually have to paint textures. I mainly use Houdini and edit code so I don't use a Wacom, but we have had them at the company for many years and they do indeed work very well in Linux! (And now I know why!)
What's the situation with Wayland? The post mentions X driver, but I suppose Wayland compositors need to handle styluses and drawing tablets in some different way?
> A tablet should work as long as both the system compositor and the application you are using both support the Wayland Tablet Protocol.
> The Wayland Tablet Protocol is supported by the most common compositors in Linux: Mutter (GNOME), KWin (KDE Plasma), and compositors based on wlroots (e.g. Sway).
Oh, but it gets better. When the furor over the app usage collection hit, they disabled the data collection via remote server 'kill switch' (the driver hits an XML file.)
About a month later, after the furor died down - they flipped it right back on.
I use a Cintiq 13HD on GNU/Linux with the Wacom driver, and I can report that it does not phone home in any way.
I use OpenSnitch to explicitly whitelist (or reject) any app that wants to access the network, and nothing related ever popped up. Additionally, I run my own DNS resolver on a Raspberry Pi to monitor the DNS request logs of all my devices, and I can confirm it doesn't log anything when I plug in my tablet and start using it (this doesn't help if the software tries a direct connection by IP).
Additionally, I've read the code of the driver, since it's FLOSS, and didn't see anything suspect in it. I can confirm this is actually the code I'm using, because I'm running Gentoo and I inspected directly the tarball used by my package manager to build the driver.
So, obviously, I could still have missed something, but it looks like solid enough evidence for me to trust the software (at least its current version). If it tries funny business in the future, I'll see it pop up anyway (plus, I block all requests to google-analytics). My bet: Wacom put those "features" only in the Windows/macOS drivers, presuming users of those OSes don't care about privacy, or at least not as much. This doesn't make their behavior acceptable, obviously.
EDIT: I realize there are actually two parts to the driver, an X input driver and the kernel driver. The one I checked was the X input one. I've just checked the sources of my kernel (the ones actually used to build it), and there is no issue there either.
To me, because the world is too complex to allow us to deal in absolutes – especially when dealing with corporations. Wacom deserves to have their name shamed when it comes to spying on their customers, but they equally well deserve praise for their Linux kernel drivers and libraries. As far as I can see there is no perfect option out there, so for now I (as a “FLOSS person”) have no choice but to settle with an imperfect alternative (or world for that matter) and praise the good parts and shame the bad while hoping for things to gradually improve.
Not sure whether it's any consolation but the drivers do work when there's no internet connection (like on our intranet at work). Presumably this means it shouldn't be hard to prevent them phoning home even when there is a connection available? It's surprising to me that there aren't forks with this aspect removed.. maybe there are and I just haven't looked far enough?
To be clear, if I recall correctly the spyware was in the package of (usually rubbish) software that came with the drivers for Windows (and maybe also macOS?). The Linux drivers are upstreamed in the official source tree and there is no way in hell you would be able to do that with spyware.
That is only in the Windows driver (which I don't think is FOSS) and just one checkbox to disable/uncheck on install.
Also (probably going to get crucified for this), what is the big deal? It is not like it is sharing any sensitive or personally identifiable information and it asks when you install the driver whether you are okay with that. So it also was never hidden, people just don't read.
I would also prefer an unchecked instead of checked box by default, but this seems to be blown way out of proportion.
If only XP-Pen tablets had touch functionality. :)
After using a smaller sized Wacom (with touch functionality) for ages, I wanted to move to a bigger tablet. The XP-Pen tablets are much cheaper than Wacom, but none of them (nor the other clones/alternatives) seem to have touch functionality.
In my case, I ended up getting a good medium sized Wacom 2nd hand on eBay due to the previous owner having lost their pen (I had one anyway).
But having touch support in XP-Pen/Huion/etc would have made them possibilities too. :)
Never thought I'd see the day when Wacom would use their Linux support as a selling point. Then again I recently played my favorite game (Rez) on my favorite desktop OS (Linux) so wonders never cease. 25-year-old me would be so stoked.
> Never thought I'd see the day when Wacom would use their Linux support as a selling point.
I never thought any consumer-grade product would, lol. Also we recently got Steam Deck out in the market, so maybe Linux is really becoming a desktop OS...?
Every year can be the year of the Linux Desktop if we believe hard enough!
Jokes aside, it's in a really good place. With everything being web-based now, Flatpak being a de facto "runs anywhere" target for proprietary apps, community-made wrappers for most of said apps that don't have official builds, distros like *buntu [0] and Fedora that are easy to install and use, Valve pushing Proton, and Windows pushing away enthusiasts, it could actually happen.
Even better would be a company selling chromebook-grade hardware with one of the above-mentioned distros preinstalled in physical stores. I wish System76 would be able to do that, but I think it's a ways out from being feasible still.
[0] Ironically, I don't actually include "Ubuntu" proper in this. Snap is a dealbreaker for me. Mint and Pop are my favorite *buntus, followed by Ubuntu Server with Snap removed and a DE installed.
I get that it's a meme at this point, but I don't know why anybody thought there'd be a year of the Linux desktop. It was always going to be very gradual.
I was surprised when I installed Linux: without doing anything, a Wacom tab was there in the system settings and the tablet just worked. Same with my printer - it just works straight away, unlike on Windows, where I still have not printed in color; my Epson printer just refuses to do it.
My Intuos 4 still doesn't have really good OLED and wheel support but it's got a flawless pen interface. I'd just love to be able to easily zoom and rotate the canvas in an incremental way. It might be a question for the Krita developers though.
I have an XP-Pen 22 from 5+ years ago. It is still going strong. It was ~$500 back then and I opted for the one that had buttons on the side. My primary OS was Ubuntu, and the XP-Pen 22 did not have Linux drivers for the buttons. The tablet and drawing were spectacular, but I paid extra for the buttons, so I ended up dual booting with Windows to get them to work. The exact model after mine had Linux support for the buttons; I was just a year too early. The only other complaint I have is that even in Windows, I cannot get it to dual screen no matter what. The tablet and my other monitors will split the view, but I cannot get the inputs to recognize just the screen my tablet is on: if I put my pen to the tablet, the cursor will shoot way over as if all the monitors were one giant screen. I have to think they have improved on this, and for the money saved, I will buy another XP-Pen someday. I am an amateur artist and a strong software developer (not designer), so you have to take my review with a grain of engineering salt. To me, Wacom is a solid brand for professionals. However, they make me think of Macs with their pricing.
I've used a Wacom tablet with Fedora for a while and it works really well! I find a lot of value in drawing while programming to work out problems, and being able to easily put those drawings then into Obsidian is just great.
It seems that the news from Wacom applies to both pen displays and tablets, but since they only show photos of pen displays, I was wondering — under what circumstances would you choose a pen display over a tablet?
The tablet I'm using (Huion HS611 on Mac) only serves as a tool for diagrams and sketching ideas, so laying it flat over my desk would require me to bend over or lean over to use it (instead of sitting straight and looking at the screen while using a non-display tablet).
You want to look at what you're drawing/writing. I like pen displays better because I could never get over the disconnect of drawing on the tablet while everything happens on the monitor. You also have to make sure that your tablet has the same aspect ratio as your monitor; for a little while I had a 16:9 tablet with a 16:10 monitor and it was juuust off, and it took me a bit to figure out why. Pen displays, being the display, don't have that problem. It's also 1:1, which I prefer.
Yes, I get the disconnect-feeling argument, but I'm not sure if it is a learning curve thing or something more fundamental for professionals in specific fields.
As for the ratio, that's configurable in both Wacom and the tablet I'm using (but, yes, when not set to match it can give that uncanny valley feeling about my own hand)
I imagine it's probably mostly learning curve, but having tried for at least a few dozen hours to get over it I'm not sure the pain is worth the possible gain especially when pen displays are pretty cheap these days. For some workflows perhaps it's easier or not as needed to adjust. If I was just retouching photos as opposed to drawing I don't think I'd be as bothered.
The pen displays include a stand to put them at an angle. In essence this is the same posture issue that you face when writing or drawing on paper. Some professional desks could be angled upwards for that reason. It's practical with pen and paper, but less so with computer equipment on the desk, so that feature disappeared.
The posture you'd want when writing on paper is that of a flat desk that allows your elbow to rest still so that your palm and fingers have full control of the pen.
This is (I only guess) not the case for artists _painting_ on a real canvas using broad/long strokes, and for that kind of usage I understand the benefits of an angled pen display.
I was surprised how well my Wacom Cintiq 16 works with Fedora 35. Unfortunately Adobe and Linux are not compatible, and I cannot find a professional replacement for Photoshop, Lightroom, InDesign, and Illustrator. It would be OK to replace Premiere, After Effects, and Audition with DaVinci Resolve.
Wacom support on Linux has been good for over a decade, but not so good on a wide screen.
I got a 4k widescreen monitor and I still can't use my Wacom very well. The aspect ratios are completely off. I tried manually setting it but often these settings get lost after an update.
But my experience from the past has always been plug and play.
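For anyone fighting the same aspect-ratio mismatch on X11: one common fix is to letterbox the tablet's active area so it matches the monitor's ratio. A rough sketch of the arithmetic, with made-up resolutions (tablet units are whatever `xsetwacom --get <device> Area` reports on your hardware):

```python
def letterbox_area(tablet_w, tablet_h, screen_w, screen_h):
    """Shrink the tablet's active area so it matches the screen's
    aspect ratio, keeping the pen-to-cursor mapping distortion-free."""
    tablet_ratio = tablet_w / tablet_h
    screen_ratio = screen_w / screen_h
    if screen_ratio > tablet_ratio:
        # Screen is wider than the tablet: trim the tablet's height.
        return tablet_w, round(tablet_w / screen_ratio)
    # Screen is taller (or equal): trim the tablet's width.
    return round(tablet_h * screen_ratio), tablet_h

# Example: a 16:10 tablet area mapped onto a 16:9 4K monitor.
w, h = letterbox_area(21600, 13500, 3840, 2160)
print(w, h)
```

With those numbers you would then run something like `xsetwacom --set "<your device> stylus" Area 0 0 21600 12150` (use `xsetwacom --list devices` for the exact name); putting that command in a startup script is one way to keep the setting from being lost after updates.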
I have a Wacom tablet and pen (Intuos 4) and it always worked perfectly on Linux.
Since I use 2 monitors, years ago confining the pointer to one monitor was a matter of 1 command in the command line, but for some time now KDE has let me confine the pointer in the settings GUI, so even that inconvenience went away.
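For the curious, the one-liner in question is typically xsetwacom's MapToOutput setting. A tiny dry-run sketch that just builds the invocation (the device and output names below are illustrative; `xsetwacom --list devices` and `xrandr` show your real ones):

```python
import shlex

def map_to_output_cmd(device, output):
    """Build the xsetwacom command that confines a tablet's pointer
    to a single monitor (run it in a shell, or via subprocess)."""
    return f"xsetwacom --set {shlex.quote(device)} MapToOutput {shlex.quote(output)}"

cmd = map_to_output_cmd("Wacom Intuos4 6x9 Pen stylus", "HDMI-1")
print(cmd)
```

shlex.quote keeps the multi-word device names Wacom tablets expose from being split by the shell.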
XP-Pen. The support is great. I bought an Artist 12 for my son, fearing some malfunction on Linux, but everything works really well. Or better: there's real configurable software where you can set everything about the tablet and pen. Really really cool.
Didn't know Wacom was supporting it; I kinda just assumed these were community-made drivers. Either way, I've spent a lot of hours with Wacom on Linux (playing osu!). Also nice that it doesn't have the shitty telemetry that the Windows driver has.
I wonder whether the problems with Windows Ink have been a driving force behind this move. I think anyone that has used Krita on Windows has run into issues related to Windows Ink while using a Wacom tablet.
It's really nice that it works so well most of the time, but I tried fixing an annoyance with the wacom kernel driver once and I gave up because that driver is a mess internally.
Circa 2000 my wife and I were working on a book about doing graphics work with Linux (never got published) and Wacom was gracious about sending us free hardware for testing.