> (Unlike say LLMs where GPT-4 is clearly dominating for now.)
A lot of this comes from people comparing GPT-4 to e.g. LLaMA-7B, because that's the thing that fits in memory on their laptop. Whereas you can run LLaMA-65B, and it's dramatically better, but it uses about 128GB of RAM and the hardware needed to run it fast is expensive.
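The ~128 GB figure above checks out with back-of-the-envelope arithmetic. A quick sketch (rough estimate only, ignoring activations and KV cache):

```python
# One fp16 weight is 2 bytes, so a 65B-parameter model needs roughly
# 130 GB of RAM just for the weights, matching the ~128 GB figure above.
params = 65e9
bytes_per_weight_fp16 = 2

weights_gb = params * bytes_per_weight_fp16 / 1e9
print(round(weights_gb))  # 130
```

Quantization changes the picture: int8 halves this to ~65 GB, and 4-bit quantization brings it to ~33 GB, which is why quantized 65B models can run on much more modest hardware.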
And GPT-4 has even more parameters than that, but that's not a matter of the tooling, it's that someone needs to release a public model with more parameters.
That's part of the point though. I get better results from stable diffusion on my PC than out of DALL-E 2. (I still have some credits there, but little reason to use them.)
I can't do that with LLaMA-65B. (Although to be fair 128 GB RAM is not that much.) But I suspect it's still far less capable than GPT-4, is it not?
It depends what you're trying to get it to do. There are some prompts where the expected output is a piece of code or a paragraph containing some information, and once you reach the threshold where the code works or the information is correct, there isn't a lot of "better" left to get.
Then there are ones where more parameters matters.
Conversely, LLaMA doesn't say "I'm sorry Dave, I'm afraid I can't do that."
It's a popular GUI for stable diffusion models with many extensions. Like the sibling comment points out, everyone calls it that because that's the handle of the original maintainer. (As in: Which web ui? Auto11's.)
It would be cool to see that become a GIMP plugin next, I think that would be more of a direct alternative to the workflow using generative fill within photoshop.
"....dropped...." in the context of new albums/music being released means "made available", however in the context of software features (or general english vernacular) it means "removed".
Examples:
"When Weird Al's song 'White and Nerdy' dropped, I stopped everything so I could listen to it."
"In the latest news, Microsoft has dropped the ability to log on locally to your PC. All logins require internet connectivity and a MS Account."
This isn't a gripe about HN 'headline' rules; it's a complaint about a use/misuse of slang. Oh get off my lawn too please.
Haven't looked up the etymology, but I've usually heard it in reference to the recording industry. I assume it either refers to dropping a record on a platter / needle onto a record, or "dropping off" new releases at a record store.
I read "dropped" as some feature they had and removed 168 hours ago causing fallout and more anger with their subscription model, a bit overloaded of a term I guess.
Same. I saw a reddit post last night about someone complaining that the generative fill wasn't working on their machine.... so when I saw this headline, I first thought that they'd rolled it back in a sloppy way.
The work done by the Photoshop devs is extraordinary. The artistry by some of the people creating these illustrations is similarly excellent.
These bullshit copy-paste threads from AI "influencers" and devrel hacks are a scourge - bandwagon "content" from people who produce nothing of value themselves other than tricking people into buying what I can only assume is $500 video courses repackaging 6 month old blog articles from someone else.
There are courses about "how to go viral on Twitter" that work, because the algo is so cookie-cutter and easily gamed. Back in 2022, it was the "collect list of useful links, add 5 of them to a tweet at a time, and make it a viral thread with that annoying finger pointing down emoji"
I see lots of theft too. Even on LinkedIn, under their own name, they just post someone else's AI montage videos to score a like or two for their own worthless selves. People are nuts.
Not to be all hipster, but outpainting was available in Dall-E in April 2022. Impressive, yes, but not really all that novel. I did this a year ago: https://www.artstyle.ai/uncropping-movie-posters/
Cool stuff. Did the OP think this was the best list aggregating uses of Generative Fill? Or is this kind of a hit at the AI "influencers" jumping on everything that is released these days? Or is there a better list? I see some links showing up in comments here.
After seeing things go from 0 to Stable Diffusion and beyond, the fact that GenAI can do x, y, z seems like a given eventuality.
I remember when CS came out with the magnetic lasso tool; overnight, people could save hours and hours of time in their work. This reminds me of that.
Waiting for some camera company to release a camera that provides the original image hash that is signed by an embedded cert on the camera. I assume going forward, that'd be one of the only ways to use photos as evidence. That there'd be a proof of originality.
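The scheme described above can be sketched in a few lines. This is a toy illustration only: a real camera would use an asymmetric signature backed by a certificate chain in a secure element, whereas here an HMAC with a hypothetical device secret stands in for that, and all names are illustrative.

```python
import hashlib
import hmac

# Hypothetical per-device key; in a real design this would be a private key
# burned into the camera's secure element, never exported.
DEVICE_SECRET = b"burned-into-secure-element"

def camera_sign(image_bytes: bytes) -> tuple[str, str]:
    """Return (sha256_hex, signature_hex) as the camera would emit them."""
    digest = hashlib.sha256(image_bytes).hexdigest()
    sig = hmac.new(DEVICE_SECRET, digest.encode(), hashlib.sha256).hexdigest()
    return digest, sig

def verify_original(image_bytes: bytes, digest: str, sig: str) -> bool:
    """Check that the hash matches the file AND that the camera signed it."""
    if hashlib.sha256(image_bytes).hexdigest() != digest:
        return False  # pixels were modified after capture
    expected = hmac.new(DEVICE_SECRET, digest.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, sig)

raw = b"\x00\x01raw sensor dump"
d, s = camera_sign(raw)
print(verify_original(raw, d, s))            # True: untouched original
print(verify_original(raw + b"edit", d, s))  # False: image was altered
```

Note this only proves the file is bit-identical to what the sensor captured; it says nothing about whether the scene in front of the sensor was itself staged or replayed, which is exactly the attack the reply below points out.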
Until you find a way to emulate the photo sensor itself and feed it pre-rendered AI garbage that it thinks is live footage.
A tangent, but I think journalism is going to start to actually mean something. Historically I think it's been mostly problematic bullshit, and especially in the past few decades it's pure propaganda, greed, and profit. But in an age where we can't trust literally anything we see, it's the reputation of specific humans we'll trust, and we'll have to trust that if they say they saw something, it was real. We'll start to see "real" journalism.
I agree with your point and would add that it's been this way for decades prior to AI. Many are focusing on the fact that AI is recently making it easier, cheaper and faster to do hard-to-detect, high-fidelity photo/video manipulation. However, another equally important change from AI is widespread awareness that such manipulation is not only possible but relatively easy.
Previously, such manipulation was possible via manual methods requiring time and unique skill which limited it to highly motivated, organized adversaries and it was much more useful because most people weren't aware it could be done. IMHO, all the AI examples we're now seeing bemoaned in the media as signs of future manipulation are, in fact, rapidly diminishing the effective value of such deception.
Nikon and Canon both have "image authentication system" features; unfortunately the easiest to find articles are about them being broken... back in 2011.
Likely would have to be a depth camera. iPhone is in a good position to do this with the rear-facing Lidar and front facing IR projector, plus onboard secure element chips.
Few people have screens as high resolution as even a mediocre phone camera.
Before someone brings up future screen tech, there are additional reasons that's not a trivial workaround. There's also a greater benefit to high-resolution cameras with a wide field of view, since the user may want to zoom in on the physical scene, vs. a screen where the zooming happens in software.
Photos will be used as evidence in court in the same way they are used now. If you don't stipulate to the authenticity of the photo then the photographer must come to court and state under penalty of perjury that the photo is an original that they took and that it has not been modified.
Of course, nothing stops anyone from lying under oath. Police do it every day.
Midjourney can create very polished output, but imo generative AI art really shines in combination with ControlNet, where you can exert a lot of influence over the generative output.
Is it based on Midjourney? So you need an internet connection and you're sending potentially highly encumbered/lock and key assets to a remote server and third party?
Oh yeah, can't see how this is going to backfire at all...
Adobe has no moat on creative software anymore. As impressive as their demos are, I'm actually less bullish on their future than ever before.
The open source community is doing more with GenAI and they're doing it better. If Gimp weren't in such bad shape, Photoshop would be over. This exact feature would already be ready to go and would be far superior to Adobe's version.
Any startup with minimal effort and capital will be able to duplicate this product. They'll eat Adobe's thick margins away.
I have to keep saying this on HN: moats have only a little to do with technical superiority. For the most part, moats rely on things like brand, UX, integrations, compatibility, business arrangements, past familiarity, etc. It's very rare for technical superiority by itself to override all that. Even if Gimp could match Photoshop technically, it would still miss all the plugins and UX. So OSS often doesn't manage to overtake paid offerings; it depends on special people putting in the extra effort.
That said, OSS is in way better shape in the image space than in the LLM space. There are actually products and installers based on SD out there, something that won't happen in the LLM space for a while.
Look where Blender has gone in 15 years; it's a real alternative to other programs in its segment. GIMP is a failure in that sense: never working with or listening to its users, never understanding its userbase, and having the wrong kind of developer attitude.
Even Godot is going with that playbook like Blender and doing what GIMP never did. It’s a shame imo.
Hats off to the Blender team. From a piece of software so obtuse that the right mouse button was the select button, they really just sat down a few years ago and started addressing new-user complaints instead of attacking them like a lot of open source projects do. They started seriously innovating, fixing the interface, and pushing things forward, and now it's truly paid off: they're a major competitor and the next generation is all learning on Blender.
Imagine a world where GIMP and other open source creative tools had acted this way instead of the status quo of "Patches welcome" and "NOFIX"; the attitude in the 21-year-old "Adjustment Layer" feature request comes to mind...
Gimp is awful but I don't think Krita is the replacement. It seems great for drawing but 90% of my Photoshop use is image editing and Krita seemed to lack a lot of features in that department. Or I just have trouble navigating their UI.
I am not a fan of Adobe, but nobody has been eating any margins or market share from Adobe, really (C1 aside). Skylum tried, but guess what: having a half-baked, barely QAed product that you can't rely on is not competing with Adobe. Adobe's photography products are full-fledged and, at 10 bucks a month, not even expensive.
But thinking you can fight Adobe's photography setup (Lightroom, ACR, Photoshop, Bridge, etc.) with minimal effort and capital to gain users for 6 bucks a month is delusional. But hey, if it happens, awesome.
Affinity is making slow but sure inroads. I know lots of people who use it now vs. almost no one a few years ago. They stand the best chance of eating some of the cake.
Yes, though it's an open secret their development teams are barely functional and have management that can't manage much of anything. Even their forum staff are snippy and rude, discouraging people from reporting bugs etc. Things have stabilized over time, but it used to be the case that both Designer and Photo were absolute minefields to use - opening certain panes would crash, saves would corrupt, etc.
>The open source community is doing more with GenAI and they're doing it better.
Stable Diffusion is extremely cool and is a great project, but it does not compare favourably to Midjourney[1] or Adobe's offering. On the LLM side, StableLM is terrible compared to competitors. Some other "open source" models aren't open source at all, but instead were the "this doesn't compete so let's just dump it in the open" reactions.
Add that Adobe has a moat of legally licensed content that they used to train their model, which might turn into a massive differentiator if penalties fall hard on competitors.
[1] - A lot of people speculated that Midjourney used Stable Diffusion, but the products have veered so completely away from each other, the former just dramatically better for almost any prompt, that if they started with it they have made enormous improvements.
Midjourney has incredible default output, but the amount of control you can exert on the generation is extremely limited. Can't influence composition, pose, colour scheme (only to a degree), not inpainting, no outpainting, no API, no local operation, the list goes on.
So far nothing from Gimp is good, sorry. I have yet to see an impressive AI demo in Gimp. They might have done it first or whatever, but none of it looks decent.
I don't think this is true. Imagine a piece of software you have been using 15 years. Sure it has its problems, and there may be ways to work around those problems, but you are familiar with those problems. Switching to something new may fix those problems, but bring an entirely new set of their own.
> and you're sending potentially highly encumbered/lock and key assets to a remote server and third party?
As the joke says:
Patient: Doctor! Doctor! When I push here it hurts.
Doctor: Well, then don't do it!
If you are working with potentially highly encumbered / lock and key assets then keep them on your computer. If you are worried that your employees might leak them, then block the connection with a firewall.
That's easier said than done, though. Adobe is free to change how the connection is made which means that sysadmins would have to set up elaborate configurations just to continually check how it's done, and especially in shops where the primary focus isn't Photoshop, but a mix of tooling, that just creates more hell on top of the existing amount of hell sysadmins have to deal with.
I never get this criticism. When the internet and computers first rolled out, everybody realized the security dangers of networked devices. Paper and pen would always be safer, since you could lock them up where no one could access them.
Eventually, the cost benefit shifted greatly towards being less safe because of how much faster people can be. Anybody using AI to generate images is going to be way faster than someone not. Any company selling generative AI is going to be miles ahead of open source alternatives.
Anyone not embracing or figuring out how to bend the rules around assets is going to get smoked in a business.
I'm not sure they needed to wait until there was some enterprise data solution before enabling it. It looks like a useful feature and if someone has assets that they're fine using it with, do they really need to wait until some enterprise solution is built to handle the other cases? Is the argument that the feature is so useful that people will send enterprise data/assets even after being told they shouldn't and that the feature shouldn't ship until those people are protected from themselves?
> potentially highly encumbered/lock and key assets
I'm not sure what this phrase means, can you explain?
Given the demos in the thread, I'm seeing a lot of 'production line' graphics being done a lot faster and easier with this feature. I'm not seeing a problem, but I'll eat my words if this will backfire like you said.
(I won't eat shoes or anything though, I'm not that confident!)
Meaning, I have a client that has a very, very secret project worth millions to my design company. One of my junior employees working on the project uses this feature. The client's assets get sent up to a remote server, which is either breached, MITM'd (proxied, etc.), or otherwise compromised. Now that project is considered leaked to the public, perhaps literally so.
Junior loses job, company loses millions, potentially the client relationship, client loses perhaps even more. Everyone loses. All because it's not clear this feature sends your project files to the remote server.
A fun behavioral psychology test you can use generative fill for: create a dating profile filled with photos of you in exotic places, surrounded by hot people, and living your best life. Say you're looking for your special queen/king to complete your kingdom.
$20 says your inbox (man or woman) will be flooded.
Unless the cost of generating plummets - and I think we'll see a 10x decrease in that in the coming year or so - it'll probably be a credits system. I think this feature has the potential to quickly become Adobe's primary money maker. A lot of companies are suddenly finding themselves with a lot of money in the current hype, e.g. Midjourney, OpenAI, etc.
An Adobe employee wrote on Reddit that they plan to allow an unspecified limited number of generations per month, included in the current subscription (even if your subscription is annual). Beyond that there would be an extra fee for more credits, which might or might not roll over to the next month.
Unfortunately, the posts have now been deleted, but you can still see other people's reactions:
"Dropped" in the context of software means that something has been abandoned or removed; "dropped" in the context of music or fashion means that something has been released or published. This twitter post says that "Photoshop dropped the new 'Generative Fill' feature", using the word in its music/fashion sense to mean that Photoshop has gained a new feature; were Adobe to remove (or "drop") that feature someday, one could post the same announcement again verbatim, making use of the word's opposite meaning.
I know nothing about generative fill, but several of the examples look like cropped photos. Could they be cropped photos with the original in the dataset, making them far less jaw dropping?
Interesting stuff, it's weird how sometimes it nails the tone and context but sometimes fails completely.
With the Beatles it generates a psychedelic scene and an actual yellow submarine from what is really a very mundane street scene, indicating the model has 'knowledge' of who the Beatles are and what their aesthetic is.
With Master of Puppets, it utterly fails to determine the tone or even content of the scene.
Considering MoP isn't exactly an obscure record, it bodes ill for the future if generative art biases towards hyper-popular pop culture because it cannot represent less mainstream art in a convincing way.
(IANAL) Given the US Copyright Office's guidance on the lack of protection for AI-generated art, doesn't this tool harm the ability of artists/photographers/agencies to protect their works? Will people start blanket stealing content using the "It was AI generated." defense?
Lack of copyright for "AI generated art" applies to fully AI generated art, ie, give pure text, get image. Merely using AI as one part of an overall human done composition or work, like to help fill in some cropping or backgrounds, isn't going to then negate everything else. So no if anything use in this kind of context would be one of the likely copyright preserving ones (unless it was abused to the point of legally negligible artist involvement).
Granted sure, if there was a serious lawsuit I'm sure going forward defense lawyers would do discovery to try to see if it was AI generated or not. But if it wasn't, then it'd be pretty trivial to show the original Photoshop (or whatever) files and where, if anywhere, AI was used and that it was just one part of a human created composition and that'd be that.
(This is not a reply to the current comment, I just want to make sure you see this, so I'm replying to the most recent thing you posted.)
Could you please stop posting unsubstantive comments and flamebait? You've unfortunately been doing it repeatedly. It's not what this site is for, and destroys what it is for.
As I mentioned, my post was not about your specific comment there but rather to your account's overall pattern of breaking the site guidelines. I chose to reply to your most recent comment only because that would be the most likely place you would see it.
If you're asking whether https://news.ycombinator.com/item?id=36146358 broke the site guidelines, I guess it was a borderline case. It wasn't very substantive but I don't think it was a swipe. It would have been better if you had explained what you meant more.
You should really try this out on your own before falling for the doom and gloom. It seems to work well on specific things, I guess, but so far it struggles with perspective, and it just inserts silhouettes too. Maybe I'm terrible at prompt creation or don't understand the feature.
I've been playing around with it. I think it's neat, and it's impressive from a tech perspective, but I'm not terribly impressed with it from a content generation standpoint. I'm sure it will get better.
I believe that words mean what people decide that they mean, and those meanings can change, but my perception is that this usage of "drop" has gained a lot more momentum recently.
> Built on Adobe’s best — powered by Adobe Firefly: Create confidently, knowing that Generative Fill is powered by Adobe Firefly, the new family of creative, generative AI models designed to be commercially safe — ensuring you can push the bounds of your creativity confidently. Firefly is trained on Adobe Stock’s hundreds of millions of professional-grade, licensed, high-resolution images that are among the highest quality in the market. This helps ensure Firefly won’t generate content based on other people’s work, brands, or intellectual property.
Any artist jumping on this now that Adobe provides this tech (tech that has been freely available for months now) deserves what's coming to them.
I PROMISE you, once Adobe has market saturation on this, and people have embedded it in their workflows to the point that working without it feels painful, they're going to slap a $40-$100 a month price tag on it on top of Creative Cloud. (It replaces literal days of work and literal team members; they can charge what they want.)
Artists once again, never learning from the past and handing over their future to an abusive company that seeks rent on their ability to earn.
All you had to do is install any of the countless Stable Diffusion UIs and you'd have this tech free forever, some of them even integrate into Photoshop.
No sympathy for anyone who chooses this path when we finally had an opening to harm the Adobe monopoly.
Yep, it's easy... just download a wrapper app from GitHub, download the models, set up the configuration, realize you're missing libraries, install a specific version of each, etc.
Your average user doesn't understand how to set up and leverage open source software because it's too complicated to get going.
They want to open Photoshop and use the feature in the tool they already use, without setting up anything.
No offense but you sort of prove the parent’s point. People who work on computers professionally but don’t use open source because “it’s too complicated to get going” are making an error. Adobe’s UX being addictive is the point.
Many products have been successful not because they bring new features to the market but because they make existing features trivial to use. UX is everything.
It really is. Arguably the most famous example among HN readers was BrandonM's comment[0] on Dropbox's launch thread. Was he correct that you could build it trivially using 2007 software? Sure, if you had the correct mix of specialized knowledge of the software and networking. Dropbox was not a novel idea, but that's not where its value was; the value proposition relied 100% on its UX.
If MSFT and Apple had each shipped a Dropbox competitor that was highly integrated and available, and had pushed their flagship software suites to adopt it within a year, would Dropbox have become as large as it did? I would guess not. It would likely have been a downgrade from what Dropbox offered (the separate ecosystems staying bifurcated comes to mind), but a zero-install solution would be the only thing that could make it easier.
No offense taken, I'm simply pointing out that the user experience of setting up and leveraging open source is a challenge to wider adoption.
I expect most of us (hackers) have tried these tools, but not your average user. When I see comments like OP, it's coming from the hacker perspective imo.
The user is not making an error, the error is assuming everyone should be a hacker.
The user is definitively making an error here. Especially when the user is actually not a user but a producer (supposedly artists). As an artist you should strive to have a certain level of ownership and independence.
It gets worse: there was a thread a couple weeks ago about Envato (ex ThemeForest) asking all their authors for "free/unlimited" rights to everything they have, to be used to train AI models for NO compensation. It was eye-opening how many "artists" and "producers" accept the status quo as is and think it's still a good idea to collect whatever this company throws their way.
It's really hard to feel sorry for these people...
How many work hours would it take for the average professional artist to get to this level of technical proficiency? Given the opportunity cost I'm sure you could argue it would be a net loss for the employer.
Employers don't need their graphic artists to be able to use git and navigate Python dependencies. That's why they have IT.
The mistake companies make is in not fully embracing this.
Alright, once you get it running the UI for generative fill can be used by a graphic artist, and now you're several months ahead of your competitors, but still the UI is a little rough. But it's open source, so you have your techs make it a little better.
Now your company needs to make it easy to submit a pull request, or you're suddenly maintaining your own fork, which you're going to want out of as soon as you have an alternative, and soon your company is paying $$$$ to Adobe again.
I work with a lot of people who don't care. They have a job with a deadline, for which they already gained a lot of knowledge they need to keep updated. So no, the supplementary friction is not acceptable, even for a "good" reason.
What type of "work on computers professionally" are we talking about? Working on a computer as a professional artist or writer is much different than working on a computer as a programmer/hacker professionally. For the first, the computer is just a tool like a pencil or a paintbrush. I wouldn't expect them to know how to figure out how to use things unless it has a very very simple UX.
It's not addictive; it's cutting edge and has a price tag for that. Open source will follow up with the cheap entry point for the nerds. If you don't like that, then you need to throw proper money at your favourite GitHub accounts, because that is the fundamental difference. Adobe is funding high-end tech solutions by selling a product. If you have another way to achieve that, do tell.
Too many college-canteen Marxists in this place is my thinking.
From an economic standpoint, this makes sense. If an artist is bringing in 4k a month, then spending 40 a month for software maintenance and updates is a rounding error. This also opens up the third-party market to sell plugins if open source doesn't work for them. All in all, this is win-win for everyone, and more options equates to more freedom.
That is such a strange thing to read. I'm not disputing that it's true in this case, but basically every artist complains about what a clusterfuck the UI/UX is, while being entirely happy that the UI is horrible enough that intimate knowledge of it keeps them in a job.
I'm not saying it's easy; it's Python, so it's utter hell at times. But I'm saying it's worth the effort to have something free forever, and easier UIs for it are coming.
Agree with you there, if these tools become 1 click install and go they could get wider adoption. However in the B2B space, they also need parity with commercial tools like Photoshop, Figma etc. It's an uphill battle for an open source project with no profits going up against a profit driven big tech company.
If you're a professional artist who's invested years of effort in learning Photoshop and it replaces "literal days of work", or you run a company and it replaces "literal team members", $40-$100 a month is cheap.
If not you can use something cheaper - or use one of the 3rd party plugins. If the workflows become popular, the 3rd party plugins will get better.
> If you're a professional artist who's invested years of effort in learning Photoshop and it replaces "literal days of work", or you run a company and it replaces "literal team members", $40-$100 a month is cheap.
No kidding. I have only negative things to say about Adobe, but that is a terrible argument. $100 a month to replace multiple $100k/year employees? That's basically the definition of technological advancement freeing labor to be more productive elsewhere.
The definition of progress. In IT we should be used to this. AI has been telegraphing its arrival for a long time. There should be no surprise in seeing jobs lost in the name of tech advancement; it's been going on since forever. It is literally what computers do.
The problem with AI-generated images is that they're abundant, and anything abundant doesn't have market value.
So if an artist's work earns $100 per piece, an AI system that can generate work of the same quality at 1,000,000 pieces a day won't end up generating $100 million a day.
It will simply raise the bar for artists instead. Which is a good thing for society as a whole, because the baseline will be very good and the work of talented artists will be highly valued.
Adobe can eat the market of mediocre artists if they are the only one with this technology so they can control the output rate and create artificial scarcity. They are not the only one with this technology.
The problem with AI generated images, in this context, is that they can be used to imitate the style of specific artists, meaning any unique talent and vision those artists have no longer has market value. The scenario you describe isn't actually a business model anyone is following, except maybe morons shitting AI porn onto instagram and trying to flog their patreons.
Obviously AI generated art has market value to companies with creative staff, because AI is designed to devalue and commoditize the work of actual artists, such that the work of all artists can be replicated by someone on Mechanical Turk writing prompts for pennies a day (or some equivalent low-wage plebeian drone).
AI has already been used to steal commissions from working artists, and companies are already firing their entire creative teams to lean entirely into AI generation. The better the technology gets, the worse the situation gets for working artists because that's the plan.
I don't see how anyone can have witnessed the glee with which AI techbros danced on the graves of the art industry, ridiculing and harassing artists, calling them gatekeepers and fascists, and telling them their careers were doomed, and think this will benefit artists in any way.
I really want that to change, because I find this technology fascinating, but as of now the well has been poisoned, the bridges burned, war declared, and most artists with actual talent would rather starve in the street than touch anything AI.
Yeah, commission-based digital art seems likely to mostly go away, in the same way commission-based oil portraits mostly went away with the diffusion of photography.
It seems like generative AI will wipe out many design shops and non-IC/exec brokers/managers of creatives (recruiters, talent agents, admins, etc.) at least to the degree the web wiped out newspapers and their non-creative staff.
But it'll also enable creative designers/artists/writers to become their own studios. As Kubrick said, "one [person] writes a novel, one [person] writes a symphony." Generative AI will enable one person (or a very small team) to create a blockbuster movie or AAA videogame.
And we already went through that with print for a while. You've been able to create an Andy Warhol piece as well as his assistants could for a long while now, and no, the market didn't collapse and art didn't disappear. It opened some doors, closed some others, offered new tools.
As an artist (creator) myself, I do not feel threatened. I do not consider the tedium to add anything of value to my work… and hell, it sucks to integrate / generate fill (as an example)
> The problem with AI generated images, in this context, is that they can be used to imitate the style of specific artists, meaning any unique talent and vision those artists have no longer has market value.
AI can generate an image "in the style of Banksy" but so can a thousand other artists who aren't Banksy, and their work product would have no more value than the AI's.
The way this makes you feel is about more than texture and shading:
You might be able to get Stable Diffusion to generate something like that with a sufficiently detailed prompt, but then its uniqueness would come from the creative effort required in devising the prompt. It's not like you can just type "street art in the style of Banksy" and expect the output to make you feel something like that.
>I don't see how anyone can have witnessed the glee with which AI techbros danced... most artists with actual talent would rather starve in the street than touch anything AI.
Perhaps. But what would that change and do you have any choice? 'Techbros' can make do with genAI or even stock images. I'm not so sure about said artists.
>The problem with AI generated images, in this context, is that they can be used to imitate the style of specific artists, meaning any unique talent and vision those artists have no longer has market value
I agree: those who produce the training data, the thing the AIs' work will be based on, should be properly compensated.
The artists that are hurt by this have neither a name nor a distinctive style, they are the ones whose work was derivative itself, who already struggled to get paid for expensive manual labor.
Then you have the artists who matter, whose work is valued because it has their name on it, regardless of what process was used to create it. AI will be just another tool they may choose to use.
The narrative that the only artists affected by AI are mediocre and derivative and thus deserve what they get, is just propaganda. AI was trained to replicate the distinctive style of artists who "have a name" and "matter," and those artists likely work for companies that are training in-house AIs on their own IPs as we speak.
"Struggled to get paid for expensive manual labor?" You seem to have contempt for most working artists, because most struggle to get paid regardless of talent, not because they're incompetent but because art of any kind has always been a difficult market. And yes, it's hard work, which many people fail to appreciate. You're really illustrating my point more than I think you intend to.
I'm not saying they "deserve" what they get, not any more so than the people who found themselves out of work after weaving or sewing machines were invented. That's just technological progress.
Perhaps we have a very different understanding of what "art that matters" is, certainly I'm not thinking of anyone producing art at volume for a corporate IP. In any event, nothing ever stopped such a corporation from reproducing a certain style with cheaper labor, whether it's human or not. If you were, say, an art director working for a corporation, your job was never to perform much of that manual labor. AI isn't replacing you.
Moreover, the emergence of mass manufacturing always comes with a counter-movement attempting to maintain the "human touch" and charging a premium for it. For every Budweiser, there's a thousand craft beers. Neither replaces the other.
Either AI-created graphics arrived too late for the NFT craze, or they arrived in time to give it a second spring.
Being serious though, human art is what it is because it requires effort. AI art doesn't require nearly as much; it's generic and abundant. In short, boring.
There are so many places where you don't need "real" human-touched art but could make do with stylized custom images. Midjourney and friends make it a million times harder to make a living doing small-time commissioned art: stuff like Dungeons and Dragons character and world art, furry commissions and smut, and hobby art in general.
I think the coming popularity of generative AI won't simply raise the bar, but will at first devalue itself, then make AI generations unrecognizable as art. The public will start failing to recognize that art is there at all when no human discretion can be found in it.
There is a sci-fi prediction that humans would eventually be reduced to a thing with an eye and a finger that only presses a button, and machines would do the rest; I take it as an optimistic view, in that it means humans remain relevant, and that human values only come from humans.
> All you had to do is install any of the countless Stable Diffusion UIs and you'd have this tech free forever, some of them even integrate into Photoshop.
But Photoshop's generative fill is trained on Adobe's own stock photos, which removes an entire class of potential future litigation issues.
The more we perpetuate free and open models the less possible it will be to litigate against them. They can't sue us all.
If we buy into this lie that Adobe's models are "ethical", it gives them a moat to make free models illegal, so Adobe becomes the only option.
Artists should be realizing the goal is to stop Adobe becoming the only option; if they do become the only legal way to do this, then expect AutoCAD-level pricing.
> Artists once again, never learning from the past and handing over their future to an abusive company that seeks rent on their ability to earn.
I think this is a little bit unfair and trivialises the problem. I know many artists who have to/had to use CC for their work because their clients or (mostly) employers consider it the industry standard.
I agree with the sentiment, fuck Adobe and decades of their parasitic practices, but the reality is not that black and white.
If you already pay for Adobe CC as many designers do, there's an opportunity cost to learning an entirely new system and workflow. Those are hours that can't be billed. The opportunity cost of switching would make sense if Adobe was charging 5x more for these features, but they're not.
At the end of the day, not everyone will need to "make" generative art. There will be plenty of it on Adobe Stock that clients will be fine with, especially once they see the hourly billing rates for custom generative art that requires multiple prompts to get right.
> All you had to do is install any of the countless Stable Diffusion UIs and you'd have this tech free forever, some of them even integrate into Photoshop.
You mean the ones trained off the artists art without permission, unlike Adobe's offering?
I know a lot of graphic designers who looked at that and hated it because it took stuff without permission.
I don't know a single artist who was happy with that. The AI generation wasn't the issue. It was the training source.
I'm not an artist, and I'm also aware of the many failings of Adobe, but you really missed what the issue was.
The alternative is more likely: it becomes commoditized and many programs offer it, so it would be strange for Adobe to charge a premium for a baseline feature.
>so it would be strange for Adobe to charge a premium for a baseline feature
Alternatives to every baseline feature they currently charge a premium for exist, but because it's the industry standard, everyone has to pay them.
Nothing Photoshop does today is special; in fact, some of the ways it does things are worse than everyone else's, e.g. many filters being single-core constrained and CPU-bound.
You seem to be falling into a fallacy common among technical people: that a product's only value is its technical implementation.
Photoshop is sold to businesses. Its value prop is "you will be able to hire anyone and they will be productive on day one, which will save a lot of money in training."
People buying Photoshop don't care about filters being CPU-bound. They care about turning work around quickly and getting the next paying gig. A CPU-bound filter is statistical noise compared to having to figure out how some different tool works.
> You seem to be falling into a fallacy common among technical people
I work as a creative director, my anger about this is decades of suffering what I consider substandard tools.
> People buying Photoshop don't care about filters being CPU bound. They care about turning work around quickly
We definitely do care about our computing power being used to the fullest of its potential. This is why some of us now do even 2D static work in After Effects: it's faster and uses more of the machine's power than Ps.
Photoshop developer asking: what examples can you give of 2D/static work that performs better in AE over Ps? I'm very curious about your workflow now...
Well I apologize for my presumption. Showing a little of my own frustration as a former engineer, now product person, I suppose.
I'm really curious why you care about using machines to their potential. Is it cost savings from not having to buy a more expensive machine, a philosophical dislike of waste, or does it really come down to hours of non-billable time?
Adobe’s UI is also terrible, but it’s terrible in a way most digital artists are familiar with. Starting from scratch, GIMP, Inkscape, and Adobe have reasonably similar learning curves.
The main advantage Adobe UI has is many people assume they’re stuck learning it at some point. But it’s definitely a waste of time and money if you’re happy as a hobbyist.
Adobe doesn't count hobbyists as a core user segment; why would they? Hobbyists don't need Adobe CC any more than somebody looking to multiply 2 numbers would need Excel 365.
The hobbyist market is already served by Photopea, Canva, Snapseed and many others.
Adobe has figured out how to charge a lot for Photoshop and will likely do the same for major additions to it. You can argue whether this is fair or right, or that there are free alternatives, but it is possible for Adobe, and so they will do it.
So the fundamental problem and “promise” is that a company charges for the products it builds, or that professionals “deserve” to be paying for the tools that enable their craft?
You don’t have to pay for things! The models are just there! The code is just there! The images to train on are just there! The absurdity that you have some rights over some data is the only thing that could possibly cause you to pay money for a thing.
There's no free lunch. There's a reason the first law of thermodynamics is first.
Atoms are just there for grabs too. Yet, you pay for someone organizing them in some order.
At least that's how the world currently works. You can walk for "free" or take the bus for a "fee".
So nothing is actually "free", although the assumption that Adobe took some off-the-shelf models and off-the-shelf pictures and threw them into Photoshop didn't take much effort either.
Creative Cloud is less specialized now, not more. They were able to defend their margins because designers were used to specific workflows. As the interfaces become more general, that advantage will erode. Adobe is working on sophisticated chain of custody for DAM which won’t matter to the bill of CC users, even in enterprise.
The productivity gains are becoming permission-to-play. The money saved will go into hiring different kinds of people which will continue to pressure the software budget. It’s true CC is seen as necessary, but it’s still resented and there are always employees whose access to a subscription is marginal, providing some elasticity.
>I PROMISE you once Adobe has market saturation on this and people have embedded it in their workflows and work without it feels painful they're going to be slapping a $40-$100 (It replaces literal days of work and literal team members, they can charge what they want) a month price tag on it on top of Creative Cloud.
I think $100 a month is a low price for this, if it saves literal man-days of work. The equivalent value added is in the ballpark of $3k per project (assume $100 an hour over three 8-hour days).
Add in the fact that your velocity has increased significantly, and they could probably get away with $5k a month at a "Pro Tier".
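To sanity-check that ballpark (a back-of-the-envelope sketch; the rate, hours, and days saved are the assumptions from the comment above, not real figures):

```python
# Rough value of a feature that saves multiple billable days per project.
# All inputs are illustrative assumptions, not actual Adobe pricing data.
hourly_rate = 100          # assumed billable rate, dollars/hour
hours_per_day = 8
days_saved_per_project = 3

value_per_project = hourly_rate * hours_per_day * days_saved_per_project
print(value_per_project)   # 2400 dollars, i.e. roughly the "3k ballpark"
```

Strictly that comes to $2,400 per project, so "ballpark of 3k" is on the generous side, but the argument holds either way: a subscription priced at $100/month is tiny against the value saved on even one project.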
> All you had to do is install any of the countless Stable Diffusion UIs and you'd have this tech free forever, some of them even integrate into Photoshop.
This is the "anarchist" hacker mindset: fear of "the man". Open source is always going to lag behind the huge, well-funded corporate option, because they have teams of highly paid techs working on being first out of the traps. Open source and corporate don't really compete except in the minds of "anarchist" hackers who don't like capitalism. Yet both worlds can exist, and do. Adobe has produced a mind-blowing product. Open-source versions won't be far behind. Both pathways exist in parallel. Open source won't ever be funding the top-of-the-range stuff.
I watched one dumb cat video and one Friends clip before realising I'd been redirected to a generic home feed after dismissing the log-in wall, and they weren't supposed to be 'generatively filled' clips... /facepalm
I don't know what the economics are behind it, but there has been a ton of vacuous AI-boosting content in the last few months. I guess unlike with crypto you can't simply sell tokens or NFTs, so all these bots can really do is hype up the technology to get clicks and follows for a future scheme.
https://github.com/Mikubill/sd-webui-controlnet/discussions/...
So far it seems that the OSS diffusion models + tooling that we can run locally keep being state of the art. It makes me so happy.
(Unlike say LLMs where GPT-4 is clearly dominating for now.)