2023/08/06 18:44:33
Subject: D&D - "On AI-generated art and Bigby Presents: Glory of the Giants"
So, the situation is complicated. WotC have come under fire for quite a few things as of late, from Spelljammer's depiction of the Hadozee, to trying to claim all 3rd party content as their own, to this.
On the one hand, we know WotC drives a lot of their artists way too hard, like any studio. There are reports of artists who have risked serious permanent injury to their hands and wrists trying to meet deadlines.
On the other hand, AI art, in its current form, is not ethical. It uses the work of uncredited artists to train their datasets, and if any money or fame is made from that art, the artists whose work was used unethically get none of that profit or credit.
It's a mess.
2023/08/06 21:19:05
Subject: D&D - "On AI-generated art and Bigby Presents: Glory of the Giants"
LordofHats wrote: AIs aren't persons, first of all. They can't hold copyright by definition, unless that changes, and that probably won't happen anytime soon. Expression gets more muddy, but there's precedent behind the question: 'So what did you do, tell this box to draw a robot and it drew a robot? Yeah, that's not sufficiently expressive on your part. No copyright for you.'
This is less an issue with the law and more with people who don't understand "AI" anthropomorphizing it into a sentient being instead of a software algorithm. Using AI to generate an image is conceptually no different from using 3d graphics software to generate digital models and render them into a final scene. I put my objects in the scene, I set the camera, I tell the software where the light is coming from, and then the software does a bunch of math to generate a picture. That's clearly something that is protected by copyright even though I set things up and then told the software what to create. Nobody gets confused and thinks the software is the legal creator or asks how we can possibly assign IP rights to a machine. AI should end up the same way: the user of the tool is credited as the creator of the work and gets all relevant IP rights.
Automatically Appended Next Post:
drbored wrote: On the other hand, AI art, in its current form, is not ethical. It uses the work of uncredited artists to train their datasets, and if any money or fame is made from that art, the artists whose work was used unethically get none of that profit or credit.
How is that any different from a human artist learning techniques and getting inspiration from the work of other artists without credit or payment? It's perfectly legal for a human artist to mimic the techniques of another artist and none of the actual images in the training dataset go into the finished product.
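One way to see the "techniques, not copies" point is that training compresses many examples into a small set of parameters. Here's a deliberately tiny sketch using ordinary least squares, which has nothing to do with how any real image model works, just to illustrate that a fitted model can consist of a few numbers rather than stored copies of its inputs:

```python
def fit_line(points):
    """Ordinary least squares for y = a*x + b over (x, y) pairs."""
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b  # the whole "model" is just these two parameters

# "Training data": four examples that all follow y = 2x + 1.
training_data = [(0, 1), (1, 3), (2, 5), (3, 7)]
a, b = fit_line(training_data)
# The trained model is two floats; the original points are not stored in it.
print(a, b)
```

Whether the analogy to human learning holds for billion-parameter generative models is of course the contested part of the debate, but the mechanical point stands: what is retained is parameters, not a library of the training images themselves.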
This message was edited 2 times. Last update was at 2023/08/06 21:21:46
ThePaintingOwl wrote: Using AI to generate an image is conceptually no different from using 3d graphics software to generate digital models and render them into a final scene.
I would contend that, depending, it can be radically different.
For example;
I put my objects in the scene, I set the camera, I tell the software where the light is coming from, and then the software does a bunch of math to generate a picture.
The software only comes into play after you've done most of the creative elements of the piece. You set the scene. Defined the composition. Arranged the pieces. All the computer did was render a final image with touchups.
The use of generative AI can be as simple as 'draw me a cat' or as complex as producing reference material that gets so heavily touched up and covered over that it's unrecognizable from the original output. On that sliding scale, the latter isn't much different from other digital art, while the former is basically just a commission where the machine did quite literally everything.
Part of the confusion is that people think copyright protects ideas.
But that's not what copyright does. Copyright, even outside the US, is primarily based on protecting expression. Ideas are too ill-defined, not to mention too numerous; copyright doesn't protect something so abstract. The pitfall the lazy end of the generative AI spectrum will run into under current copyright law is that insufficient input on the part of the user doesn't constitute sufficient personal expression, while the same wouldn't apply at the other end. But this gets into a lot of semantic BS, not to mention lots of silly word games and ill-defined concepts.
So no. The issue isn't that people misunderstand AI. The issue is that the use of AI swings between pushing a button and wanting credit for what comes out (I've seen very little interest in crediting this as anything but the latest cheap shortcut for the lazy), and more nebulous cases where AI was used in the process but didn't solely produce the final product. It's almost impossible, in a general sense, to have a clear conversation on generative AI because most people have strong opinions about certain uses while not considering others, or vary their opinion based on what the process was.
The only court cases so far to render any kind of verdict essentially ruled that AI-generated images couldn't be copyrighted on the grounds that the users were insufficiently involved in their creation to warrant copyright protections. Granted, that's only like two cases I know of that have gone down that rabbit hole, and both were examples of the former.
This message was edited 1 time. Last update was at 2023/08/06 21:46:47
It's going to happen. It can't be stopped. AI is here, and it's not leaving.
Thank you, this one sentence convinced me this whole thing will crash and burn just like nfts.
Your funeral mate. At least, if you're in an affected trade. My older brother is in training to drive lorries. Despite them just introducing self-driving buses up north, and lorries going down predictable motorways for most of their journeys, my father is adamant he won't be replaced. Because he doesn't want to conceptualise the mass unemployment self-driving vehicles are going to drive in those two (and possibly other) industries. Sometimes a desire to avoid technological movement is more down to wanting to deny reality than it is based in realism.
NewTruthNeomaxim wrote:People just accepting "AI" as here, and undeniable, are the reason Capitalism is driving the world off of a kleptocracy cliff. If we just keep accepting it because nothing can be done, and throw our hands up, we deserve whatever we get as a species.
At this point I am proud of humans whenever we draw a line in the sand and don't just accept fate because trying to change it would be too hard.
I will gladly pay a little more if I know that money legitimately goes to creators themselves, who want to earn a fair wage for creating something.
That's a fully legit viewpoint. And you won't be alone. But I suspect it'll be more like historicals or blacksmithing now, aka, something done by beardy hipster looking men selling for five times the price to a small niche market.
LordofHats wrote:
There are areas where AI is ripe to succeed. Interactive media like RPGs, tabletop and digital, is one where it's probably gonna be pretty sick. The same technology could apply to stuff like DnD and Wizards has already softballed and ballooned the idea of AI DMs as a feature for DnD Beyond. Interactive media has a huge potential future with AI. But it also probably won't be something you can just throw out, because everyone will have that ability. Hollywood and the actor's strike are a good example of the opposite. Don't just think about how this disrupts the labor market, think about how it ultimately disrupts the entire notion of industry. If an AI can make an entire movie for me, what do I even need the suits in Disney's office for? They're about as worthless in a world of AI generation gone all the way as anyone else is.
It becomes a game of 'who controls the most talented AI' and 'who controls the rights to the work the AI is trained upon'. We've seen a massive ballooning recently in animation for kids' shows, where they all use the same animation engines (the engine Frozen/Tangled were made with has likewise been licensed out massively). How many games are made with the Unreal Engine? And so on. It'll be about connecting the people who have enough money to license the most developed engines with AI plugged in, to the content it works off (whether actors, art, general IP lore, or whatever).
That's a long sentence, so here's an example. Games Workshop wants to make a 40K film. So they contract with Google to license their new 'Google AI Film Maker', then with 'AI Actors Limited' for a cast of people who sold their images and recorded their movements twenty years beforehand. The chief execs do some market research on what sells - 'Bolterporn. Make bolterporn!' - which then gets passed across to one bloke in their lore/IP department who is told to spend two weeks writing a rough but detailed plot. It gets fed into the computer. The computer spits it out, much more refined, with staging and lines. The output goes to audience testing. They give feedback. The feedback goes into the AI and the film is refined. Repeat until the film is ready for release.
GW isn't losing out here. Neither is the AI company. Or the fifteen people who got their cheques for selling their image twenty years before. But the current actors? The screenwriters? The stuntmen, the make-up artists, the special effects guys, the whole crew who supported traditional film? Well, they're out of a job. We've just got Gav Thorpe sitting in a dingy office in Nottingham.
In comparison, visual arts like images I think are going to be more divided. There's art that people have very fleeting relationships with. Game assets. Book covers. Stuff that you look at once but probably never look at again. It's valued more for its function than its artistic qualities, and AI will probably edge into those spaces, but I don't think companies that 'rely' on AI have a future. They'll be cut out.
I think the issue of 'what if it's cheaper' is kind of a misnomer in this regard. 40k is more expensive than basically every other tabletop game, almost all of which price themselves just below 40k as a way to give themselves a market edge/niche. People still buy $100 space marine boxes or whatever, and it's still the biggest tabletop game generally.
Totally, cost is always one factor amongst many. But it's a very important one, and the one that tends to ultimately drive the market. Costs are something most businesses grapple with, and it'll be their costs which push the shift - their need to cut them - as opposed to the buyer looking for a deal. The cheaper product only gets offered to the market after the business has already sought out AI content to cut its own running costs. The lower price is there to lure people in; they'll happily charge the same if they have market dominance instead. So long as you pay £80 for a model, GW doesn't give a damn whether an AI sculpted it or an artist did. They'll keep charging the same. Unless someone else with HIPS capability gets hold of a comparable sculpting AI, and then the market begins to shift....
This message was edited 1 time. Last update was at 2023/08/06 22:26:01
2023/08/06 22:34:05
Subject: D&D - "On AI-generated art and Bigby Presents: Glory of the Giants"
GW isn't losing out here. Neither is the AI company.
Most of these models, and likely the future ones based on them, are open source.
It still takes resources to build and run them, so not just anyone can do it, even though there are AI clients being made and released on that model. But this is our disconnect.
I'm saying that in the future, any Joe Schmo with a decent computer will be able to render their 40k movie in a matter of hours using public libraries and open-source materials.
At that point, what purpose does GW really serve? We can say they own the IP, but if the choice is between GW's AI made IP, and RandomInternetUser69's AI-made knockoff, is anyone really going to pay GW a fee for what other people are making for free?
Which in itself kind of skips over what I think is a very open question (does anyone really want that product?), but skip ahead. Skip to the point where this stuff is basically just Microsoft Office and automated to be no more difficult to run than any video game or photo editor, probably complete with AI made youtube videos explaining how to install the libraries and tweak the settings.
At this point, GW's position as an IP holder is arguably worthless if all it can do is produce the exact same quality of product at infinite times the price. They're already staring down the 3D printing barrel, wondering when that tech becomes easy and cheap enough to use that it renders their core business model untenable. Even on the copyright infringement front, they'll never be able to control cheap 3D printed knockoff models any more than they'd be able to control cheap AI-generated 40k fan movies.
Betting their entire future on AI reliance is tantamount to a suicide pact with a timer.
This message was edited 2 times. Last update was at 2023/08/06 22:36:02
Betting their entire future on AI reliance is tantamount to a suicide pact with a timer.
Just to address your last comment first - since when was capitalism predicated on long term thinking?!
Most of these models, and likely the future ones based on them, are open source.
And that, I think, will change. It's one thing to get an AI that puts out a garbled university essay based off Wikipedia. It's another to make a publicly accessible one that's capable of interfacing with a custom-coded piece of software like a game engine or a movie studio. And another thing again to do it well. The real titans of AI and industry aren't going to want your average schmuck to be able to use their product for free. They'll want paying for it. And your average Joe's computer can't handle the requirements for a program like that; you'll need to utilise their servers/processing power. That isn't cheap.
No, I'm convinced you'll see a divide between homegrown AIs designed to pretend to be Joe Schmo's girlfriend in a web chat window, or even to sculpt him a 3D artwork - and the stuff that makes real money and takes real computing power. At least for the next forty years. Past that, who knows? But I don't see the stock markets looking five decades into the future when considering whether to invest in something that has real power to cut costs right now (or at least in the imminent future).
This message was edited 1 time. Last update was at 2023/08/06 22:52:54
2023/08/06 23:01:54
Subject: D&D - "On AI-generated art and Bigby Presents: Glory of the Giants"
Ketara wrote: The real titans of AI and industry aren't going to want your average schmuck to be able to use their product for free.
That's what I'm saying.
There's no money in reselling AI-generated content long term. The only person who will profit is whoever owns the AI. At the end of the day, any other parties are middlemen asking for money for basically not doing anything.
They'll probably find as much success as every schmuck who tried to sell people on Web3. They'll make some money. Some of them will get very rich. Then the whole thing will fall apart when someone makes a (probably funny) AI-generated video mocking them for how little they do and asking why they exist at all.
And in the end, they won't survive. Only the owners of the AI will, because they're the only ones providing anything actually valuable. To repeat an example: audiobooks. Several lower-rung studios cut all their voice actors to replace them with AI. Those companies aren't going to make it. There are, like, 20 AI-powered screen readers now. If all you can provide is an AI voice to read a text aloud, I can do that for free. An audiobook studio that can't sell the idea of human talent and build on that will never be able to compete with free.
Anyone wanting AI voiceovers for their novels will skip over them and go straight to the source. I would extrapolate that outward into a lot of areas. If all you can do is resell AI, you're going to get cut out when people start wondering why they pay you when they can just go to the AI.
Ketara wrote: What happens when its qualitatively better than 60% of artists out there? Or 75%? Will you pay five times the price for an inferior product?
If you value art because it's the expression of a human psyche, the unique product of a real person and their talent, intelligence, training, personality, and life experience, then A.I. making a "superior" artistic product is objectively impossible. I know what you're saying, and being a pessimist, I tend to agree with you that most humans will go ahead and consume cheaper A.I. "art." But if it's not the product of a lived human experience and an individual human consciousness, it's just not the same thing as human art. It's apples and oranges, and no matter how delicious an orange may taste, it will never be a better apple than an apple.
Ketara wrote: It might be five years. It might be author's life plus fifty or seventy or whatever. But....AI is coming. People and their professions can delay it, but they can't stop it. Capitalism and technology will have their way, like they always have in the past. The trade guilds died out hundreds of years ago, and they're not coming back.
I see you're an optimist; I wish I was. I think these proto/pseudo A.I. algorithms will wreak economic havoc and displace plenty of jobs for a while, but the era of A.I. dominance will never be fully realized, because I doubt mankind has enough time left to develop it. Climate change will put the kibosh on human civilization long before any of our silicon children are sophisticated enough to inherit the earth from us. The presumption that technological advancement will just keep accelerating faster and faster assumes that we're going to continue living in a world of relatively stable economies, abundant food, and governable nations with well-educated populations. At the rate we're going, that doesn't look all that likely to me.
This message was edited 4 times. Last update was at 2023/08/07 05:37:12
2023/08/07 07:34:57
Subject: Re:D&D - "On AI-generated art and Bigby Presents: Glory of the Giants"
As with all technological advancements, the issue is not its existence, but how we use it.
It's clear its use within capitalism is indeed what Ketara is describing. However, it only works on the assumption that capitalism is unavoidable in the future, which is far from certain, with all the crises ahead.
Once you remove the need for profit above all, a lot of its wrongs just have no purpose, which reduces the incentive for their use.
2023/08/07 07:48:36
Subject: D&D - "On AI-generated art and Bigby Presents: Glory of the Giants"
LordofHats wrote: The use of generative AI can be as simple as 'draw me a cat' or as complex as producing reference material that gets so heavily touched up and covered over that it's unrecognizable from the original output. On that sliding scale, the latter isn't much different from other digital art, while the former is basically just a commission where the machine did quite literally everything.
The same is true of CGI. I can spend 5 seconds to open a downloaded cat model, click the render button, and get an image of the default view created entirely by the software. There are even procedural generation tools where I can click the "draw me a landscape" button to generate the geometry and then hit the render button. It's going to be garbage, but so will just telling an algorithm "draw me a cat" without iterating on the concept and selecting for the specific image you want. And all of my CGI images will be fully protected by copyright.
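For what it's worth, the "draw me a landscape" button really is just an algorithm anyone can sketch. Here's a minimal example of procedural generation using 1-D midpoint displacement; the function and parameter names are made up for illustration, not taken from any real tool:

```python
import random

def landscape(levels=6, roughness=0.5, seed=42):
    """1-D midpoint displacement: start with two flat endpoints, then
    repeatedly insert a randomly displaced midpoint into every segment,
    halving the displacement amplitude each pass. The result is a
    mountain-like height profile from a single button press."""
    rng = random.Random(seed)
    heights = [0.0, 0.0]
    amp = 1.0
    for _ in range(levels):
        out = []
        for left, right in zip(heights, heights[1:]):
            mid = (left + right) / 2 + rng.uniform(-amp, amp)
            out.extend([left, mid])
        out.append(heights[-1])
        heights = out
        amp *= roughness  # finer detail gets smaller bumps
    return heights

profile = landscape()
print(len(profile))  # 2**levels + 1 height samples
```

The "creative" input here is nothing more than a handful of knobs (`levels`, `roughness`, a seed), which is roughly the point being argued: the law already copes with works where the software does most of the pixel-level labour.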
Or let's consider photography. A maximum-effort photo where I spend a ton of time with astronomy tools to figure out exactly where to stand so a solar eclipse lines up exactly where I want it on the scenery, hike 50 miles into the wilderness to get to that spot, and take a once in a lifetime photo is copyrighted. So is a quick phone shot of a billboard out the window of my car, where I just point the camera in the general direction of some text I want to remember and take whatever image I get.
The only court cases so far to render any kind of verdict essentially ruled that AI-generated images couldn't be copyrighted on the grounds that the users were insufficiently involved in their creation to warrant copyright protections.
But that's where the misunderstanding comes in. People think that "AI" is somehow conceptually different from all the other software tools used for creative purposes, that throwing a random splatter of paint at a canvas is "involving the creator" but using an "AI" to generate an image of a paint splatter is telling this anthropomorphic third party entity to make something on your behalf. If you apply the same effort standard to other things there's a lot of stuff that also shouldn't be copyrightable.
This message was edited 1 time. Last update was at 2023/08/07 07:49:00
Artist uses controversial tool to finalise some of their art.
Commissioner had no specific policy against them doing so.
There was bad press.
Now there's a policy.
This is such a non-issue, a minor bit of reputational damage for a company who seem to have declared 2023 the year of reputational damage, because their commissioning guidelines didn't keep up to date with technological reality.
AI art is here to stay. Hell, it's baked into photoshop now, it's sticking around. People can debate the wider ethics of it all they like but it's a genie that isn't going back in the bottle.
It's also somewhat amusing to see people who make their living selling etsy keycharms of their unlicensed Harry Potter art made on a pirated copy of photoshop suddenly developing a deep and profound respect for intellectual property rights.
This message was edited 1 time. Last update was at 2023/08/07 08:03:32
Ketara wrote: And with every input/output, the AI gets better.
I don't think this is really true. "AI" like we're talking about here is kind of a dead end with very little room for improvement. The pattern matching algorithm used doesn't have any of the actual understanding of the material that would be required to get to producing more than short pieces that work ok at first glance, and now that AI content is starting to become more common it's running into serious "garbage in, garbage out" problems. And it has massive scaling problems as the hardware requirements to process anything beyond short clips grow beyond practical limits. ChatGPT can write a semi-convincing page of text, no hardware in existence can apply it to writing an entire novel. At most the current approach is going to replace the jobs creating mediocre filler content, where nobody cares about the quality because it's a disposable time waster that will be forgotten as soon as you scroll past it. Be afraid if you're a clickbait article "writer", don't worry if you're writing serious novels.
(And yes, if you consider a long enough time scale eventually AI will do impressive things and duplicate every human job. But "we'll get there within 100,000 years because virtually anything is possible on that time scale" isn't anything to worry about.)
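On the scaling point, a back-of-the-envelope sketch shows why naive transformer attention gets expensive with length: the attention matrix grows quadratically in sequence length. The model dimensions below are invented for illustration, and real systems use optimisations (sparse or chunked attention, for instance) that reduce this cost, so this is the pessimistic baseline, not a hard limit:

```python
def attention_memory_gb(seq_len, n_heads=16, n_layers=48, bytes_per_val=2):
    """Rough memory to materialise every full attention matrix at once:
    one seq_len x seq_len matrix per head per layer, half precision.
    All model dimensions here are hypothetical round numbers."""
    return seq_len ** 2 * n_heads * n_layers * bytes_per_val / 1e9

# A 10x longer context costs 100x the memory under this naive scheme.
for n in (1_000, 10_000, 100_000):
    print(n, round(attention_memory_gb(n), 1), "GB")
```

The arithmetic makes the quadratic blow-up concrete; whether it dooms long-form generation, as the poster argues, depends on how far the engineering workarounds go.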
Automatically Appended Next Post:
Ketara wrote: Will you pay five times the price for an inferior product?
Of course people will. People already pay orders of magnitude more than the cost of a high quality print to own original paintings. If I'm paying $500 to own the original instead of $5 for a print why do I care if an AI can make a similar print for $1?
Where AI is a threat is the disposable stuff. Nobody cares who made the pictures on a cereal box because kids don't know any better and parents just buy whatever their kids want. Hardly anyone cares who made the illustrations in a D&D book because they exist to give you an idea of how to describe a character/building/whatever to the players, not because you really value the art and will ever look at it again once you're done with the game. And those are the places where it's already a race to the bottom to get the cheapest possible work that will meet the (low) quality standards. And AI isn't really any more of a threat than stock photo sites that will sell you pre-made images for those cases or paying an artist in China $1/hour for commission work.
This message was edited 2 times. Last update was at 2023/08/07 08:13:40
What I'm getting from this (fascinating) thread is that AI is wraithbone and artists are about to become Eldar Bonesingers.
ThePaintingOwl wrote: Hardly anyone cares who made the illustrations in a D&D book because they exist to give you an idea of how to describe a character/building/whatever to the players, not because you really value the art and will ever look at it again once you're done with the game.
Nitpicking here, but that hasn't always been D&D's approach. Time was when they hired specific artists with distinctive and recognisable styles to differentiate each setting (e.g. DiTerlizzi for Planescape, Brom for Dark Sun). A lot of that art can be admired for its own sake.
2023/08/07 08:49:51
Subject: D&D - "On AI-generated art and Bigby Presents: Glory of the Giants"
Zenithfleet wrote: Nitpicking here, but that hasn't always been D&D's approach. Time was when they hired specific artists with distinctive and recognisable styles to differentiate each setting (e.g. DiTerlizzi for Planescape, Brom for Dark Sun). A lot of that art can be admired for its own sake.
True. I'm not as familiar with the earliest days of D&D and it's certainly possible to have game-related art that people love for its own sake, rather than being disposable page filler. But everything I've seen of the current D&D art is entirely forgettable generic fantasy stuff that does its job well enough but doesn't really do anything beyond that.
Automatically Appended Next Post:
Zenithfleet wrote: What I'm getting from this (fascinating) thread is that AI is wraithbone and artists are about to become Eldar Bonesingers.
Pretty much. It's a useful tool, but getting good quality out of it requires using the tool to refine the output over many iterations (plus using other tools for finishing work), not just telling it what to produce. And it will require a decent understanding of how the tool works and how to use it most effectively.
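The "refine the output over many iterations" workflow can be caricatured in a few lines: generate variants, score them against what you want, keep the best, repeat. A toy generate-score-select loop follows; the target string, alphabet, and parameters are all arbitrary, and no real generative model works this simply:

```python
import random

def refine(target, pool=8, seed=0):
    """Hill-climb from a random string toward a target: each iteration
    produces `pool` one-character mutations of the current best candidate
    and keeps whichever variant matches the target in the most positions.
    Returns the number of iterations needed to converge."""
    rng = random.Random(seed)
    alphabet = "abcdefghijklmnopqrstuvwxyz "

    def score(s):
        return sum(a == b for a, b in zip(s, target))

    best = "".join(rng.choice(alphabet) for _ in target)
    steps = 0
    while best != target:
        candidates = [best]
        for _ in range(pool):
            i = rng.randrange(len(target))
            candidates.append(best[:i] + rng.choice(alphabet) + best[i + 1:])
        best = max(candidates, key=score)  # select, don't just generate
        steps += 1
    return steps

print("converged after", refine("bonesinger"), "iterations")
```

The point of the caricature is the shape of the workflow: the quality comes from the scoring and selection the user applies across many passes, not from any single press of the generate button.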
This message was edited 1 time. Last update was at 2023/08/07 08:54:13
Zenithfleet wrote: What I'm getting from this (fascinating) thread is that AI is wraithbone and artists are about to become Eldar Bonesingers.
ThePaintingOwl wrote: Hardly anyone cares who made the illustrations in a D&D book because they exist to give you an idea of how to describe a character/building/whatever to the players, not because you really value the art and will ever look at it again once you're done with the game.
Nitpicking here, but that hasn't always been D&D's approach. Time was when they hired specific artists with distinctive and recognisable styles to differentiate each setting (e.g. DiTerlizzi for Planescape, Brom for Dark Sun). A lot of that art can be admired for its own sake.
The art itself was flawed as well, not something that should have been handed in. Likely it slipped through because the artist had been working with them for nearly 10 years, but it should have been sent back.
Also, their in-house artist doesn't have time for a full book, so she seems to do more concept work for some of them, I believe.
Also, what? A lot of the modern D&D art is beautiful and distinct to the setting it depicts.
There is a lot of style, and the different artists do express themselves within it.
This message was edited 1 time. Last update was at 2023/08/07 08:59:04
2023/08/07 09:48:19
Subject: D&D - "On AI-generated art and Bigby Presents: Glory of the Giants"
The trouble with capitalism is that it is an economic system that people tie to values.
For example, most people believe that rich people somehow earned their money. This has been proven false many times over, and yet if you're rich, most people don't think of you as lazy.
Billions of dollars in our society mean that you have thousands of times more value than a poor person, politically and socially; most people associate having riches with being smart, etc.
None of this has ever been true; no one is worth thousands of times more than any other person. Superman doesn't exist. And yet people still believe the rich deserve more simply because they have more.
You can set up a model of capitalism. It's been done many times, and it always plays out the same: a few end up with much and the masses end up with little or nothing.
As an economic system it kinda works, but as a value system it is terrible, it promotes behavior that is detrimental to society.
If people find out what rich people already know, i.e. that they (rich people) don't do anything special, they will start to seriously ask why it is that they themselves have so little.
When you can tell a computer to do almost any task and it does it for you, what have you done that's valuable? Well, that's what rich people do every day: they tell someone else to do something for them. They tell someone with expertise in finance to handle their money, they tell someone who is good at gardening to take care of their lawn, they tell their cook what they want to eat.
They provide nothing and no value to society; at some point in the past they may have, but they no longer do.
The only thing AI will do is open people's eyes to the obvious. If a machine is doing all the work, then why do you have so little? It's a great question, and I wonder how this will all play out over the next 20 years.
As AI advances, will they try to hide it? Will they hide behind silly new copyright laws? Will society become more equitable, since we all help train the AIs? Will the rich try to reprioritise religion (if someone is wealthy, they were given that wealth and power by god)?
So many options for society.
This message was edited 2 times. Last update was at 2023/08/07 09:49:44
2023/08/07 09:53:13
Subject: D&D - "On AI-generated art and Bigby Presents: Glory of the Giants"
Ketara wrote: Your funeral mate. At least, if you're in an affected trade. My older brother is in training to drive lorries. Despite them just introducing self-driving buses up north, and lorries going down predictable motorways for most of their journeys, my father is adamant he won't be replaced. Because he doesn't want to conceptualise the mass unemployment self-driving vehicles are going to drive in those two (and possibly other) industries. Sometimes a desire to avoid technological movement is more down to wanting to deny reality than it is based in realism.
You and I must live in different worlds where self-driving vehicles aren't complete rubbish. Come back to me when Tesla can make a self-driving car that doesn't use the Top Gear Highway Code points system for identifying pedestrians.
This message was edited 1 time. Last update was at 2023/08/07 09:53:48
2023/08/07 10:12:42
Subject: D&D - "On AI-generated art and Bigby Presents: Glory of the Giants"
Talking Banana wrote:
If you value art because it's the expression of a human psyche, the unique product of a real person and their talent, intelligence, training, personality, and life experience, then A.I. making a "superior" artistic product is objectively impossible. I know what you're saying, and being a pessimist, I tend to agree with you that most humans will go ahead and consume cheaper A.I. "art." But if it's not the product of a lived human experience and an individual human consciousness, it's just not the same thing as human art. It's apples and oranges, and no matter how delicious an orange may taste, it will never be a better apple than an apple.
Very much so. I'm talking more about what could be called 'professional' art. Stuff made to commission. Whether it's a picture of a robin in a sweater to go on a Christmas card, or an orc in a fantasy rulebook, it's the more commercial-grade artistic activity/expression AI threatens. Tracey Emin draping used condoms around her bedroom or 'Eggman' graffitiing London are pretty safe. The ones who like to sell their drawn or sculpted art will be the ones in trouble, because even if the rest of the world appreciates that kind of thing as you do above, it rarely pays the bills as it is.
Sarouan wrote: As with all technological advancements, the issue is not its existence, but how we use it.
It's clear its use within capitalism is indeed what Ketara is describing. However, it only works on the assumption that capitalism is unavoidable in the future. Which is far from certain, with all the crises ahead.
This is very true. It may be that in some idealistic utopian future society, all these labour-saving devices actually benefit mankind as a whole. So far though, what we tend to see is that 90% of the savings flow in one direction (to the rich), and those who keep their jobs get additional imaginary work shovelled on them to force them back to the 9-5. It's sad, because the potential of things like this is overshadowed by the economic harm they inflict. Modern computing should have liberated vast reams of people from drudgery, but nothing has changed except that there are fewer jobs for the white-collar worker, and those which survived are devalued and more removed than ever from valuable output.
ThePaintingOwl wrote: "AI" like we're talking about here is kind of a dead end with very little room for improvement. The pattern matching algorithm used doesn't have any of the actual understanding of the material that would be required to get to producing more than short pieces that work ok at first glance, and now that AI content is starting to become more common it's running into serious "garbage in, garbage out" problems. And it has massive scaling problems as the hardware requirements to process anything beyond short clips grow beyond practical limits. ChatGPT can write a semi-convincing page of text, no hardware in existence can apply it to writing an entire novel. At most the current approach is going to replace the jobs creating mediocre filler content, where nobody cares about the quality because it's a disposable time waster that will be forgotten as soon as you scroll past it. Be afraid if you're a clickbait article "writer", don't worry if you're writing serious novels.
Producing predictable and consistent content to order is the basis for a huge number of industries. We don't need AI to be capable of inventing a new rocketship to Mars for it to continually improve at what it does and disrupt large sectors of the economy. Remember, it's not just content input that matters, but also coding tweaks and human feedback. Every time the software is refined, and every time a human party tells the AI 'stuff like this is wrong/not good enough, and this is why', the returns get better.
Zenithfleet wrote:What I'm getting from this (fascinating) thread is that AI is wraithbone and artists are about to become Eldar Bonesingers.
This, funnily enough, is an accurate way of putting it! Those jobs which produce consistent and standardised output through a computer interface (or which can be mechanised to be done from one) will be largely replaced by AI. But perhaps 10-20% as many jobs will be created for 'AI experts' in this or that. AI writing, AI drawing, AI movie making. People who will talk to the AI, locate and input suitable datasets, refine the output, and so on. It's they who will be the metaphorical 'bonesingers', sculpting and filtering and concentrating what the AI produces.
Who will our AI Bonesingers work for? It'll probably vary. You'll get large companies who sponsor their own custom in-house version (think Bethesda's Creation Engine or Adobe and their graphic design suite). You'll have hardcore computing companies making large multi-purpose AIs which they license out and act as consultants for. And you'll probably have some small indie-made open-source AIs for home/small business users which aren't as big or powerful, but can still do smaller jobs well enough.
The people who will lose out will be the people who previously commanded a salary/income off the skills these AIs can emulate. Professional writers, professional artists, professional music writers, and so on. A small proportion will survive doing what they did, another small proportion will survive by pivoting to be the new 'AI Bonesingers' (who better to judge artistic output than an artist?), but the vast majority will lose their income and have to look elsewhere for their daily crust. Something a lot of artists have to do anyway, if we're honest. There have always been a hundred wannabes/would-have-beens for every successful one. AI will just thin the herd even more.
This message was edited 2 times. Last update was at 2023/08/07 10:13:54
2023/08/07 10:13:12
Subject: D&D - "On AI-generated art and Bigby Presents: Glory of the Giants"
I personally don't think that the big companies like Wizards or GW (within their markets of course; they're not very big companies on a broader scale) will be the ones seriously undercut by AI in the foreseeable future, because they have huge network value. Why do people get DnD? There are tons of games on the market. Cheaper ones and, depending on your standards, better ones. But DnD offers something much more special. It has a huge player base. Wherever you are, it's much easier to find other players than for, for instance, Dungeon Destroyers or whatever. So, that's a vital part of the business. It's the same with wargames. There are tons of cheaper games with cheaper miniatures that in the eyes of many look better too. But 40k, for instance, has a huge player base whereas other games don't, and that's not something I expect to change soon.
AI art, however, is a fascinating topic, and I think there will certainly be improvements. The way the current AI works, though, isn't something that lends itself well to longer works, because it lacks a fundamental understanding of what it's doing. That means it can't, for instance, create anything larger in scope, or anything that doesn't already exist in some form. I also don't see that improving much, because it's an issue that can't be solved with the current approach.
Even if you train an AI on your own art, how will you know what art it was trained on during its development, and whether those artists gave permission or were compensated?
By starting with a blank slate and only training it on your art.
You can't start with a blank slate. That algorithm the AI uses to generate art was developed by training it on other people's art. There's no separating prior development from questions of fair use of the final product.
Unlike models like DALL-E, Stable Diffusion makes its source code available, along with the model (pretrained weights)
but without the trained model applied to it, and then you train it on your data only. That's why I wrote that it wouldn't be good enough, as the data set would be too limited to give good results, but you would end up with an AI model that's based on stuff you own the rights to and nothing else. That knowledge of the underlying algorithm is also used by people to investigate how those AI models copy work while trying to cover their tracks, and has helped in developing "glazing" algorithms that disrupt AI models from "learning" from art that gets published online: https://glaze.cs.uchicago.edu/what-is-glaze.html
Glaze is a system designed to protect human artists by disrupting style mimicry. At a high level, Glaze works by understanding the AI models that are training on human art, and using machine learning algorithms, computing a set of minimal changes to artworks, such that it appears unchanged to human eyes, but appears to AI models like a dramatically different art style. For example, human eyes might find a glazed charcoal portrait with a realism style to be unchanged, but an AI model might see the glazed version as a modern abstract style, a la Jackson Pollock. So when someone then prompts the model to generate art mimicking the charcoal artist, they will get something quite different from what they expected.
But you ask, why does this work? Why can't someone just get rid of Glaze's effects by 1) taking a screenshot/photo of the art, 2) cropping the art, 3) filtering for noise/artifacts, 4) reformat/resize/resample the image, 5) compress, 6) smooth out the pixels, 7) add noise to break the pattern? None of those things break Glaze, because it is not a watermark or hidden message (steganography), and it is not brittle. Instead, think of Glaze like a new dimension of the art, one that AI models see but humans do not (like UV light or ultrasonic frequencies), except the dimension itself is hard to locate/compute/reverse engineer. Unless an attack knows exactly the dimension Glaze operates on (it changes and is different on each art piece), it will find it difficult to disrupt Glaze's effects. Read on for more details of how Glaze works and samples of Glazed artwork.
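The idea the Glaze description sketches (small pixel changes that humans barely notice but that swing a model's internal representation) can be illustrated with a toy example. This is not Glaze's actual algorithm; the "style feature" here is a made-up linear projection, and the perturbation is a simple sign-aligned nudge, purely to show the imperceptible-pixel-change vs. large-feature-shift gap:

```python
import numpy as np

rng = np.random.default_rng(0)

image = rng.random((64, 64))                  # stand-in for a grayscale artwork, pixels in [0, 1)
style_direction = rng.standard_normal((64, 64))
style_direction /= np.linalg.norm(style_direction)  # unit-norm hypothetical "style" axis

def style_feature(img):
    # Hypothetical 1-D "style" score: projection of the image onto one feature direction.
    return float(np.sum(img * style_direction))

# Craft a perturbation aligned with the feature direction, capped at a tiny
# per-pixel budget (1% of the pixel range) so it stays visually negligible.
epsilon = 0.01
perturbation = epsilon * np.sign(style_direction)
glazed = np.clip(image + perturbation, 0.0, 1.0)

pixel_change = float(np.max(np.abs(glazed - image)))           # at most epsilon per pixel
feature_shift = style_feature(glazed) - style_feature(image)   # accumulates over all pixels

print(f"max pixel change: {pixel_change:.4f}, style-feature shift: {feature_shift:.3f}")
```

The per-pixel change never exceeds 1%, yet the projection shifts by many multiples of that, because thousands of tiny aligned nudges add up along the chosen direction. Real systems like Glaze have to find such directions against actual (and changing) feature extractors, which is far harder than this toy.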
Besides Stable Diffusion, there are other open-source versions that could be used. You need to train them on your own data and/or avoid pretrained versions where you don't know what they were fed, in order to get a clean version.
There have been several attempts to create open-source implementations of DALL-E. Released in 2022 on Hugging Face's Spaces platform, Craiyon (formerly DALL-E Mini until a name change was requested by OpenAI in June 2022) is an AI model based on the original DALL-E that was trained on unfiltered data from the Internet. It attracted substantial media attention in mid-2022 after its release, due to its capacity for producing humorous imagery.
The issue is that you'd really have to take a clean-room design approach to make sure nobody else's copyrights got infringed, and the result most probably wouldn't be as versatile as models that are trained on everything they can find. Which, funnily enough, is itself causing a few issues: AI models are now accidentally being trained on random art that was made by other AI models (without knowing it), creating a sort of inbreeding in the trained model.
2023/08/07 20:49:05
Subject: D&D - "On AI-generated art and Bigby Presents: Glory of the Giants"
You're focusing on the wrong components that define expression.
Put it another way.
If I go to reddit now and post a commission for an oil painting of a cat, do I own the copyright when RedditUser71 posts an oil painting of a cat?
The answer is I don't, and that's the legal hole some AI uses are falling into. If someone had the copyright, it would hypothetically be the AI as the producer of the image. Except legally, AI isn't a person, and only persons can own copyright.
So the end result is that (depending) an AI image isn't currently copyrightable, due to a confluence of those two factors.
But that's where the misunderstanding comes in.
Whose?
There's the misunderstanding and the talking-past-each-other element about AI uses. There's also misunderstanding and talking past each other about what copyright actually is and what it protects. People have had the ability to autogenerate images for a long time, and those have never been copyright protected, well before things like Midjourney. Most of those services even had EULAs and TOS that explicitly stated as much.
People think that "AI" is somehow conceptually different from all the other software tools used for creative purposes,
That's because it is. The argument that it's not is just semantic bs and word games. No digital software prior to generative AI was capable of completely automating high resolution and quality images like this, or impacting the market in such a way that it can completely redefine whole industries.
If you apply the same effort standard to other things there's a lot of stuff that also shouldn't be copyrightable.
There are in fact a lot of things people think are copyrightable but aren't.
The people who will lose out will be the people who previously commanded a salary/income off the skills these AIs can emulate. Professional writers, professional artists, professional music writers, and so on. A small proportion will survive doing what they did, another small proportion will survive by pivoting to be the new 'AI Bonesingers' (who better to judge artistic output than an artist?), but the vast majority will lose their income and have to look elsewhere for their daily crust. Something a lot of artists have to do anyway, if we're honest. There have always been a hundred wannabes/would-have-beens for every successful one. AI will just thin the herd even more.
Worth pointing out the synthesizer hypothetically rendered orchestras irrelevant, but there are probably more professional cellists alive today than at any prior point in history. More professional painters too, despite the advent of the camera.
My suspicion is that the 'doom of all art' is a knee-jerk overreaction that probably won't pan out. To say nothing of how I've been watching writing AIs, and they're pretty god-awful even before you notice the training data being used for them is fanfiction sites. The lowest rungs are probably in trouble. If you're a writer for a rag like Buzzfeed... well, you were basically a bot anyway. Journalists were already being screwed, though there's no real benefit in that for anyone; just the reality that people don't want to pay for quality journalism, so the market has narrowed to barely anyone.
But there's still probably going to be a divide between human-made and AI-made, if only in the sense that lots of people have zero interest in AI-made material. Writing is probably the biggest example, honestly. I've seen almost no interest in reading AI books. People buy them by accident, and most of the interest is from people who think they can get rich off of it or think copy/paste is the skillset of a 'writer.' Somehow I just doubt the long-term success of something people are going to spend hours on, but that was produced by someone too lazy to actually write anything themselves (and who will absolutely be too lazy to edit or check it, something I have a few funny stories about).
IMO, video games and books are different worlds. AI will open huge doors in interactive media. In older media forms, I think it's going to be signal-noise more than anything. Even visual arts, for all the disruption, haven't become a wasteland of out-of-work freelance artists because at the end of the day, fascination with AI hasn't necessarily translated into waning interest in human creativity. Not even in the professional world, given Wizards doesn't want commissioned artists handing it something any schmo with a Deviant Art account could have given them.
If you're a writer for a rag like Buzzfeed... Well, you were basically a bot anyway.
Let me tell you a story about someone who had AI write their Harry Potter fanfic.
The AI couldn't keep track of whether Harry was gay, straight, trans, white, black, or Asian. Hilariously, it fluidly moved between basically every race and orientation, since it was trained on AO3, and gender/race flips are so frequent in fanfiction (and current generative AI is so incapable of true thought) that the system couldn't keep it straight even within the same prompt.
And then there's the part where it came out oddly smutty even though it wasn't prompted for that, since a huge section of the fanfic community is smut. XD The AI actually employed Omegaverse tropes without prompting. Apparently, Omegaverse fics are sufficiently weighty in the dataset that it seeped into generic prompts!
This message was edited 4 times. Last update was at 2023/08/07 21:09:48
> It's perfectly legal for a human artist to mimic the techniques of another artist and none of the actual images in the training dataset go into the finished product.
It is, until someone throws a lawsuit at you. As a small business, you have better things to do than hire a lawyer whose area of work is AI art, and I'm sure that's a small field.
I do notice AI art popping up on YouTube videos, probably because much of the content already there is IP-murky, anyway.
Conversely, suppose you're a photographer or artist, and some stock image company or multimedia company adds to their boilerplate contract that they may use your images with AI. Do you walk away? Maybe you will -- but then someone else will take the photo or draw the picture instead.
Ketara wrote: Producing predictable and consistent content to order is the basis for a huge number of industries.
But do we care about the loss of those industries? I'm sure there's a huge market for clickbait "writers" that take an AskReddit thread and turn it into 16 OF THE SEXIEST WAYS PEOPLE HAVE EVER SEXED YOU WONT BELIEVE WHAT HAPPENED TO #13 with an ad page between every single sentence, but do we really care if that kind of garbage gets automated away by AI?
Every time the software is refined, and every time a human party tells the AI 'stuff like this is wrong/not good enough, and this is why', the returns get better.
We can't do this, which is what I meant about the current "AI" approach being a dead end. You can't tell the algorithm "this is why", because the algorithm has no understanding; it merely does the equivalent of a Google search on the space of possible things and filters for elements that match the search term. This is why "AI" is semi-decent at providing rough sketches that look ok at first glance but completely fails at writing code, constantly making errors that even CS 101 students would find obvious. It knows that in the training set "IF" and "FALSE" are often found together; it has no idea that IF(FALSE) is nonsensical while IF(X==FALSE) is a reasonable thing to do. So there's a limit to how much refinement you can do, and the current approach is never going to get to true human-level creativity or understanding.
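The IF(FALSE) point can be made concrete with a contrived pair of functions (invented for illustration, not real model output): one where the tokens look statistically plausible but the condition is dead, and one where the same clunky `== False` comparison is at least wired to a real variable.

```python
# Failure mode described above: "if" and "False" co-occur in training data,
# so a pattern-matcher may emit this, but the branch can never execute.
def keep_non_negative_plausible(values):
    result = []
    for v in values:
        if False:            # dead branch: condition never holds
            result.append(v)
    return result            # always empty, whatever the input

# Same clunky style, but the condition actually tests something.
def keep_non_negative_correct(values):
    result = []
    for v in values:
        if (v < 0) == False:  # keep values that are not negative
            result.append(v)
    return result

print(keep_non_negative_plausible([1, -2, 3]))  # []
print(keep_non_negative_correct([1, -2, 3]))    # [1, 3]
```

Both functions are syntactically fine and token-by-token "plausible", which is exactly why this class of bug survives a statistical model's notion of correctness while failing any human reader who traces the logic.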
This message was edited 1 time. Last update was at 2023/08/08 00:23:44
ced1106 wrote: > It's perfectly legal for a human artist to mimic the techniques of another artist and none of the actual images in the training dataset go into the finished product.
It is, until someone throws a lawsuit at you. As a small business, you have better things to do than hire a lawyer whose area of work is AI art, and I'm sure that's a small field.
I do notice AI art popping up on YouTube videos, probably because much of the content already there is IP-murky, anyway.
Conversely, suppose you're a photographer or artist, and some stock image company or multimedia company adds to their boilerplate contract that they may use your images with AI. Do you walk away? Maybe you will -- but then someone else will take the photo or draw the picture instead.
There's an entire body of law that is basically all about where copyright starts and where fair use is (worth noting, a lot of generative AI is likely to fail fair use tests in courts, which have already had suits filed). There was a big case just this year concerning a magazine cover or some such and the question of what qualifies as a transformative work for copyright purposes.
Turns out putting a blue filter over a previously copyrighted image isn't enough, though I would have said that was obvious. Some companies just like losing million-dollar lawsuits over their own stupidity.
LordofHats wrote: If I go to reddit now and post a commission for an oil painting of a cat, do I own the copyright when RedditUser71 posts an oil painting of a cat?
You likely would, as IP ownership for commission work normally goes to the person who paid for it to be done and not to the creator.
Aside from that, this is what I mean about anthropomorphizing the AI. The AI is not a person doing a task on behalf of someone requesting it; it's no different from using a procedural cat generator in Photoshop to make a cat image. A human creates an image using software tools, and that human has all IP rights to it (unless contracts state otherwise).
That's because it is. The argument that it's not is just semantic bs and word games. No digital software prior to generative AI was capable of completely automating high resolution and quality images like this, or impacting the market in such a way that it can completely redefine whole industries.
Quality and market impacts are irrelevant to IP questions. AI certainly changes the situation outside of IP law but that isn't what we were talking about.
LordofHats wrote: I'm saying that in the future, any Joe Schmo with a decent computer will be able to render their 40k movie in a matter of hours using public libraries and open-source materials.
LOLno. It will never happen. To put things in perspective: today, to make an HD 3D image with good quality (especially involving fur/hair/reflections), you need such insane amounts of computing time that a lot of 3D artists are using rendering farms, free or commercial, despite having top-end computers. For a still image. Home computers will never compete with the gear found in professional companies, no matter whether used by experts or good AI, which will only make the difference even more drastic compared to amateur user/home AI output. You might say this 'amateur movie' will be 'good enough', and for some people that might well be true, but for most, after seeing professional work in cinema, this generated movie will look about as good as the efforts of Ethiopian filmmakers in the 70s look to you now. How many movies of that caliber did you watch recently, again?
LordofHats wrote: And in the end, they won't survive. Only the owners of the AI will, because they're the only ones providing anything actually valuable. To repeat an example: audiobooks. Several lower-rung studios cut all their voice actors to replace them with AI. Those companies aren't going to make it. There are like 20 AI-powered screen readers now. If all you can provide is an AI voice to read a text aloud, I can do that for free. An audiobook studio that can't sell the idea of human talent, and build on that, will never be able to compete with free.
Anyone wanting AI voiceovers for their novels will skip over them and go straight to the source. I would extrapolate that outward into a lot of areas. If all you can do is resell AI, you're going to get cut out when people start wondering why they pay you when they can just go to the AI.
On what planet will the free/trial version of an AI reader be better than the paid, commercial version this studio will be using, with the work done by a sound engineer/director who will get far better output, even without good tools, than your random user?
GIMP is free. Compare art made in GIMP by amateurs vs professionals using the latest [insert big commercial brand] and you will get a clue about the vast gulf that will always separate the two. Hell, people still pay professionals to paint their minis despite craft paints and school brushes being nearly free; I wonder why, if price is the only determining factor?
ThePaintingOwl wrote: (And yes, if you consider a long enough time scale eventually AI will do impressive things and duplicate every human job. But "we'll get there within 100,000 years because virtually anything is possible on that time scale" isn't anything to worry about.)
Except 90% of the jobs people are doing right now are easily automated, repeated tasks that require little creativity, aka stuff perfect for AI. And you don't need 100,000 years to make them go away. More like 10 years, if I were to bet. First example: ask horses how that newfangled 'engined carriage' stuff went. First, people laughed at it too. Then it got a big boost from wartime spending. Then increased popularity led to infrastructure devoted to it, making the transition easier. And then everything came crashing down all at once, in just 2-3 decades, and 99% of the horses suddenly became surplus and their next job offer involved salami...
Gert wrote: You and I must live in different worlds where self-driving vehicles aren't complete rubbish. Come back to me when Tesla can make a self-driving car that doesn't use the Top Gear Highway Code points system for identifying pedestrians.
Tesla is trash. Their assisted driving isn't even in the top 10. Look at what Baidu is doing right now and say that again with a straight face; they are already making cars without controls working in a robotaxi network, and their tech is only going to get better.
2023/08/08 00:40:34
Subject: D&D - "On AI-generated art and Bigby Presents: Glory of the Giants"
Irbis wrote: Except 90% of the jobs people are doing right now are easily automated, repeated tasks that require little creativity, aka stuff perfect for AI. And you don't need 100,000 years to make them go away. More like 10 years, if I were to bet. First example: ask horses how that newfangled 'engined carriage' stuff went. First, people laughed at it too. Then it got a big boost from wartime spending. Then increased popularity led to infrastructure devoted to it, making the transition easier. And then everything came crashing down all at once, in just 2-3 decades, and 99% of the horses suddenly became surplus and their next job offer involved salami...
I don't think you understand how much of a dead end the current approach to "AI" is.
ThePaintingOwl wrote: You likely would, as IP ownership for commission work normally goes to the person who paid for it to be done and not to the creator.
First off, IP != copyright. Intellectual property is a broader category, additionally encompassing patents and trademarks. Copyright explicitly applies to the creator of a copyrightable item. It doesn't transfer to anyone by mere virtue of being commissioned. The person who makes the image owns the copyright, and copyright is (hypothetically) automatically conferred the moment the image is created.
I certainly can't slap my name on it and say it's my work created by my hand (even James Patterson doesn't go that far, and he could probably get away with it). That's called plagiarism.
Aside from that this is what I mean about anthropomorphizing the AI.
I'm not. You keep saying that but you're kind of talking past me.
A human creates an image using software tools and that human has all IP rights to it
Correct, but underlying that notion is the idea that the human created the image. Copyright is essentially built on the notion of work product. Emphasis on work. None of these laws kick in until an expression has been created. Simple ideas don't qualify.
That's the issue.
If your sole contribution to a piece is the idea of it, you have no IP rights. Ideas have no protection under the law. The idea has to be given an expression. If a machine has actually created the expression, you end up in a legal null zone, where different aspects of IP law result in no one owning a copyright to the result.
You can't patent an idea. You have to have a practical invention and a way to execute it before the patent office will give you a patent. You can't copyright the idea for a book either. You have to actually have the book. Underlying these protections is the notion that their creation was your labor, which is why I can copyright a 5,000-word short story I wrote, but not a 5,000-word epic poem I asked someone else to write.
Quality and market impacts are irrelevant to IP questions.
That's delusional, to be honest.
The entire notion of IP rights, from trademarks to copyrights, is built on the notion that people have a right to the fruits of intellectual labor, including the right to sell it and control its distribution. The real world is messier, but it's silly to think these issues are magically disconnected.
AI certainly changes the situation outside of IP law but that isn't what we were talking about.
That is what I'm talking about. If you're talking past me, that happens, but I point out again that your understanding of copyright law is even looser than mine.
Irbis wrote: GIMP is free. Compare art made on GIMP by amateurs vs professionals using latest [insert big commercial brand] and you will get a clue about vast gulf that will always separate the two. Hell, people still pay professionals to do their minis despite craft paints and school brushes being nearly free, I wonder why if price is the only determining factor?
Most of the examples you give are driven by time and cost factors as much as quality factors. Generative AI will only get faster with time and it's already inexpensive for the user.
Except 90% of the jobs people are doing right now are easily automated, repeated tasks that require little creativity, aka stuff perfect for AI.
The biggest obstacle to a lot of these things isn't even that automating them is hard.
Walmart could easily buy a robot to stock the shelves in its stores and it would probably do the job better than a human employee.
The obstacle is cost and function. Those stores aren't built to be traversed by any sort of affordable robot (for now), and it's not like Walmart stockers are paid well enough to warrant the upfront cost of replacing them with automation. Realistically, redesigning most stores would probably make automating most of the commercial fronts of the economy trivial, but no one wants to pay the upfront cost of leading the charge on that transition.
This message was edited 14 times. Last update was at 2023/08/08 01:26:32