Dear HN. I remember how many here were bullish on Blockchain back when it first came out 10 years ago; these days any news about Blockchain or Smart Contracts is rightly viewed with suspicion and downvoted. I propose that the same will happen with Generative AI. I have delved into the space over the last few months: I have built applications leveraging ChatGPT at scale and put in as many ethics guardrails as I could without destroying the applications' purpose. This is my "Moxie Marlinspike writes about Web3" moment. In Web3, the early adopters extracted most of the rewards from later adopters, and by and large the applications of the technology didn't generate much value for society at all. I have come to believe that Generative AI, deployed at scale, is even worse.

My claim is this: nearly all applications of Generative AI APIs, by their very design and value proposition, externalize costs onto society, resulting in a net negative that grows superlinearly the more they are used at scale. The core value proposition, like that of any new tech, leverages the short-term profit motive; here, it is to generate work at scale that passes for real humans doing the work. If ChatGPT generates an essay for you, or a homework assignment, that is fake. You didn't do your homework. You didn't write the essay. You're passing it off as if you did. If MidJourney "painted" your painting and you pass it off as your own, you're lying.

This guy won a photography contest with a non-real photo: https://www.cbsnews.com/amp/news/artificial-intelligence-photo-competition-won-rejected-award-ai-boris-eldagsen-sony-world-photography-awards/ He was honest enough to reveal it and reject the prize. But had he not done that, humans and honest photos would be out of the running, just as if a chess player with a hidden chess engine had entered a tournament. It's cheating, plain and simple. It's bad enough to cheat yourself, but using the API of any generative AI service is an incentive to cheat many people at once, in ways that can destroy our trust in ALL content and interactions online.

I propose a law that people should not be able to distribute works that leverage Generative AI without fully disclosing it. There can be exceptions for small-time operations with low volume. To avoid slippery-slope arguments, I'd mandate a "BuiltWith" manifest all the way down to the compilers you used for code, the graphics editors for images, and the tools you used for sound (see the sketch at the end of this post). Like the ingredient lists the FDA demands on locally produced food in the USA, but for a different reason. Also, having a human in the loop to spot-check accuracy, and be responsible for it, could earn a specific certification. An AI "describing" a product it has never seen is inherently as fake as a photograph of a scene that never happened. There should be explicit penalties for passing off AI-generated work without honest disclosure; the bigger the scale, the bigger the penalty.
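For concreteness, here is a minimal sketch of what such a "BuiltWith" manifest could look like, written as a small Python script that emits it as JSON. To be clear, no such standard exists; every field name here (work, human_in_the_loop, tools, role, generative) is my own invention, purely for illustration.

    # A minimal sketch of the hypothetical "BuiltWith" manifest proposed above.
    # Nothing here is a real standard; every field name is an assumption made
    # up for illustration. The idea: a machine-readable ingredient list that
    # ships alongside a published work, declaring every tool in the chain.

    import json

    # Hypothetical manifest for a blog post whose draft came from an LLM
    # and whose header image came from a diffusion model.
    built_with = {
        "work": "how-to-brew-coffee.html",
        "human_in_the_loop": True,       # a person spot-checked the output
        "tools": [
            {
                "role": "text-draft",
                "tool": "ChatGPT (gpt-4)",   # the generative step being disclosed
                "generative": True,
            },
            {
                "role": "header-image",
                "tool": "MidJourney v5",
                "generative": True,
            },
            {
                "role": "image-retouching",
                "tool": "GIMP 2.10",     # conventional tools get listed too,
                "generative": False,     # "all the way down", so there is no
            },                           # slippery slope about where to stop
            {
                "role": "site-generator",
                "tool": "Hugo 0.111",
                "generative": False,
            },
        ],
    }

    # Emit the manifest so it can be published next to the work itself,
    # the way an ingredient label sits on the package.
    print(json.dumps(built_with, indent=2))

Under this sketch, the certification I mentioned would hinge on the human_in_the_loop flag, and a reader (or their browser) could surface any entry marked generative before deciding how much to trust the work.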
Story Published at: April 24, 2023 at 03:44PM